00:00:00.000 Started by upstream project "autotest-per-patch" build number 126251 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.119 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.149 Fetching changes from the remote Git repository 00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.242 > git --version # 'git version 2.39.2' 00:00:00.242 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.272 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.973 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.985 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.997 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.997 > git config core.sparsecheckout # timeout=10 00:00:04.008 > git read-tree -mu HEAD # timeout=10 00:00:04.026 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.049 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.049 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.140 [Pipeline] Start of Pipeline 00:00:04.155 [Pipeline] library 00:00:04.157 Loading library shm_lib@master 00:00:04.157 Library shm_lib@master is cached. Copying from home. 00:00:04.170 [Pipeline] node 00:00:04.177 Running on VM-host-WFP7 in /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:04.178 [Pipeline] { 00:00:04.187 [Pipeline] catchError 00:00:04.188 [Pipeline] { 00:00:04.198 [Pipeline] wrap 00:00:04.207 [Pipeline] { 00:00:04.215 [Pipeline] stage 00:00:04.217 [Pipeline] { (Prologue) 00:00:04.237 [Pipeline] echo 00:00:04.238 Node: VM-host-WFP7 00:00:04.244 [Pipeline] cleanWs 00:00:04.252 [WS-CLEANUP] Deleting project workspace... 00:00:04.252 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.258 [WS-CLEANUP] done 00:00:04.430 [Pipeline] setCustomBuildProperty 00:00:04.510 [Pipeline] httpRequest 00:00:04.533 [Pipeline] echo 00:00:04.534 Sorcerer 10.211.164.101 is alive 00:00:04.541 [Pipeline] httpRequest 00:00:04.544 HttpMethod: GET 00:00:04.544 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.545 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.546 Response Code: HTTP/1.1 200 OK 00:00:04.547 Success: Status code 200 is in the accepted range: 200,404 00:00:04.547 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.282 [Pipeline] sh 00:00:05.565 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.577 [Pipeline] httpRequest 00:00:05.598 [Pipeline] echo 00:00:05.599 Sorcerer 10.211.164.101 is alive 00:00:05.608 [Pipeline] httpRequest 00:00:05.612 HttpMethod: GET 00:00:05.613 URL: http://10.211.164.101/packages/spdk_0663932f504f7e873432b6fb363ab180df70f8a0.tar.gz 00:00:05.613 Sending request to url: http://10.211.164.101/packages/spdk_0663932f504f7e873432b6fb363ab180df70f8a0.tar.gz 00:00:05.626 Response Code: HTTP/1.1 200 OK 00:00:05.626 Success: Status code 200 is in the accepted range: 200,404 00:00:05.627 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/spdk_0663932f504f7e873432b6fb363ab180df70f8a0.tar.gz 00:00:52.922 [Pipeline] sh 00:00:53.245 + tar --no-same-owner -xf spdk_0663932f504f7e873432b6fb363ab180df70f8a0.tar.gz 00:00:55.841 [Pipeline] sh 00:00:56.122 + git -C spdk log --oneline -n5 00:00:56.123 0663932f5 util: add spdk_net_getaddr 00:00:56.123 9da437b46 util: move module/sock/sock_kernel.h contents to net.c 00:00:56.123 35c6d81e6 util: add spdk_net_get_interface_name 00:00:56.123 f8598a71f bdev/uring: use util functions in bdev_uring_check_zoned_support 00:00:56.123 4903ec649 ublk: use spdk_read_sysfs_attribute_uint32 to get max ublks 00:00:56.143 [Pipeline] writeFile 00:00:56.160 [Pipeline] sh 00:00:56.442 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:56.455 [Pipeline] sh 00:00:56.739 + cat autorun-spdk.conf 00:00:56.739 SPDK_TEST_UNITTEST=1 00:00:56.739 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.739 SPDK_TEST_NVME=1 00:00:56.739 SPDK_TEST_BLOCKDEV=1 00:00:56.739 SPDK_RUN_ASAN=1 00:00:56.739 SPDK_RUN_UBSAN=1 00:00:56.739 SPDK_TEST_RAID5=1 00:00:56.739 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.747 RUN_NIGHTLY=0 00:00:56.749 [Pipeline] } 00:00:56.765 [Pipeline] // stage 00:00:56.781 [Pipeline] stage 00:00:56.783 [Pipeline] { (Run VM) 00:00:56.798 [Pipeline] sh 00:00:57.080 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:57.080 + echo 'Start stage prepare_nvme.sh' 00:00:57.080 Start stage prepare_nvme.sh 00:00:57.080 + [[ -n 6 ]] 00:00:57.080 + disk_prefix=ex6 00:00:57.080 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest ]] 00:00:57.080 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf ]] 00:00:57.080 + source /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf 00:00:57.080 ++ SPDK_TEST_UNITTEST=1 00:00:57.080 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.080 ++ SPDK_TEST_NVME=1 00:00:57.080 ++ SPDK_TEST_BLOCKDEV=1 00:00:57.080 ++ SPDK_RUN_ASAN=1 00:00:57.080 ++ SPDK_RUN_UBSAN=1 00:00:57.080 ++ SPDK_TEST_RAID5=1 00:00:57.080 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:57.080 ++ RUN_NIGHTLY=0 00:00:57.080 + cd 
/var/jenkins/workspace/ubuntu20-vg-autotest 00:00:57.080 + nvme_files=() 00:00:57.080 + declare -A nvme_files 00:00:57.080 + backend_dir=/var/lib/libvirt/images/backends 00:00:57.080 + nvme_files['nvme.img']=5G 00:00:57.080 + nvme_files['nvme-cmb.img']=5G 00:00:57.080 + nvme_files['nvme-multi0.img']=4G 00:00:57.080 + nvme_files['nvme-multi1.img']=4G 00:00:57.080 + nvme_files['nvme-multi2.img']=4G 00:00:57.080 + nvme_files['nvme-openstack.img']=8G 00:00:57.080 + nvme_files['nvme-zns.img']=5G 00:00:57.080 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:57.080 + (( SPDK_TEST_FTL == 1 )) 00:00:57.080 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:57.080 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:57.080 + for nvme in "${!nvme_files[@]}" 00:00:57.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:57.080 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.080 + for nvme in "${!nvme_files[@]}" 00:00:57.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:57.080 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.080 + for nvme in "${!nvme_files[@]}" 00:00:57.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:57.080 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:57.080 + for nvme in "${!nvme_files[@]}" 00:00:57.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:57.080 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.080 + for nvme in "${!nvme_files[@]}" 00:00:57.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:57.080 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.080 + for nvme in "${!nvme_files[@]}" 00:00:57.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:57.080 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.080 + for nvme in "${!nvme_files[@]}" 00:00:57.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:58.014 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.014 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:58.014 + echo 'End stage prepare_nvme.sh' 00:00:58.014 End stage prepare_nvme.sh 00:00:58.025 [Pipeline] sh 00:00:58.302 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:58.302 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -H -a -v -f ubuntu2004 00:00:58.302 00:00:58.302 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant 00:00:58.302 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk 00:00:58.302 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest 00:00:58.302 HELP=0 00:00:58.302 DRY_RUN=0 00:00:58.302 
NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img, 00:00:58.302 NVME_DISKS_TYPE=nvme, 00:00:58.302 NVME_AUTO_CREATE=0 00:00:58.302 NVME_DISKS_NAMESPACES=, 00:00:58.302 NVME_CMB=, 00:00:58.302 NVME_PMR=, 00:00:58.302 NVME_ZNS=, 00:00:58.302 NVME_MS=, 00:00:58.302 NVME_FDP=, 00:00:58.302 SPDK_VAGRANT_DISTRO=ubuntu2004 00:00:58.302 SPDK_VAGRANT_VMCPU=10 00:00:58.302 SPDK_VAGRANT_VMRAM=12288 00:00:58.302 SPDK_VAGRANT_PROVIDER=libvirt 00:00:58.302 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:58.302 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:58.302 SPDK_OPENSTACK_NETWORK=0 00:00:58.302 VAGRANT_PACKAGE_BOX=0 00:00:58.302 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:58.302 FORCE_DISTRO=true 00:00:58.302 VAGRANT_BOX_VERSION= 00:00:58.302 EXTRA_VAGRANTFILES= 00:00:58.302 NIC_MODEL=virtio 00:00:58.302 00:00:58.302 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt' 00:00:58.302 /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest 00:01:00.830 Bringing machine 'default' up with 'libvirt' provider... 00:01:01.093 ==> default: Creating image (snapshot of base box volume). 00:01:01.094 ==> default: Creating domain with the following settings... 00:01:01.094 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1721078074_2ed655ca7372e9aedda9 00:01:01.094 ==> default: -- Domain type: kvm 00:01:01.094 ==> default: -- Cpus: 10 00:01:01.094 ==> default: -- Feature: acpi 00:01:01.094 ==> default: -- Feature: apic 00:01:01.094 ==> default: -- Feature: pae 00:01:01.094 ==> default: -- Memory: 12288M 00:01:01.094 ==> default: -- Memory Backing: hugepages: 00:01:01.094 ==> default: -- Management MAC: 00:01:01.094 ==> default: -- Loader: 00:01:01.094 ==> default: -- Nvram: 00:01:01.094 ==> default: -- Base box: spdk/ubuntu2004 00:01:01.094 ==> default: -- Storage pool: default 00:01:01.094 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1721078074_2ed655ca7372e9aedda9.img (20G) 00:01:01.094 ==> default: -- Volume Cache: default 00:01:01.094 ==> default: -- Kernel: 00:01:01.094 ==> default: -- Initrd: 00:01:01.094 ==> default: -- Graphics Type: vnc 00:01:01.094 ==> default: -- Graphics Port: -1 00:01:01.094 ==> default: -- Graphics IP: 127.0.0.1 00:01:01.094 ==> default: -- Graphics Password: Not defined 00:01:01.094 ==> default: -- Video Type: cirrus 00:01:01.094 ==> default: -- Video VRAM: 9216 00:01:01.094 ==> default: -- Sound Type: 00:01:01.094 ==> default: -- Keymap: en-us 00:01:01.094 ==> default: -- TPM Path: 00:01:01.094 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:01.094 ==> default: -- Command line args: 00:01:01.094 ==> default: -> value=-device, 00:01:01.094 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:01.094 ==> default: -> value=-drive, 00:01:01.094 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:01.094 ==> default: -> value=-device, 00:01:01.094 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.351 ==> default: Creating shared folders metadata... 00:01:01.351 ==> default: Starting domain. 00:01:02.751 ==> default: Waiting for domain to get an IP address... 00:01:12.724 ==> default: Waiting for SSH to become available... 
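The '-- Command line args' block above is what attaches the raw backing file as an emulated NVMe namespace inside the guest. Stripped of the vagrant/libvirt wrapper, and omitting the boot disk, networking and machine options, it corresponds roughly to the following qemu-system-x86_64 fragment (a sketch only; the emulator path, image path, IDs and sizes are taken from the log above):

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -smp 10 -m 12288 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The nvme device provides the controller (serial 12340, PCI slot 0x10) and nvme-ns exposes the ex6-nvme.img backend as namespace 1 with 4096-byte blocks, which is how the guest later reports it as nvme0n1.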
00:01:13.655 ==> default: Configuring and enabling network interfaces... 00:01:16.181 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:20.367 ==> default: Mounting SSHFS shared folder... 00:01:20.934 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:01:20.934 ==> default: Checking Mount.. 00:01:23.502 ==> default: Checking Mount.. 00:01:23.502 ==> default: Folder Successfully Mounted! 00:01:23.502 ==> default: Running provisioner: file... 00:01:23.761 default: ~/.gitconfig => .gitconfig 00:01:24.021 00:01:24.021 SUCCESS! 00:01:24.021 00:01:24.021 cd to /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:01:24.021 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:24.021 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt" to destroy all trace of vm. 00:01:24.021 00:01:24.030 [Pipeline] } 00:01:24.048 [Pipeline] // stage 00:01:24.058 [Pipeline] dir 00:01:24.058 Running in /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt 00:01:24.060 [Pipeline] { 00:01:24.075 [Pipeline] catchError 00:01:24.077 [Pipeline] { 00:01:24.093 [Pipeline] sh 00:01:24.373 + vagrant ssh-config --host vagrant 00:01:24.373 + sed -ne /^Host/,$p 00:01:24.373 + tee ssh_conf 00:01:27.657 Host vagrant 00:01:27.657 HostName 192.168.121.81 00:01:27.657 User vagrant 00:01:27.657 Port 22 00:01:27.657 UserKnownHostsFile /dev/null 00:01:27.657 StrictHostKeyChecking no 00:01:27.657 PasswordAuthentication no 00:01:27.657 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:01:27.657 IdentitiesOnly yes 00:01:27.657 LogLevel FATAL 00:01:27.657 ForwardAgent yes 00:01:27.657 ForwardX11 yes 00:01:27.657 00:01:27.672 [Pipeline] withEnv 00:01:27.674 [Pipeline] { 00:01:27.688 [Pipeline] sh 00:01:28.049 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:28.049 source /etc/os-release 00:01:28.049 [[ -e /image.version ]] && img=$(< /image.version) 00:01:28.049 # Minimal, systemd-like check. 00:01:28.049 if [[ -e /.dockerenv ]]; then 00:01:28.049 # Clear garbage from the node's name: 00:01:28.049 # agt-er_autotest_547-896 -> autotest_547-896 00:01:28.049 # $HOSTNAME is the actual container id 00:01:28.049 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:28.049 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:28.049 # We can assume this is a mount from a host where container is running, 00:01:28.049 # so fetch its hostname to easily identify the target swarm worker. 
00:01:28.049 container="$(< /etc/hostname) ($agent)" 00:01:28.049 else 00:01:28.049 # Fallback 00:01:28.049 container=$agent 00:01:28.049 fi 00:01:28.049 fi 00:01:28.049 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:28.049 00:01:28.626 [Pipeline] } 00:01:28.644 [Pipeline] // withEnv 00:01:28.652 [Pipeline] setCustomBuildProperty 00:01:28.665 [Pipeline] stage 00:01:28.667 [Pipeline] { (Tests) 00:01:28.684 [Pipeline] sh 00:01:28.959 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:29.907 [Pipeline] sh 00:01:30.187 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:30.767 [Pipeline] timeout 00:01:30.767 Timeout set to expire in 1 hr 30 min 00:01:30.769 [Pipeline] { 00:01:30.784 [Pipeline] sh 00:01:31.063 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:32.000 HEAD is now at 0663932f5 util: add spdk_net_getaddr 00:01:32.013 [Pipeline] sh 00:01:32.293 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:32.866 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:32.882 [Pipeline] sh 00:01:33.163 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:34.118 [Pipeline] sh 00:01:34.447 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo 00:01:35.015 ++ readlink -f spdk_repo 00:01:35.015 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:35.015 + [[ -n /home/vagrant/spdk_repo ]] 00:01:35.015 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:35.015 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:35.015 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:35.016 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:35.016 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:35.016 + [[ ubuntu20-vg-autotest == pkgdep-* ]] 00:01:35.016 + cd /home/vagrant/spdk_repo 00:01:35.016 + source /etc/os-release 00:01:35.016 ++ NAME=Ubuntu 00:01:35.016 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:01:35.016 ++ ID=ubuntu 00:01:35.016 ++ ID_LIKE=debian 00:01:35.016 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:01:35.016 ++ VERSION_ID=20.04 00:01:35.016 ++ HOME_URL=https://www.ubuntu.com/ 00:01:35.016 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:35.016 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:35.016 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:35.016 ++ VERSION_CODENAME=focal 00:01:35.016 ++ UBUNTU_CODENAME=focal 00:01:35.016 + uname -a 00:01:35.016 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:35.016 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:35.016 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:35.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:01:35.582 Hugepages 00:01:35.582 node hugesize free / total 00:01:35.582 node0 1048576kB 0 / 0 00:01:35.582 node0 2048kB 0 / 0 00:01:35.582 00:01:35.582 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:35.582 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:35.582 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:35.582 + rm -f /tmp/spdk-ld-path 00:01:35.582 + source autorun-spdk.conf 00:01:35.582 ++ SPDK_TEST_UNITTEST=1 00:01:35.582 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.582 ++ SPDK_TEST_NVME=1 00:01:35.582 ++ SPDK_TEST_BLOCKDEV=1 00:01:35.582 ++ SPDK_RUN_ASAN=1 00:01:35.582 ++ SPDK_RUN_UBSAN=1 00:01:35.582 ++ SPDK_TEST_RAID5=1 00:01:35.582 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.582 ++ RUN_NIGHTLY=0 00:01:35.582 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:35.582 + [[ -n '' ]] 00:01:35.582 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:35.582 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:35.582 + for M in /var/spdk/build-*-manifest.txt 00:01:35.582 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:35.582 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:35.582 + for M in /var/spdk/build-*-manifest.txt 00:01:35.582 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:35.582 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:35.582 ++ uname 00:01:35.582 + [[ Linux == \L\i\n\u\x ]] 00:01:35.582 + sudo dmesg -T 00:01:35.582 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:35.582 + sudo dmesg --clear 00:01:35.582 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:35.582 + dmesg_pid=2443 00:01:35.582 + sudo dmesg -Tw 00:01:35.582 + [[ Ubuntu == FreeBSD ]] 00:01:35.582 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:35.582 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:35.582 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:35.582 + [[ -x /usr/src/fio-static/fio ]] 00:01:35.582 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:35.582 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:35.582 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:35.582 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:35.582 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:35.582 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:35.582 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:35.582 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:35.582 Test configuration: 00:01:35.582 SPDK_TEST_UNITTEST=1 00:01:35.582 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.582 SPDK_TEST_NVME=1 00:01:35.582 SPDK_TEST_BLOCKDEV=1 00:01:35.582 SPDK_RUN_ASAN=1 00:01:35.582 SPDK_RUN_UBSAN=1 00:01:35.582 SPDK_TEST_RAID5=1 00:01:35.582 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.841 RUN_NIGHTLY=0 21:15:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:35.841 21:15:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:35.841 21:15:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:35.841 21:15:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:35.841 21:15:08 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:35.841 21:15:08 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:35.841 21:15:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:35.841 21:15:08 -- paths/export.sh@5 -- $ export PATH 00:01:35.841 21:15:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:35.841 21:15:08 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:35.841 21:15:08 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:35.841 21:15:08 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721078108.XXXXXX 00:01:35.841 21:15:08 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721078108.45z7ak 00:01:35.841 21:15:08 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:35.841 21:15:08 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:35.841 21:15:08 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:35.841 21:15:08 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:35.841 21:15:08 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:35.841 21:15:08 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:35.841 21:15:08 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:35.841 21:15:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.841 21:15:08 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:35.841 21:15:08 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:35.841 21:15:08 -- pm/common@17 -- $ local monitor 00:01:35.841 21:15:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.841 21:15:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.841 21:15:08 -- pm/common@25 -- $ sleep 1 00:01:35.841 21:15:08 -- pm/common@21 -- $ date +%s 00:01:35.841 21:15:08 -- pm/common@21 -- $ date +%s 00:01:35.841 21:15:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721078108 00:01:35.841 21:15:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721078108 00:01:35.841 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721078108_collect-vmstat.pm.log 00:01:35.842 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721078108_collect-cpu-load.pm.log 00:01:36.777 21:15:09 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:36.777 21:15:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:36.777 21:15:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:36.777 21:15:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:36.777 21:15:09 -- spdk/autobuild.sh@16 -- $ date -u 00:01:36.777 Mon Jul 15 21:15:09 UTC 2024 00:01:36.777 21:15:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:36.777 v24.09-pre-217-g0663932f5 00:01:36.777 21:15:09 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:36.777 21:15:09 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:36.777 21:15:09 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:36.777 21:15:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:36.778 21:15:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.778 ************************************ 00:01:36.778 START TEST asan 00:01:36.778 ************************************ 00:01:36.778 using asan 00:01:36.778 21:15:09 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:36.778 00:01:36.778 real 0m0.000s 00:01:36.778 user 0m0.000s 00:01:36.778 sys 0m0.000s 00:01:36.778 21:15:09 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:36.778 21:15:09 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:36.778 ************************************ 00:01:36.778 END TEST asan 00:01:36.778 ************************************ 00:01:36.778 21:15:09 -- common/autotest_common.sh@1142 -- $ return 0 00:01:36.778 21:15:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:36.778 21:15:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:36.778 21:15:09 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:36.778 21:15:09 
-- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:36.778 21:15:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.035 ************************************ 00:01:37.035 START TEST ubsan 00:01:37.035 ************************************ 00:01:37.035 using ubsan 00:01:37.035 21:15:09 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:37.035 00:01:37.035 real 0m0.000s 00:01:37.035 user 0m0.000s 00:01:37.035 sys 0m0.000s 00:01:37.035 21:15:09 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:37.035 21:15:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:37.035 ************************************ 00:01:37.035 END TEST ubsan 00:01:37.035 ************************************ 00:01:37.035 21:15:09 -- common/autotest_common.sh@1142 -- $ return 0 00:01:37.035 21:15:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:37.035 21:15:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:37.035 21:15:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:37.035 21:15:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:37.035 21:15:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:37.035 21:15:09 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:37.035 21:15:09 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:37.035 21:15:09 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:01:37.035 21:15:09 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:37.035 21:15:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:37.035 21:15:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.035 ************************************ 00:01:37.036 START TEST unittest_build 00:01:37.036 ************************************ 00:01:37.036 21:15:09 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:01:37.036 21:15:09 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:37.036 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:37.036 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:37.610 Using 'verbs' RDMA provider 00:01:56.412 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:11.278 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:11.278 Creating mk/config.mk...done. 00:02:11.278 Creating mk/cc.flags.mk...done. 00:02:11.278 Type 'make' to build. 00:02:11.278 21:15:42 unittest_build -- common/autobuild_common.sh@412 -- $ make -j10 00:02:11.278 make[1]: Nothing to be done for 'all'. 
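Behind the autobuild wrapper, the build that starts here is just the configure invocation traced a few lines above followed by a parallel make. Reduced to a manual sketch (flags copied from the xtrace output, paths as laid out in this VM):

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator \
      --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
  make -j10

The --without-shared flag comes from the unittest_build wrapper; the remaining flags are the config_params recorded earlier in the log.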
00:02:12.284 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:14.864 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] (these two warnings repeat, with only the timestamp changing, for every ISA-L/ISA-L-crypto assembly unit built between 00:02:12.284 and 00:02:21.125)
00:02:21.125 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.125 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.125 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.384 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.384 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.384 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.952 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.952 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.952 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.952 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.211 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.785 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.785 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.785 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.786 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.044 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.044 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.044 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.302 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.302 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.302 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.560 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.560 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.560 ./include//reg_sizes.asm:358: warning: Unknown section attribute 
'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.818 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.818 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.818 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.076 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.076 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.076 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.076 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.076 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.336 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.336 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.336 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.336 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.336 The Meson build system 00:02:24.336 Version: 1.4.0 00:02:24.336 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:24.336 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:24.336 Build type: native build 00:02:24.336 Program cat found: YES (/usr/bin/cat) 00:02:24.336 Project name: DPDK 00:02:24.336 Project version: 24.03.0 00:02:24.336 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:02:24.336 C linker for the host machine: cc ld.bfd 2.34 00:02:24.336 Host machine cpu family: x86_64 00:02:24.336 Host machine cpu: x86_64 00:02:24.336 Message: ## Building in Developer Mode ## 00:02:24.336 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:24.336 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:24.336 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:24.336 Program python3 found: YES (/usr/bin/python3) 00:02:24.336 Program cat found: YES (/usr/bin/cat) 00:02:24.336 Compiler for C supports arguments -march=native: YES 00:02:24.336 Checking for size of "void *" : 8 00:02:24.336 Checking for size of "void *" : 8 (cached) 00:02:24.336 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:24.336 Library m found: YES 00:02:24.336 Library numa found: YES 00:02:24.336 Has header "numaif.h" : YES 00:02:24.336 Library fdt found: NO 00:02:24.336 Library execinfo found: NO 00:02:24.336 Has header "execinfo.h" : YES 00:02:24.336 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:02:24.336 Run-time dependency libarchive found: NO 
(tried pkgconfig) 00:02:24.336 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:24.336 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:24.336 Run-time dependency openssl found: YES 1.1.1f 00:02:24.336 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:24.336 Library pcap found: NO 00:02:24.336 Compiler for C supports arguments -Wcast-qual: YES 00:02:24.336 Compiler for C supports arguments -Wdeprecated: YES 00:02:24.336 Compiler for C supports arguments -Wformat: YES 00:02:24.336 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:24.336 Compiler for C supports arguments -Wformat-security: YES 00:02:24.336 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:24.336 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:24.336 Compiler for C supports arguments -Wnested-externs: YES 00:02:24.336 Compiler for C supports arguments -Wold-style-definition: YES 00:02:24.336 Compiler for C supports arguments -Wpointer-arith: YES 00:02:24.336 Compiler for C supports arguments -Wsign-compare: YES 00:02:24.336 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:24.336 Compiler for C supports arguments -Wundef: YES 00:02:24.336 Compiler for C supports arguments -Wwrite-strings: YES 00:02:24.336 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:24.336 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:24.336 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:24.336 Program objdump found: YES (/usr/bin/objdump) 00:02:24.336 Compiler for C supports arguments -mavx512f: YES 00:02:24.336 Checking if "AVX512 checking" compiles: YES 00:02:24.336 Fetching value of define "__SSE4_2__" : 1 00:02:24.336 Fetching value of define "__AES__" : 1 00:02:24.336 Fetching value of define "__AVX__" : 1 00:02:24.336 Fetching value of define "__AVX2__" : 1 00:02:24.336 Fetching value of define "__AVX512BW__" : 1 00:02:24.336 Fetching value of define "__AVX512CD__" : 1 00:02:24.336 Fetching value of define "__AVX512DQ__" : 1 00:02:24.336 Fetching value of define "__AVX512F__" : 1 00:02:24.336 Fetching value of define "__AVX512VL__" : 1 00:02:24.336 Fetching value of define "__PCLMUL__" : 1 00:02:24.336 Fetching value of define "__RDRND__" : 1 00:02:24.336 Fetching value of define "__RDSEED__" : 1 00:02:24.336 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:24.336 Fetching value of define "__znver1__" : (undefined) 00:02:24.336 Fetching value of define "__znver2__" : (undefined) 00:02:24.336 Fetching value of define "__znver3__" : (undefined) 00:02:24.336 Fetching value of define "__znver4__" : (undefined) 00:02:24.336 Library asan found: YES 00:02:24.336 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:24.336 Message: lib/log: Defining dependency "log" 00:02:24.336 Message: lib/kvargs: Defining dependency "kvargs" 00:02:24.336 Message: lib/telemetry: Defining dependency "telemetry" 00:02:24.336 Library rt found: YES 00:02:24.336 Checking for function "getentropy" : NO 00:02:24.336 Message: lib/eal: Defining dependency "eal" 00:02:24.336 Message: lib/ring: Defining dependency "ring" 00:02:24.336 Message: lib/rcu: Defining dependency "rcu" 00:02:24.336 Message: lib/mempool: Defining dependency "mempool" 00:02:24.336 Message: lib/mbuf: Defining dependency "mbuf" 00:02:24.336 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:24.336 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:24.336 Fetching value of define "__AVX512BW__" 
: 1 (cached) 00:02:24.336 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:24.336 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:24.336 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:24.336 Compiler for C supports arguments -mpclmul: YES 00:02:24.336 Compiler for C supports arguments -maes: YES 00:02:24.336 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:24.336 Compiler for C supports arguments -mavx512bw: YES 00:02:24.336 Compiler for C supports arguments -mavx512dq: YES 00:02:24.336 Compiler for C supports arguments -mavx512vl: YES 00:02:24.336 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:24.336 Compiler for C supports arguments -mavx2: YES 00:02:24.336 Compiler for C supports arguments -mavx: YES 00:02:24.336 Message: lib/net: Defining dependency "net" 00:02:24.336 Message: lib/meter: Defining dependency "meter" 00:02:24.336 Message: lib/ethdev: Defining dependency "ethdev" 00:02:24.336 Message: lib/pci: Defining dependency "pci" 00:02:24.336 Message: lib/cmdline: Defining dependency "cmdline" 00:02:24.336 Message: lib/hash: Defining dependency "hash" 00:02:24.336 Message: lib/timer: Defining dependency "timer" 00:02:24.336 Message: lib/compressdev: Defining dependency "compressdev" 00:02:24.336 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:24.336 Message: lib/dmadev: Defining dependency "dmadev" 00:02:24.336 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:24.336 Message: lib/power: Defining dependency "power" 00:02:24.336 Message: lib/reorder: Defining dependency "reorder" 00:02:24.336 Message: lib/security: Defining dependency "security" 00:02:24.336 Has header "linux/userfaultfd.h" : YES 00:02:24.336 Has header "linux/vduse.h" : NO 00:02:24.336 Message: lib/vhost: Defining dependency "vhost" 00:02:24.336 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:24.337 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:24.337 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:24.337 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:24.337 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:24.337 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:24.337 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:24.337 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:24.337 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:24.337 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:24.337 Program doxygen found: YES (/usr/bin/doxygen) 00:02:24.337 Configuring doxy-api-html.conf using configuration 00:02:24.337 Configuring doxy-api-man.conf using configuration 00:02:24.337 Program mandb found: YES (/usr/bin/mandb) 00:02:24.337 Program sphinx-build found: NO 00:02:24.337 Configuring rte_build_config.h using configuration 00:02:24.337 Message: 00:02:24.337 ================= 00:02:24.337 Applications Enabled 00:02:24.337 ================= 00:02:24.337 00:02:24.337 apps: 00:02:24.337 00:02:24.337 00:02:24.337 Message: 00:02:24.337 ================= 00:02:24.337 Libraries Enabled 00:02:24.337 ================= 00:02:24.337 00:02:24.337 libs: 00:02:24.337 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:24.337 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:24.337 cryptodev, dmadev, power, reorder, security, vhost, 00:02:24.337 00:02:24.337 
Message: 00:02:24.337 =============== 00:02:24.337 Drivers Enabled 00:02:24.337 =============== 00:02:24.337 00:02:24.337 common: 00:02:24.337 00:02:24.337 bus: 00:02:24.337 pci, vdev, 00:02:24.337 mempool: 00:02:24.337 ring, 00:02:24.337 dma: 00:02:24.337 00:02:24.337 net: 00:02:24.337 00:02:24.337 crypto: 00:02:24.337 00:02:24.337 compress: 00:02:24.337 00:02:24.337 vdpa: 00:02:24.337 00:02:24.337 00:02:24.337 Message: 00:02:24.337 ================= 00:02:24.337 Content Skipped 00:02:24.337 ================= 00:02:24.337 00:02:24.337 apps: 00:02:24.337 dumpcap: explicitly disabled via build config 00:02:24.337 graph: explicitly disabled via build config 00:02:24.337 pdump: explicitly disabled via build config 00:02:24.337 proc-info: explicitly disabled via build config 00:02:24.337 test-acl: explicitly disabled via build config 00:02:24.337 test-bbdev: explicitly disabled via build config 00:02:24.337 test-cmdline: explicitly disabled via build config 00:02:24.337 test-compress-perf: explicitly disabled via build config 00:02:24.337 test-crypto-perf: explicitly disabled via build config 00:02:24.337 test-dma-perf: explicitly disabled via build config 00:02:24.337 test-eventdev: explicitly disabled via build config 00:02:24.337 test-fib: explicitly disabled via build config 00:02:24.337 test-flow-perf: explicitly disabled via build config 00:02:24.337 test-gpudev: explicitly disabled via build config 00:02:24.337 test-mldev: explicitly disabled via build config 00:02:24.337 test-pipeline: explicitly disabled via build config 00:02:24.337 test-pmd: explicitly disabled via build config 00:02:24.337 test-regex: explicitly disabled via build config 00:02:24.337 test-sad: explicitly disabled via build config 00:02:24.337 test-security-perf: explicitly disabled via build config 00:02:24.337 00:02:24.337 libs: 00:02:24.337 argparse: explicitly disabled via build config 00:02:24.337 metrics: explicitly disabled via build config 00:02:24.337 acl: explicitly disabled via build config 00:02:24.337 bbdev: explicitly disabled via build config 00:02:24.337 bitratestats: explicitly disabled via build config 00:02:24.337 bpf: explicitly disabled via build config 00:02:24.337 cfgfile: explicitly disabled via build config 00:02:24.337 distributor: explicitly disabled via build config 00:02:24.337 efd: explicitly disabled via build config 00:02:24.337 eventdev: explicitly disabled via build config 00:02:24.337 dispatcher: explicitly disabled via build config 00:02:24.337 gpudev: explicitly disabled via build config 00:02:24.337 gro: explicitly disabled via build config 00:02:24.337 gso: explicitly disabled via build config 00:02:24.337 ip_frag: explicitly disabled via build config 00:02:24.337 jobstats: explicitly disabled via build config 00:02:24.337 latencystats: explicitly disabled via build config 00:02:24.337 lpm: explicitly disabled via build config 00:02:24.337 member: explicitly disabled via build config 00:02:24.337 pcapng: explicitly disabled via build config 00:02:24.337 rawdev: explicitly disabled via build config 00:02:24.337 regexdev: explicitly disabled via build config 00:02:24.337 mldev: explicitly disabled via build config 00:02:24.337 rib: explicitly disabled via build config 00:02:24.337 sched: explicitly disabled via build config 00:02:24.337 stack: explicitly disabled via build config 00:02:24.337 ipsec: explicitly disabled via build config 00:02:24.337 pdcp: explicitly disabled via build config 00:02:24.337 fib: explicitly disabled via build config 00:02:24.337 port: explicitly 
disabled via build config 00:02:24.337 pdump: explicitly disabled via build config 00:02:24.337 table: explicitly disabled via build config 00:02:24.337 pipeline: explicitly disabled via build config 00:02:24.337 graph: explicitly disabled via build config 00:02:24.337 node: explicitly disabled via build config 00:02:24.337 00:02:24.337 drivers: 00:02:24.337 common/cpt: not in enabled drivers build config 00:02:24.337 common/dpaax: not in enabled drivers build config 00:02:24.337 common/iavf: not in enabled drivers build config 00:02:24.337 common/idpf: not in enabled drivers build config 00:02:24.337 common/ionic: not in enabled drivers build config 00:02:24.337 common/mvep: not in enabled drivers build config 00:02:24.337 common/octeontx: not in enabled drivers build config 00:02:24.337 bus/auxiliary: not in enabled drivers build config 00:02:24.337 bus/cdx: not in enabled drivers build config 00:02:24.337 bus/dpaa: not in enabled drivers build config 00:02:24.337 bus/fslmc: not in enabled drivers build config 00:02:24.337 bus/ifpga: not in enabled drivers build config 00:02:24.337 bus/platform: not in enabled drivers build config 00:02:24.337 bus/uacce: not in enabled drivers build config 00:02:24.337 bus/vmbus: not in enabled drivers build config 00:02:24.337 common/cnxk: not in enabled drivers build config 00:02:24.337 common/mlx5: not in enabled drivers build config 00:02:24.337 common/nfp: not in enabled drivers build config 00:02:24.337 common/nitrox: not in enabled drivers build config 00:02:24.337 common/qat: not in enabled drivers build config 00:02:24.337 common/sfc_efx: not in enabled drivers build config 00:02:24.337 mempool/bucket: not in enabled drivers build config 00:02:24.337 mempool/cnxk: not in enabled drivers build config 00:02:24.337 mempool/dpaa: not in enabled drivers build config 00:02:24.337 mempool/dpaa2: not in enabled drivers build config 00:02:24.337 mempool/octeontx: not in enabled drivers build config 00:02:24.337 mempool/stack: not in enabled drivers build config 00:02:24.337 dma/cnxk: not in enabled drivers build config 00:02:24.337 dma/dpaa: not in enabled drivers build config 00:02:24.337 dma/dpaa2: not in enabled drivers build config 00:02:24.337 dma/hisilicon: not in enabled drivers build config 00:02:24.337 dma/idxd: not in enabled drivers build config 00:02:24.337 dma/ioat: not in enabled drivers build config 00:02:24.337 dma/skeleton: not in enabled drivers build config 00:02:24.337 net/af_packet: not in enabled drivers build config 00:02:24.337 net/af_xdp: not in enabled drivers build config 00:02:24.337 net/ark: not in enabled drivers build config 00:02:24.337 net/atlantic: not in enabled drivers build config 00:02:24.337 net/avp: not in enabled drivers build config 00:02:24.337 net/axgbe: not in enabled drivers build config 00:02:24.337 net/bnx2x: not in enabled drivers build config 00:02:24.337 net/bnxt: not in enabled drivers build config 00:02:24.337 net/bonding: not in enabled drivers build config 00:02:24.338 net/cnxk: not in enabled drivers build config 00:02:24.338 net/cpfl: not in enabled drivers build config 00:02:24.338 net/cxgbe: not in enabled drivers build config 00:02:24.338 net/dpaa: not in enabled drivers build config 00:02:24.338 net/dpaa2: not in enabled drivers build config 00:02:24.338 net/e1000: not in enabled drivers build config 00:02:24.338 net/ena: not in enabled drivers build config 00:02:24.338 net/enetc: not in enabled drivers build config 00:02:24.338 net/enetfec: not in enabled drivers build config 00:02:24.338 
net/enic: not in enabled drivers build config 00:02:24.338 net/failsafe: not in enabled drivers build config 00:02:24.338 net/fm10k: not in enabled drivers build config 00:02:24.338 net/gve: not in enabled drivers build config 00:02:24.338 net/hinic: not in enabled drivers build config 00:02:24.338 net/hns3: not in enabled drivers build config 00:02:24.338 net/i40e: not in enabled drivers build config 00:02:24.338 net/iavf: not in enabled drivers build config 00:02:24.338 net/ice: not in enabled drivers build config 00:02:24.338 net/idpf: not in enabled drivers build config 00:02:24.338 net/igc: not in enabled drivers build config 00:02:24.338 net/ionic: not in enabled drivers build config 00:02:24.338 net/ipn3ke: not in enabled drivers build config 00:02:24.338 net/ixgbe: not in enabled drivers build config 00:02:24.338 net/mana: not in enabled drivers build config 00:02:24.338 net/memif: not in enabled drivers build config 00:02:24.338 net/mlx4: not in enabled drivers build config 00:02:24.338 net/mlx5: not in enabled drivers build config 00:02:24.338 net/mvneta: not in enabled drivers build config 00:02:24.338 net/mvpp2: not in enabled drivers build config 00:02:24.338 net/netvsc: not in enabled drivers build config 00:02:24.338 net/nfb: not in enabled drivers build config 00:02:24.338 net/nfp: not in enabled drivers build config 00:02:24.338 net/ngbe: not in enabled drivers build config 00:02:24.338 net/null: not in enabled drivers build config 00:02:24.338 net/octeontx: not in enabled drivers build config 00:02:24.338 net/octeon_ep: not in enabled drivers build config 00:02:24.338 net/pcap: not in enabled drivers build config 00:02:24.338 net/pfe: not in enabled drivers build config 00:02:24.338 net/qede: not in enabled drivers build config 00:02:24.338 net/ring: not in enabled drivers build config 00:02:24.338 net/sfc: not in enabled drivers build config 00:02:24.338 net/softnic: not in enabled drivers build config 00:02:24.338 net/tap: not in enabled drivers build config 00:02:24.338 net/thunderx: not in enabled drivers build config 00:02:24.338 net/txgbe: not in enabled drivers build config 00:02:24.338 net/vdev_netvsc: not in enabled drivers build config 00:02:24.338 net/vhost: not in enabled drivers build config 00:02:24.338 net/virtio: not in enabled drivers build config 00:02:24.338 net/vmxnet3: not in enabled drivers build config 00:02:24.338 raw/*: missing internal dependency, "rawdev" 00:02:24.338 crypto/armv8: not in enabled drivers build config 00:02:24.338 crypto/bcmfs: not in enabled drivers build config 00:02:24.338 crypto/caam_jr: not in enabled drivers build config 00:02:24.338 crypto/ccp: not in enabled drivers build config 00:02:24.338 crypto/cnxk: not in enabled drivers build config 00:02:24.338 crypto/dpaa_sec: not in enabled drivers build config 00:02:24.338 crypto/dpaa2_sec: not in enabled drivers build config 00:02:24.338 crypto/ipsec_mb: not in enabled drivers build config 00:02:24.338 crypto/mlx5: not in enabled drivers build config 00:02:24.338 crypto/mvsam: not in enabled drivers build config 00:02:24.338 crypto/nitrox: not in enabled drivers build config 00:02:24.338 crypto/null: not in enabled drivers build config 00:02:24.338 crypto/octeontx: not in enabled drivers build config 00:02:24.338 crypto/openssl: not in enabled drivers build config 00:02:24.338 crypto/scheduler: not in enabled drivers build config 00:02:24.338 crypto/uadk: not in enabled drivers build config 00:02:24.338 crypto/virtio: not in enabled drivers build config 00:02:24.338 
compress/isal: not in enabled drivers build config 00:02:24.338 compress/mlx5: not in enabled drivers build config 00:02:24.338 compress/nitrox: not in enabled drivers build config 00:02:24.338 compress/octeontx: not in enabled drivers build config 00:02:24.338 compress/zlib: not in enabled drivers build config 00:02:24.338 regex/*: missing internal dependency, "regexdev" 00:02:24.338 ml/*: missing internal dependency, "mldev" 00:02:24.338 vdpa/ifc: not in enabled drivers build config 00:02:24.338 vdpa/mlx5: not in enabled drivers build config 00:02:24.338 vdpa/nfp: not in enabled drivers build config 00:02:24.338 vdpa/sfc: not in enabled drivers build config 00:02:24.338 event/*: missing internal dependency, "eventdev" 00:02:24.338 baseband/*: missing internal dependency, "bbdev" 00:02:24.338 gpu/*: missing internal dependency, "gpudev" 00:02:24.338 00:02:24.338 00:02:24.598 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.598 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.598 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.598 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.856 Build targets in project: 85 00:02:24.856 00:02:24.856 DPDK 24.03.0 00:02:24.856 00:02:24.856 User defined options 00:02:24.856 buildtype : debug 00:02:24.856 default_library : static 00:02:24.856 libdir : lib 00:02:24.856 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:24.856 b_sanitize : address 00:02:24.856 c_args : -fPIC -Werror 00:02:24.856 c_link_args : 00:02:24.856 cpu_instruction_set: native 00:02:24.856 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:02:24.856 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,argparse,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:02:24.856 enable_docs : false 00:02:24.856 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:24.856 enable_kmods : false 00:02:24.856 max_lcores : 128 00:02:24.856 tests : false 00:02:24.856 00:02:24.856 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.115 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.115 ./include//reg_sizes.asm:358: warning: Unknown section attribute 
'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.115 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.115 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:25.115 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.374 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:25.374 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:25.374 [3/267] Linking static target lib/librte_kvargs.a 00:02:25.374 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:25.374 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:25.374 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:25.374 [7/267] Linking static target lib/librte_log.a 00:02:25.374 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:25.632 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:25.632 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:25.632 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:25.632 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.632 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.632 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:25.632 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:25.632 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.632 [15/267] Linking static target lib/librte_telemetry.a 00:02:25.632 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:25.632 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:25.891 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:25.891 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:25.891 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.891 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.891 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:25.891 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:25.891 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:25.891 [24/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.891 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.149 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:26.149 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.149 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:26.149 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:26.149 ./include//reg_sizes.asm:358: warning: Unknown 
section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:26.149 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:26.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:26.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:26.149 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:26.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:26.149 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:26.408 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:26.408 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:26.408 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:26.408 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:26.408 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:26.408 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:26.408 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:26.408 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:26.408 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:26.408 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:26.408 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:26.408 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:26.666 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:26.666 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.666 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.666 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.666 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:26.666 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.666 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.666 [49/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.666 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.666 [51/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.666 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.666 [53/267] Linking target lib/librte_log.so.24.1 00:02:26.925 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.925 [55/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:26.925 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.925 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.925 
[58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.925 [59/267] Linking target lib/librte_kvargs.so.24.1 00:02:26.925 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:26.925 [61/267] Linking target lib/librte_telemetry.so.24.1 00:02:26.925 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.925 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:26.925 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.925 [65/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:26.925 [66/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:26.925 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.185 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.185 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.185 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.185 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.185 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.185 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.185 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.185 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.185 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.185 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:27.443 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.443 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.443 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.443 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:27.443 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.443 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:27.443 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:27.443 [85/267] Linking static target lib/librte_ring.a 00:02:27.443 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:27.443 [87/267] Linking static target lib/librte_eal.a 00:02:27.703 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:27.703 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:27.703 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:27.703 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:27.703 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:27.703 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:27.703 [94/267] Linking static target lib/librte_mempool.a 00:02:27.703 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:27.703 [96/267] Linking static target lib/librte_rcu.a 00:02:27.703 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.961 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:27.961 [99/267] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 
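The reg_sizes.asm warnings interleaved with the Meson and ninja output above are assembler noise rather than failures: as the message itself states, the unrecognized 'note' attribute on the `.note.gnu.property' section declaration is simply ignored and assembly continues. The double-slash include path suggests they come from an ISA-L-style asm build running in parallel with the DPDK configure, though the log does not name the component. A minimal way to inspect or silence them locally, assuming NASM is the assembler (the [-w+other] class tag in the messages points that way) and using a placeholder file name:
nasm -v                          # check which assembler/version is emitting the warning
nasm -w-other -f elf64 file.asm  # hypothetical single-file run with the 'other' warning class disabled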
00:02:27.961 [100/267] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:27.961 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:27.961 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:27.961 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:27.961 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.961 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:28.220 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.220 [107/267] Linking static target lib/librte_net.a 00:02:28.220 [108/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.220 [109/267] Linking static target lib/librte_meter.a 00:02:28.220 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:28.220 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:28.220 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.220 [113/267] Linking static target lib/librte_mbuf.a 00:02:28.220 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.220 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.479 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:28.479 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.479 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.737 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:28.737 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:28.737 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:28.737 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:28.998 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:28.998 [124/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.998 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:28.998 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:28.998 [127/267] Linking static target lib/librte_pci.a 00:02:28.998 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:28.998 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:28.998 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:28.998 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:28.998 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:28.998 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:28.998 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:29.259 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:29.259 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:29.259 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:29.259 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:29.259 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.259 [140/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:29.259 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:29.259 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:29.259 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:29.259 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:29.259 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:29.259 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:29.259 [147/267] Linking static target lib/librte_cmdline.a 00:02:29.516 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:29.516 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:29.516 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:29.516 [151/267] Linking static target lib/librte_timer.a 00:02:29.516 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:29.516 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:29.774 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:29.774 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:29.774 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:29.774 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:29.774 [158/267] Linking static target lib/librte_compressdev.a 00:02:30.032 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.032 [160/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.032 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:30.032 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:30.032 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:30.032 [164/267] Linking static target lib/librte_dmadev.a 00:02:30.032 [165/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.032 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:30.032 [167/267] Linking static target lib/librte_hash.a 00:02:30.290 [168/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.290 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.290 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.290 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:30.290 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.290 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:30.549 [174/267] Linking static target lib/librte_ethdev.a 00:02:30.549 [175/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.549 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.549 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.549 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.549 [179/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.549 
[180/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.807 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.807 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.807 [183/267] Linking static target lib/librte_power.a 00:02:30.807 [184/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.066 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.066 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:31.066 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.066 [188/267] Linking static target lib/librte_reorder.a 00:02:31.066 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.066 [190/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:31.066 [191/267] Linking static target lib/librte_cryptodev.a 00:02:31.066 [192/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.066 [193/267] Linking static target lib/librte_security.a 00:02:31.325 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.584 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.584 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.843 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:31.843 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:31.843 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:31.843 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:31.843 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.102 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.102 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.102 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:32.361 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.361 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:32.361 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.361 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.361 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.361 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.361 [211/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.361 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.361 [213/267] Linking static target drivers/librte_bus_vdev.a 00:02:32.620 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.620 [215/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.620 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.620 [217/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.620 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:32.620 [219/267] 
Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.620 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:32.620 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.904 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:32.904 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.904 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.904 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:33.162 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.538 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.107 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.366 [229/267] Linking target lib/librte_eal.so.24.1 00:02:35.366 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:35.366 [231/267] Linking target lib/librte_timer.so.24.1 00:02:35.366 [232/267] Linking target lib/librte_meter.so.24.1 00:02:35.366 [233/267] Linking target lib/librte_pci.so.24.1 00:02:35.624 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:35.624 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:35.624 [236/267] Linking target lib/librte_ring.so.24.1 00:02:35.624 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:35.624 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:35.624 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:35.624 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:35.624 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:35.624 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:35.624 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:35.624 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:35.624 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.881 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.881 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.881 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:35.881 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.881 [250/267] Linking target lib/librte_compressdev.so.24.1 00:02:35.881 [251/267] Linking target lib/librte_net.so.24.1 00:02:35.881 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:35.881 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:36.139 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:36.139 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:36.139 [256/267] Linking target lib/librte_hash.so.24.1 00:02:36.139 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:36.139 [258/267] Linking target lib/librte_security.so.24.1 00:02:36.139 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:38.679 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:38.679 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:38.679 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:38.679 [263/267] Linking target lib/librte_power.so.24.1 00:02:39.245 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:39.245 [265/267] Linking static target lib/librte_vhost.a 00:02:41.777 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.777 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:41.777 INFO: autodetecting backend as ninja 00:02:41.777 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:42.712 CC lib/log/log.o 00:02:42.712 CC lib/log/log_deprecated.o 00:02:42.712 CC lib/log/log_flags.o 00:02:42.713 CC lib/ut/ut.o 00:02:42.713 CC lib/ut_mock/mock.o 00:02:42.713 LIB libspdk_log.a 00:02:42.713 LIB libspdk_ut_mock.a 00:02:42.971 LIB libspdk_ut.a 00:02:42.971 CC lib/ioat/ioat.o 00:02:42.971 CXX lib/trace_parser/trace.o 00:02:42.971 CC lib/dma/dma.o 00:02:42.971 CC lib/util/base64.o 00:02:42.971 CC lib/util/bit_array.o 00:02:42.971 CC lib/util/crc32.o 00:02:42.971 CC lib/util/cpuset.o 00:02:42.971 CC lib/util/crc16.o 00:02:42.971 CC lib/util/crc32c.o 00:02:43.230 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.230 CC lib/vfio_user/host/vfio_user.o 00:02:43.230 CC lib/util/crc32_ieee.o 00:02:43.230 CC lib/util/crc64.o 00:02:43.230 LIB libspdk_dma.a 00:02:43.230 CC lib/util/dif.o 00:02:43.230 CC lib/util/fd.o 00:02:43.230 CC lib/util/fd_group.o 00:02:43.230 CC lib/util/file.o 00:02:43.230 CC lib/util/hexlify.o 00:02:43.230 CC lib/util/iov.o 00:02:43.488 CC lib/util/math.o 00:02:43.488 LIB libspdk_ioat.a 00:02:43.488 LIB libspdk_vfio_user.a 00:02:43.488 CC lib/util/net.o 00:02:43.488 CC lib/util/pipe.o 00:02:43.488 CC lib/util/strerror_tls.o 00:02:43.488 CC lib/util/string.o 00:02:43.488 CC lib/util/uuid.o 00:02:43.488 CC lib/util/xor.o 00:02:43.488 CC lib/util/zipf.o 00:02:44.054 LIB libspdk_util.a 00:02:44.054 LIB libspdk_trace_parser.a 00:02:44.311 CC lib/vmd/vmd.o 00:02:44.311 CC lib/vmd/led.o 00:02:44.311 CC lib/rdma_utils/rdma_utils.o 00:02:44.311 CC lib/env_dpdk/memory.o 00:02:44.311 CC lib/env_dpdk/pci.o 00:02:44.311 CC lib/env_dpdk/env.o 00:02:44.311 CC lib/conf/conf.o 00:02:44.311 CC lib/rdma_provider/common.o 00:02:44.311 CC lib/idxd/idxd.o 00:02:44.311 CC lib/json/json_parse.o 00:02:44.311 CC lib/json/json_util.o 00:02:44.569 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:44.569 LIB libspdk_conf.a 00:02:44.569 CC lib/env_dpdk/init.o 00:02:44.569 CC lib/idxd/idxd_user.o 00:02:44.569 CC lib/env_dpdk/threads.o 00:02:44.569 LIB libspdk_rdma_utils.a 00:02:44.569 CC lib/json/json_write.o 00:02:44.569 CC lib/env_dpdk/pci_ioat.o 00:02:44.826 LIB libspdk_rdma_provider.a 00:02:44.826 CC lib/env_dpdk/pci_virtio.o 00:02:44.826 CC lib/env_dpdk/pci_vmd.o 00:02:44.826 CC lib/env_dpdk/pci_idxd.o 00:02:44.826 CC lib/env_dpdk/pci_event.o 00:02:44.826 CC lib/env_dpdk/sigbus_handler.o 00:02:44.826 LIB libspdk_idxd.a 00:02:44.826 CC lib/env_dpdk/pci_dpdk.o 00:02:44.826 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.826 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:45.084 LIB libspdk_json.a 00:02:45.084 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.084 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.342 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.342 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.342 LIB libspdk_vmd.a 00:02:45.599 LIB libspdk_jsonrpc.a 00:02:45.856 CC 
lib/rpc/rpc.o 00:02:45.856 LIB libspdk_env_dpdk.a 00:02:46.114 LIB libspdk_rpc.a 00:02:46.373 CC lib/keyring/keyring.o 00:02:46.373 CC lib/keyring/keyring_rpc.o 00:02:46.373 CC lib/trace/trace.o 00:02:46.373 CC lib/trace/trace_flags.o 00:02:46.373 CC lib/trace/trace_rpc.o 00:02:46.373 CC lib/notify/notify.o 00:02:46.373 CC lib/notify/notify_rpc.o 00:02:46.373 LIB libspdk_notify.a 00:02:46.631 LIB libspdk_keyring.a 00:02:46.631 LIB libspdk_trace.a 00:02:46.890 CC lib/thread/iobuf.o 00:02:46.890 CC lib/thread/thread.o 00:02:46.890 CC lib/sock/sock.o 00:02:46.890 CC lib/sock/sock_rpc.o 00:02:47.500 LIB libspdk_sock.a 00:02:47.500 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:47.500 CC lib/nvme/nvme_fabric.o 00:02:47.500 CC lib/nvme/nvme_ctrlr.o 00:02:47.500 CC lib/nvme/nvme_qpair.o 00:02:47.500 CC lib/nvme/nvme_ns_cmd.o 00:02:47.500 CC lib/nvme/nvme_ns.o 00:02:47.500 CC lib/nvme/nvme_pcie_common.o 00:02:47.500 CC lib/nvme/nvme_pcie.o 00:02:47.500 CC lib/nvme/nvme.o 00:02:48.067 CC lib/nvme/nvme_quirks.o 00:02:48.067 CC lib/nvme/nvme_transport.o 00:02:48.067 CC lib/nvme/nvme_discovery.o 00:02:48.067 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:48.325 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:48.325 CC lib/nvme/nvme_tcp.o 00:02:48.325 CC lib/nvme/nvme_opal.o 00:02:48.583 CC lib/nvme/nvme_io_msg.o 00:02:48.583 CC lib/nvme/nvme_poll_group.o 00:02:48.583 CC lib/nvme/nvme_zns.o 00:02:48.583 CC lib/nvme/nvme_stubs.o 00:02:48.583 LIB libspdk_thread.a 00:02:48.842 CC lib/nvme/nvme_auth.o 00:02:48.842 CC lib/nvme/nvme_cuse.o 00:02:48.842 CC lib/nvme/nvme_rdma.o 00:02:49.100 CC lib/accel/accel_rpc.o 00:02:49.100 CC lib/accel/accel.o 00:02:49.100 CC lib/blob/blobstore.o 00:02:49.100 CC lib/accel/accel_sw.o 00:02:49.100 CC lib/init/json_config.o 00:02:49.100 CC lib/virtio/virtio.o 00:02:49.358 CC lib/virtio/virtio_vhost_user.o 00:02:49.358 CC lib/init/subsystem.o 00:02:49.358 CC lib/init/subsystem_rpc.o 00:02:49.617 CC lib/init/rpc.o 00:02:49.617 CC lib/virtio/virtio_vfio_user.o 00:02:49.617 CC lib/blob/request.o 00:02:49.617 CC lib/blob/zeroes.o 00:02:49.617 CC lib/virtio/virtio_pci.o 00:02:49.882 LIB libspdk_init.a 00:02:49.882 CC lib/blob/blob_bs_dev.o 00:02:49.882 CC lib/event/app.o 00:02:49.882 CC lib/event/reactor.o 00:02:49.882 CC lib/event/log_rpc.o 00:02:50.181 LIB libspdk_virtio.a 00:02:50.181 CC lib/event/app_rpc.o 00:02:50.181 CC lib/event/scheduler_static.o 00:02:50.438 LIB libspdk_accel.a 00:02:50.438 LIB libspdk_nvme.a 00:02:50.438 LIB libspdk_event.a 00:02:50.695 CC lib/bdev/bdev_zone.o 00:02:50.695 CC lib/bdev/bdev.o 00:02:50.695 CC lib/bdev/bdev_rpc.o 00:02:50.695 CC lib/bdev/part.o 00:02:50.695 CC lib/bdev/scsi_nvme.o 00:02:53.224 LIB libspdk_blob.a 00:02:53.483 CC lib/blobfs/tree.o 00:02:53.483 CC lib/blobfs/blobfs.o 00:02:53.483 CC lib/lvol/lvol.o 00:02:54.050 LIB libspdk_bdev.a 00:02:54.050 CC lib/scsi/dev.o 00:02:54.050 CC lib/scsi/lun.o 00:02:54.050 CC lib/scsi/port.o 00:02:54.050 CC lib/scsi/scsi.o 00:02:54.050 CC lib/scsi/scsi_bdev.o 00:02:54.050 CC lib/nvmf/ctrlr.o 00:02:54.050 CC lib/nbd/nbd.o 00:02:54.050 CC lib/ftl/ftl_core.o 00:02:54.309 CC lib/scsi/scsi_pr.o 00:02:54.309 LIB libspdk_blobfs.a 00:02:54.309 CC lib/scsi/scsi_rpc.o 00:02:54.309 CC lib/nbd/nbd_rpc.o 00:02:54.309 CC lib/scsi/task.o 00:02:54.568 CC lib/nvmf/ctrlr_discovery.o 00:02:54.568 CC lib/nvmf/ctrlr_bdev.o 00:02:54.568 CC lib/nvmf/subsystem.o 00:02:54.568 LIB libspdk_lvol.a 00:02:54.568 CC lib/nvmf/nvmf.o 00:02:54.568 CC lib/ftl/ftl_init.o 00:02:54.568 CC lib/ftl/ftl_layout.o 00:02:54.568 CC lib/ftl/ftl_debug.o 00:02:54.829 LIB 
libspdk_nbd.a 00:02:54.829 CC lib/ftl/ftl_io.o 00:02:54.829 CC lib/ftl/ftl_sb.o 00:02:54.829 LIB libspdk_scsi.a 00:02:55.088 CC lib/ftl/ftl_l2p.o 00:02:55.088 CC lib/ftl/ftl_l2p_flat.o 00:02:55.088 CC lib/ftl/ftl_nv_cache.o 00:02:55.088 CC lib/ftl/ftl_band.o 00:02:55.347 CC lib/ftl/ftl_band_ops.o 00:02:55.347 CC lib/ftl/ftl_writer.o 00:02:55.347 CC lib/vhost/vhost.o 00:02:55.347 CC lib/iscsi/conn.o 00:02:55.347 CC lib/iscsi/init_grp.o 00:02:55.605 CC lib/ftl/ftl_rq.o 00:02:55.605 CC lib/ftl/ftl_reloc.o 00:02:55.605 CC lib/iscsi/iscsi.o 00:02:55.605 CC lib/ftl/ftl_l2p_cache.o 00:02:55.605 CC lib/ftl/ftl_p2l.o 00:02:55.862 CC lib/vhost/vhost_rpc.o 00:02:55.862 CC lib/ftl/mngt/ftl_mngt.o 00:02:56.121 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:56.121 CC lib/iscsi/md5.o 00:02:56.121 CC lib/nvmf/nvmf_rpc.o 00:02:56.121 CC lib/nvmf/transport.o 00:02:56.121 CC lib/vhost/vhost_scsi.o 00:02:56.121 CC lib/vhost/vhost_blk.o 00:02:56.381 CC lib/vhost/rte_vhost_user.o 00:02:56.381 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:56.381 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:56.381 CC lib/nvmf/tcp.o 00:02:56.640 CC lib/nvmf/stubs.o 00:02:56.640 CC lib/nvmf/mdns_server.o 00:02:56.640 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:56.900 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:56.900 CC lib/nvmf/rdma.o 00:02:57.159 CC lib/iscsi/param.o 00:02:57.159 CC lib/iscsi/portal_grp.o 00:02:57.159 CC lib/iscsi/tgt_node.o 00:02:57.159 CC lib/iscsi/iscsi_subsystem.o 00:02:57.159 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:57.159 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:57.419 LIB libspdk_vhost.a 00:02:57.419 CC lib/iscsi/iscsi_rpc.o 00:02:57.419 CC lib/iscsi/task.o 00:02:57.419 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:57.419 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:57.419 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:57.419 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:57.678 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:57.678 CC lib/ftl/utils/ftl_conf.o 00:02:57.678 CC lib/ftl/utils/ftl_md.o 00:02:57.678 CC lib/ftl/utils/ftl_mempool.o 00:02:57.937 CC lib/ftl/utils/ftl_bitmap.o 00:02:57.937 CC lib/ftl/utils/ftl_property.o 00:02:57.937 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:57.937 LIB libspdk_iscsi.a 00:02:57.937 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:57.937 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:57.937 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:57.937 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:58.195 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:58.195 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:58.195 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:58.195 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:58.195 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:58.195 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:58.195 CC lib/ftl/base/ftl_base_dev.o 00:02:58.195 CC lib/ftl/base/ftl_base_bdev.o 00:02:58.455 CC lib/ftl/ftl_trace.o 00:02:58.714 LIB libspdk_ftl.a 00:02:59.649 LIB libspdk_nvmf.a 00:02:59.908 CC module/env_dpdk/env_dpdk_rpc.o 00:02:59.908 CC module/accel/dsa/accel_dsa.o 00:02:59.908 CC module/accel/iaa/accel_iaa.o 00:02:59.908 CC module/keyring/file/keyring.o 00:02:59.908 CC module/accel/ioat/accel_ioat.o 00:03:00.167 CC module/accel/error/accel_error.o 00:03:00.167 CC module/sock/posix/posix.o 00:03:00.167 CC module/blob/bdev/blob_bdev.o 00:03:00.167 CC module/keyring/linux/keyring.o 00:03:00.167 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:00.167 LIB libspdk_env_dpdk_rpc.a 00:03:00.167 CC module/keyring/linux/keyring_rpc.o 00:03:00.167 CC module/keyring/file/keyring_rpc.o 00:03:00.167 CC module/accel/error/accel_error_rpc.o 00:03:00.167 CC 
module/accel/ioat/accel_ioat_rpc.o 00:03:00.167 CC module/accel/iaa/accel_iaa_rpc.o 00:03:00.167 LIB libspdk_scheduler_dynamic.a 00:03:00.167 CC module/accel/dsa/accel_dsa_rpc.o 00:03:00.167 LIB libspdk_keyring_linux.a 00:03:00.424 LIB libspdk_blob_bdev.a 00:03:00.424 LIB libspdk_keyring_file.a 00:03:00.424 LIB libspdk_accel_error.a 00:03:00.424 LIB libspdk_accel_iaa.a 00:03:00.424 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:00.424 LIB libspdk_accel_ioat.a 00:03:00.424 LIB libspdk_accel_dsa.a 00:03:00.424 CC module/scheduler/gscheduler/gscheduler.o 00:03:00.424 CC module/blobfs/bdev/blobfs_bdev.o 00:03:00.424 CC module/bdev/gpt/gpt.o 00:03:00.424 CC module/bdev/error/vbdev_error.o 00:03:00.424 CC module/bdev/delay/vbdev_delay.o 00:03:00.424 CC module/bdev/lvol/vbdev_lvol.o 00:03:00.424 LIB libspdk_scheduler_dpdk_governor.a 00:03:00.424 CC module/bdev/malloc/bdev_malloc.o 00:03:00.424 LIB libspdk_scheduler_gscheduler.a 00:03:00.424 CC module/bdev/null/bdev_null.o 00:03:00.424 CC module/bdev/null/bdev_null_rpc.o 00:03:00.682 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:00.682 CC module/bdev/gpt/vbdev_gpt.o 00:03:00.682 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:00.682 CC module/bdev/error/vbdev_error_rpc.o 00:03:00.682 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:00.682 LIB libspdk_blobfs_bdev.a 00:03:00.682 LIB libspdk_bdev_null.a 00:03:00.682 LIB libspdk_sock_posix.a 00:03:00.682 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:00.940 LIB libspdk_bdev_error.a 00:03:00.940 LIB libspdk_bdev_malloc.a 00:03:00.940 LIB libspdk_bdev_gpt.a 00:03:00.940 CC module/bdev/nvme/bdev_nvme.o 00:03:00.940 CC module/bdev/passthru/vbdev_passthru.o 00:03:00.940 CC module/bdev/raid/bdev_raid.o 00:03:00.940 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.940 LIB libspdk_bdev_delay.a 00:03:00.940 CC module/bdev/split/vbdev_split.o 00:03:00.940 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.940 LIB libspdk_bdev_lvol.a 00:03:00.940 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:00.940 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.940 CC module/bdev/aio/bdev_aio.o 00:03:01.197 CC module/bdev/raid/raid0.o 00:03:01.197 CC module/bdev/raid/raid1.o 00:03:01.197 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:01.197 LIB libspdk_bdev_split.a 00:03:01.197 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:01.455 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:01.455 LIB libspdk_bdev_passthru.a 00:03:01.455 CC module/bdev/iscsi/bdev_iscsi.o 00:03:01.455 CC module/bdev/aio/bdev_aio_rpc.o 00:03:01.455 CC module/bdev/ftl/bdev_ftl.o 00:03:01.455 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:01.455 CC module/bdev/raid/concat.o 00:03:01.455 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:01.455 LIB libspdk_bdev_zone_block.a 00:03:01.455 LIB libspdk_bdev_aio.a 00:03:01.455 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:01.713 CC module/bdev/nvme/nvme_rpc.o 00:03:01.713 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:01.713 CC module/bdev/nvme/bdev_mdns_client.o 00:03:01.713 LIB libspdk_bdev_ftl.a 00:03:01.713 CC module/bdev/nvme/vbdev_opal.o 00:03:01.713 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:01.971 CC module/bdev/raid/raid5f.o 00:03:01.971 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:01.971 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:01.971 LIB libspdk_bdev_iscsi.a 00:03:01.971 LIB libspdk_bdev_virtio.a 00:03:02.537 LIB libspdk_bdev_raid.a 00:03:03.469 LIB libspdk_bdev_nvme.a 00:03:03.726 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:03.726 CC module/event/subsystems/keyring/keyring.o 00:03:03.726 CC 
module/event/subsystems/vmd/vmd.o 00:03:03.726 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:03.726 CC module/event/subsystems/iobuf/iobuf.o 00:03:03.726 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:03.726 CC module/event/subsystems/sock/sock.o 00:03:03.726 CC module/event/subsystems/scheduler/scheduler.o 00:03:03.726 LIB libspdk_event_keyring.a 00:03:03.726 LIB libspdk_event_sock.a 00:03:03.726 LIB libspdk_event_vhost_blk.a 00:03:04.039 LIB libspdk_event_scheduler.a 00:03:04.039 LIB libspdk_event_vmd.a 00:03:04.039 LIB libspdk_event_iobuf.a 00:03:04.039 CC module/event/subsystems/accel/accel.o 00:03:04.298 LIB libspdk_event_accel.a 00:03:04.556 CC module/event/subsystems/bdev/bdev.o 00:03:04.556 LIB libspdk_event_bdev.a 00:03:04.814 CC module/event/subsystems/nbd/nbd.o 00:03:04.814 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:04.814 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:04.814 CC module/event/subsystems/scsi/scsi.o 00:03:05.072 LIB libspdk_event_nbd.a 00:03:05.072 LIB libspdk_event_scsi.a 00:03:05.072 LIB libspdk_event_nvmf.a 00:03:05.330 CC module/event/subsystems/iscsi/iscsi.o 00:03:05.330 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:05.587 LIB libspdk_event_vhost_scsi.a 00:03:05.587 LIB libspdk_event_iscsi.a 00:03:05.587 CC app/spdk_lspci/spdk_lspci.o 00:03:05.587 CXX app/trace/trace.o 00:03:05.587 CC app/trace_record/trace_record.o 00:03:05.844 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:05.844 CC app/nvmf_tgt/nvmf_main.o 00:03:05.844 CC app/iscsi_tgt/iscsi_tgt.o 00:03:05.844 CC app/spdk_tgt/spdk_tgt.o 00:03:05.844 CC examples/util/zipf/zipf.o 00:03:05.844 CC examples/ioat/perf/perf.o 00:03:05.844 LINK spdk_lspci 00:03:05.844 CC test/thread/poller_perf/poller_perf.o 00:03:05.844 LINK interrupt_tgt 00:03:05.844 LINK nvmf_tgt 00:03:05.844 LINK zipf 00:03:06.101 LINK spdk_trace_record 00:03:06.101 LINK spdk_tgt 00:03:06.101 LINK iscsi_tgt 00:03:06.101 LINK poller_perf 00:03:06.101 LINK ioat_perf 00:03:06.101 LINK spdk_trace 00:03:06.667 CC test/thread/lock/spdk_lock.o 00:03:06.667 CC app/spdk_nvme_perf/perf.o 00:03:06.667 CC examples/ioat/verify/verify.o 00:03:06.667 CC test/dma/test_dma/test_dma.o 00:03:06.933 CC app/spdk_nvme_identify/identify.o 00:03:06.933 LINK verify 00:03:07.201 CC app/spdk_nvme_discover/discovery_aer.o 00:03:07.201 LINK test_dma 00:03:07.458 LINK spdk_nvme_discover 00:03:07.716 LINK spdk_nvme_perf 00:03:07.716 CC app/spdk_top/spdk_top.o 00:03:07.716 LINK spdk_nvme_identify 00:03:07.975 CC examples/thread/thread/thread_ex.o 00:03:08.232 LINK thread 00:03:08.489 CC examples/sock/hello_world/hello_sock.o 00:03:08.489 LINK spdk_lock 00:03:08.746 LINK spdk_top 00:03:08.746 LINK hello_sock 00:03:09.004 CC examples/vmd/lsvmd/lsvmd.o 00:03:09.004 CC examples/vmd/led/led.o 00:03:09.004 LINK lsvmd 00:03:09.262 CC app/vhost/vhost.o 00:03:09.262 LINK led 00:03:09.262 CC app/fio/nvme/fio_plugin.o 00:03:09.262 CC app/spdk_dd/spdk_dd.o 00:03:09.262 LINK vhost 00:03:09.540 CC app/fio/bdev/fio_plugin.o 00:03:09.540 CC test/app/bdev_svc/bdev_svc.o 00:03:09.540 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:09.798 LINK bdev_svc 00:03:09.798 LINK spdk_dd 00:03:10.054 LINK spdk_nvme 00:03:10.054 LINK spdk_bdev 00:03:10.054 LINK nvme_fuzz 00:03:10.311 CC examples/idxd/perf/perf.o 00:03:10.311 TEST_HEADER include/spdk/blobfs.h 00:03:10.311 TEST_HEADER include/spdk/notify.h 00:03:10.311 TEST_HEADER include/spdk/pipe.h 00:03:10.311 TEST_HEADER include/spdk/accel.h 00:03:10.311 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:10.311 TEST_HEADER 
include/spdk/file.h 00:03:10.311 TEST_HEADER include/spdk/version.h 00:03:10.311 TEST_HEADER include/spdk/trace_parser.h 00:03:10.311 TEST_HEADER include/spdk/opal_spec.h 00:03:10.311 TEST_HEADER include/spdk/uuid.h 00:03:10.311 TEST_HEADER include/spdk/likely.h 00:03:10.311 TEST_HEADER include/spdk/dif.h 00:03:10.311 TEST_HEADER include/spdk/net.h 00:03:10.311 TEST_HEADER include/spdk/keyring_module.h 00:03:10.311 TEST_HEADER include/spdk/memory.h 00:03:10.311 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.311 TEST_HEADER include/spdk/dma.h 00:03:10.311 TEST_HEADER include/spdk/nbd.h 00:03:10.311 TEST_HEADER include/spdk/conf.h 00:03:10.311 TEST_HEADER include/spdk/env_dpdk.h 00:03:10.311 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.311 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.311 TEST_HEADER include/spdk/mmio.h 00:03:10.311 TEST_HEADER include/spdk/json.h 00:03:10.311 TEST_HEADER include/spdk/opal.h 00:03:10.311 TEST_HEADER include/spdk/bdev.h 00:03:10.311 TEST_HEADER include/spdk/keyring.h 00:03:10.311 TEST_HEADER include/spdk/base64.h 00:03:10.311 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.311 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.311 TEST_HEADER include/spdk/fd.h 00:03:10.311 TEST_HEADER include/spdk/barrier.h 00:03:10.311 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.311 TEST_HEADER include/spdk/zipf.h 00:03:10.311 TEST_HEADER include/spdk/nvmf.h 00:03:10.311 TEST_HEADER include/spdk/queue.h 00:03:10.311 TEST_HEADER include/spdk/xor.h 00:03:10.311 TEST_HEADER include/spdk/cpuset.h 00:03:10.311 TEST_HEADER include/spdk/thread.h 00:03:10.311 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.311 TEST_HEADER include/spdk/fd_group.h 00:03:10.311 TEST_HEADER include/spdk/tree.h 00:03:10.311 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.311 TEST_HEADER include/spdk/crc64.h 00:03:10.311 TEST_HEADER include/spdk/assert.h 00:03:10.311 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.311 TEST_HEADER include/spdk/endian.h 00:03:10.311 TEST_HEADER include/spdk/pci_ids.h 00:03:10.311 TEST_HEADER include/spdk/log.h 00:03:10.311 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.311 TEST_HEADER include/spdk/ftl.h 00:03:10.311 TEST_HEADER include/spdk/config.h 00:03:10.311 TEST_HEADER include/spdk/vhost.h 00:03:10.569 TEST_HEADER include/spdk/bdev_module.h 00:03:10.569 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.569 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.569 TEST_HEADER include/spdk/crc16.h 00:03:10.569 TEST_HEADER include/spdk/nvme.h 00:03:10.569 TEST_HEADER include/spdk/stdinc.h 00:03:10.569 TEST_HEADER include/spdk/scsi.h 00:03:10.569 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.569 TEST_HEADER include/spdk/idxd.h 00:03:10.569 TEST_HEADER include/spdk/hexlify.h 00:03:10.569 TEST_HEADER include/spdk/reduce.h 00:03:10.569 TEST_HEADER include/spdk/crc32.h 00:03:10.569 TEST_HEADER include/spdk/init.h 00:03:10.569 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.569 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.569 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.569 TEST_HEADER include/spdk/util.h 00:03:10.569 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.569 TEST_HEADER include/spdk/env.h 00:03:10.569 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.569 TEST_HEADER include/spdk/lvol.h 00:03:10.569 TEST_HEADER include/spdk/histogram_data.h 00:03:10.569 TEST_HEADER include/spdk/event.h 00:03:10.569 TEST_HEADER include/spdk/trace.h 00:03:10.569 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.569 TEST_HEADER include/spdk/string.h 00:03:10.569 TEST_HEADER include/spdk/ublk.h 
00:03:10.569 TEST_HEADER include/spdk/bit_array.h 00:03:10.569 TEST_HEADER include/spdk/scheduler.h 00:03:10.569 TEST_HEADER include/spdk/blob.h 00:03:10.569 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.569 TEST_HEADER include/spdk/sock.h 00:03:10.569 TEST_HEADER include/spdk/vmd.h 00:03:10.569 TEST_HEADER include/spdk/rpc.h 00:03:10.569 TEST_HEADER include/spdk/accel_module.h 00:03:10.569 TEST_HEADER include/spdk/bit_pool.h 00:03:10.569 TEST_HEADER include/spdk/ioat.h 00:03:10.569 CXX test/cpp_headers/blobfs.o 00:03:10.569 LINK idxd_perf 00:03:10.828 CXX test/cpp_headers/notify.o 00:03:10.828 CXX test/cpp_headers/pipe.o 00:03:11.086 CXX test/cpp_headers/accel.o 00:03:11.086 CXX test/cpp_headers/file.o 00:03:11.086 CXX test/cpp_headers/version.o 00:03:11.086 CXX test/cpp_headers/trace_parser.o 00:03:11.086 CXX test/cpp_headers/opal_spec.o 00:03:11.344 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.344 CC examples/nvme/hello_world/hello_world.o 00:03:11.344 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.344 CXX test/cpp_headers/uuid.o 00:03:11.344 CXX test/cpp_headers/likely.o 00:03:11.602 CC examples/accel/perf/accel_perf.o 00:03:11.602 LINK hello_world 00:03:11.602 CXX test/cpp_headers/dif.o 00:03:11.860 LINK vhost_fuzz 00:03:11.860 CC examples/blob/hello_world/hello_blob.o 00:03:11.860 CXX test/cpp_headers/net.o 00:03:12.135 LINK iscsi_fuzz 00:03:12.393 CXX test/cpp_headers/keyring_module.o 00:03:12.393 CXX test/cpp_headers/memory.o 00:03:12.393 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.651 LINK hello_blob 00:03:12.651 LINK accel_perf 00:03:12.651 CXX test/cpp_headers/dma.o 00:03:12.909 CC examples/blob/cli/blobcli.o 00:03:12.909 CXX test/cpp_headers/nbd.o 00:03:12.909 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:12.909 CC examples/nvme/hotplug/hotplug.o 00:03:12.909 CC examples/nvme/arbitration/arbitration.o 00:03:12.909 CC examples/nvme/reconnect/reconnect.o 00:03:12.909 CXX test/cpp_headers/conf.o 00:03:13.167 CXX test/cpp_headers/env_dpdk.o 00:03:13.167 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:13.425 LINK hotplug 00:03:14.356 LINK reconnect 00:03:14.356 CXX test/cpp_headers/nvmf_spec.o 00:03:14.356 CC test/app/histogram_perf/histogram_perf.o 00:03:14.613 CC examples/nvme/abort/abort.o 00:03:14.613 LINK cmb_copy 00:03:14.613 LINK arbitration 00:03:14.613 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:14.613 LINK nvme_manage 00:03:14.613 LINK blobcli 00:03:14.613 LINK histogram_perf 00:03:14.613 CXX test/cpp_headers/iscsi_spec.o 00:03:14.613 LINK pmr_persistence 00:03:14.871 CXX test/cpp_headers/mmio.o 00:03:14.871 LINK abort 00:03:14.871 CXX test/cpp_headers/json.o 00:03:15.130 CXX test/cpp_headers/opal.o 00:03:15.389 CC test/app/jsoncat/jsoncat.o 00:03:15.389 CXX test/cpp_headers/bdev.o 00:03:15.389 CXX test/cpp_headers/keyring.o 00:03:15.389 LINK jsoncat 00:03:15.389 CXX test/cpp_headers/base64.o 00:03:15.648 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.648 CXX test/cpp_headers/blobfs_bdev.o 00:03:15.648 CC test/rpc_client/rpc_client_test.o 00:03:15.648 CC test/event/event_perf/event_perf.o 00:03:15.908 CC test/event/reactor/reactor.o 00:03:15.908 CC test/nvme/aer/aer.o 00:03:15.908 CC test/nvme/reset/reset.o 00:03:15.908 CXX test/cpp_headers/nvme_ocssd.o 00:03:15.908 LINK event_perf 00:03:15.908 LINK rpc_client_test 00:03:15.908 LINK reactor 00:03:15.908 CC test/app/stub/stub.o 00:03:15.908 CXX test/cpp_headers/fd.o 00:03:15.908 LINK mem_callbacks 00:03:16.167 LINK aer 00:03:16.167 LINK reset 00:03:16.167 CXX test/cpp_headers/barrier.o 
00:03:16.167 LINK stub 00:03:16.167 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.167 CXX test/cpp_headers/scsi_spec.o 00:03:16.425 CC test/env/vtophys/vtophys.o 00:03:16.425 CXX test/cpp_headers/zipf.o 00:03:16.425 LINK hello_bdev 00:03:16.425 LINK vtophys 00:03:16.684 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.684 CXX test/cpp_headers/nvmf.o 00:03:16.684 CC test/env/memory/memory_ut.o 00:03:16.684 LINK env_dpdk_post_init 00:03:16.684 CC test/event/reactor_perf/reactor_perf.o 00:03:16.684 CXX test/cpp_headers/queue.o 00:03:16.943 CXX test/cpp_headers/xor.o 00:03:16.943 LINK reactor_perf 00:03:16.943 CXX test/cpp_headers/cpuset.o 00:03:17.202 CXX test/cpp_headers/thread.o 00:03:17.202 CXX test/cpp_headers/bdev_zone.o 00:03:17.202 CC test/env/pci/pci_ut.o 00:03:17.202 CXX test/cpp_headers/fd_group.o 00:03:17.202 CXX test/cpp_headers/tree.o 00:03:17.459 CXX test/cpp_headers/blob_bdev.o 00:03:17.459 CC test/nvme/sgl/sgl.o 00:03:17.459 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.459 CXX test/cpp_headers/crc64.o 00:03:17.717 LINK memory_ut 00:03:17.717 CC test/event/app_repeat/app_repeat.o 00:03:17.717 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:17.717 CC test/accel/dif/dif.o 00:03:17.717 LINK sgl 00:03:17.717 CXX test/cpp_headers/assert.o 00:03:17.717 LINK pci_ut 00:03:17.717 LINK app_repeat 00:03:17.717 LINK histogram_ut 00:03:17.975 CC test/nvme/e2edp/nvme_dp.o 00:03:17.975 CXX test/cpp_headers/nvme_spec.o 00:03:17.975 CC test/nvme/overhead/overhead.o 00:03:17.975 CXX test/cpp_headers/endian.o 00:03:18.233 LINK dif 00:03:18.233 CXX test/cpp_headers/pci_ids.o 00:03:18.233 LINK nvme_dp 00:03:18.233 CC test/unit/lib/log/log.c/log_ut.o 00:03:18.233 LINK overhead 00:03:18.233 CXX test/cpp_headers/log.o 00:03:18.233 LINK bdevperf 00:03:18.491 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.491 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:18.491 LINK log_ut 00:03:18.748 CXX test/cpp_headers/ftl.o 00:03:18.748 CXX test/cpp_headers/config.o 00:03:18.748 CXX test/cpp_headers/vhost.o 00:03:19.005 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:19.005 CXX test/cpp_headers/bdev_module.o 00:03:19.005 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:19.005 CC test/event/scheduler/scheduler.o 00:03:19.262 LINK base64_ut 00:03:19.262 CXX test/cpp_headers/nvme_intel.o 00:03:19.262 LINK common_ut 00:03:19.262 CXX test/cpp_headers/idxd_spec.o 00:03:19.262 LINK scheduler 00:03:19.520 CC test/nvme/err_injection/err_injection.o 00:03:19.520 CXX test/cpp_headers/crc16.o 00:03:19.520 CC test/nvme/startup/startup.o 00:03:19.520 CXX test/cpp_headers/nvme.o 00:03:19.520 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:19.779 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:19.779 LINK err_injection 00:03:19.779 LINK startup 00:03:19.779 CXX test/cpp_headers/stdinc.o 00:03:19.779 CC test/nvme/reserve/reserve.o 00:03:20.043 CXX test/cpp_headers/scsi.o 00:03:20.043 LINK reserve 00:03:20.043 LINK dma_ut 00:03:20.307 LINK bit_array_ut 00:03:20.307 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.307 LINK ioat_ut 00:03:20.566 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:20.566 CXX test/cpp_headers/idxd.o 00:03:20.566 CXX test/cpp_headers/hexlify.o 00:03:20.566 CC test/blobfs/mkfs/mkfs.o 00:03:20.566 CXX test/cpp_headers/reduce.o 00:03:20.824 LINK cpuset_ut 00:03:20.824 CXX test/cpp_headers/crc32.o 00:03:20.824 LINK mkfs 00:03:21.082 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:21.082 CC test/nvme/simple_copy/simple_copy.o 00:03:21.082 CC test/nvme/connect_stress/connect_stress.o 
00:03:21.082 CXX test/cpp_headers/init.o 00:03:21.082 CC test/lvol/esnap/esnap.o 00:03:21.082 LINK crc16_ut 00:03:21.082 CC test/bdev/bdevio/bdevio.o 00:03:21.341 CXX test/cpp_headers/nvmf_transport.o 00:03:21.341 LINK simple_copy 00:03:21.341 LINK connect_stress 00:03:21.341 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:21.341 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:21.341 CXX test/cpp_headers/nvme_zns.o 00:03:21.598 LINK crc32_ieee_ut 00:03:21.598 LINK crc32c_ut 00:03:21.598 CC examples/nvmf/nvmf/nvmf.o 00:03:21.598 LINK bdevio 00:03:21.598 CXX test/cpp_headers/vfio_user_spec.o 00:03:21.854 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:21.855 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:21.855 CXX test/cpp_headers/util.o 00:03:21.855 LINK crc64_ut 00:03:21.855 LINK nvmf 00:03:22.110 CXX test/cpp_headers/jsonrpc.o 00:03:22.110 CXX test/cpp_headers/env.o 00:03:22.110 CC test/unit/lib/util/file.c/file_ut.o 00:03:22.367 CXX test/cpp_headers/nvmf_cmd.o 00:03:22.367 LINK file_ut 00:03:22.367 CC test/nvme/boot_partition/boot_partition.o 00:03:22.367 CXX test/cpp_headers/lvol.o 00:03:22.624 CC test/nvme/compliance/nvme_compliance.o 00:03:22.625 LINK boot_partition 00:03:22.625 CXX test/cpp_headers/histogram_data.o 00:03:22.625 CC test/nvme/fused_ordering/fused_ordering.o 00:03:22.625 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:22.882 CXX test/cpp_headers/event.o 00:03:22.882 LINK fused_ordering 00:03:22.882 LINK nvme_compliance 00:03:22.883 CXX test/cpp_headers/trace.o 00:03:22.883 LINK iov_ut 00:03:23.145 CXX test/cpp_headers/ioat_spec.o 00:03:23.145 LINK dif_ut 00:03:23.145 CXX test/cpp_headers/string.o 00:03:23.417 CC test/unit/lib/util/math.c/math_ut.o 00:03:24.375 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:24.375 LINK math_ut 00:03:24.375 CC test/unit/lib/util/string.c/string_ut.o 00:03:24.375 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:24.375 CC test/unit/lib/util/net.c/net_ut.o 00:03:24.375 CXX test/cpp_headers/ublk.o 00:03:24.375 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:24.375 CXX test/cpp_headers/bit_array.o 00:03:24.375 LINK net_ut 00:03:24.633 CC test/nvme/fdp/fdp.o 00:03:24.633 CXX test/cpp_headers/scheduler.o 00:03:24.633 CC test/nvme/cuse/cuse.o 00:03:24.633 LINK string_ut 00:03:24.633 CXX test/cpp_headers/blob.o 00:03:24.633 LINK doorbell_aers 00:03:24.633 CXX test/cpp_headers/gpt_spec.o 00:03:24.891 LINK xor_ut 00:03:24.891 CXX test/cpp_headers/sock.o 00:03:24.891 CXX test/cpp_headers/vmd.o 00:03:24.891 LINK pipe_ut 00:03:24.891 CXX test/cpp_headers/rpc.o 00:03:24.891 CXX test/cpp_headers/accel_module.o 00:03:25.150 CXX test/cpp_headers/bit_pool.o 00:03:25.150 LINK fdp 00:03:25.150 CXX test/cpp_headers/ioat.o 00:03:25.718 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:25.718 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:25.718 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:25.977 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:25.977 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:25.977 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:26.237 LINK cuse 00:03:26.811 LINK pci_event_ut 00:03:26.811 LINK json_util_ut 00:03:26.811 LINK idxd_user_ut 00:03:27.069 LINK json_write_ut 00:03:27.069 LINK idxd_ut 00:03:28.005 LINK esnap 00:03:28.942 LINK json_parse_ut 00:03:29.208 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:29.509 LINK jsonrpc_server_ut 00:03:30.076 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:31.008 LINK rpc_ut 00:03:31.266 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:31.267 CC 
test/unit/lib/sock/sock.c/sock_ut.o 00:03:31.267 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:31.267 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:31.267 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:31.267 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:31.834 LINK keyring_ut 00:03:32.404 LINK notify_ut 00:03:32.662 LINK iobuf_ut 00:03:32.662 LINK posix_ut 00:03:33.231 LINK sock_ut 00:03:33.798 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:33.798 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:33.798 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:33.798 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:33.798 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:33.798 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:33.798 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:33.798 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:33.798 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:34.057 LINK thread_ut 00:03:34.315 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:34.574 LINK nvme_ns_ut 00:03:34.838 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:34.838 LINK nvme_poll_group_ut 00:03:35.101 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:35.361 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:35.361 LINK nvme_quirks_ut 00:03:35.361 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:35.361 LINK nvme_ctrlr_cmd_ut 00:03:35.620 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:35.879 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:35.879 LINK nvme_ns_ocssd_cmd_ut 00:03:36.138 LINK nvme_qpair_ut 00:03:36.138 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:36.138 LINK nvme_ut 00:03:36.397 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:36.397 LINK nvme_io_msg_ut 00:03:36.657 LINK nvme_ns_cmd_ut 00:03:36.657 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:36.657 LINK nvme_transport_ut 00:03:36.657 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:37.224 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:37.224 LINK nvme_pcie_ut 00:03:37.224 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:37.224 LINK nvme_opal_ut 00:03:37.484 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:37.484 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:37.484 LINK nvme_pcie_common_ut 00:03:38.143 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:38.143 LINK nvme_fabric_ut 00:03:38.712 LINK subsystem_ut 00:03:38.712 LINK rpc_ut 00:03:38.712 LINK blob_bdev_ut 00:03:38.971 LINK nvme_cuse_ut 00:03:39.230 LINK nvme_ctrlr_ut 00:03:39.489 LINK nvme_tcp_ut 00:03:39.489 CC test/unit/lib/event/app.c/app_ut.o 00:03:39.489 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:40.055 LINK nvme_rdma_ut 00:03:40.619 LINK app_ut 00:03:40.931 LINK accel_ut 00:03:40.931 LINK reactor_ut 00:03:41.187 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:41.187 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:41.187 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:41.187 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:41.187 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:41.187 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:41.444 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:41.444 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:41.444 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:41.701 LINK bdev_zone_ut 00:03:41.701 LINK scsi_nvme_ut 00:03:41.701 CC 
test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:41.958 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:41.958 LINK gpt_ut 00:03:42.215 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:42.787 LINK vbdev_zone_block_ut 00:03:42.787 LINK vbdev_lvol_ut 00:03:43.045 LINK bdev_raid_sb_ut 00:03:43.045 LINK concat_ut 00:03:43.045 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:43.303 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:43.303 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:44.240 LINK raid1_ut 00:03:44.240 LINK bdev_raid_ut 00:03:44.240 LINK raid0_ut 00:03:44.808 LINK raid5f_ut 00:03:45.745 LINK part_ut 00:03:45.745 LINK bdev_ut 00:03:46.313 LINK blob_ut 00:03:46.880 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:46.880 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:46.880 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:46.880 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:46.880 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:47.139 LINK blobfs_bdev_ut 00:03:47.139 LINK bdev_nvme_ut 00:03:47.139 LINK tree_ut 00:03:47.706 LINK bdev_ut 00:03:47.965 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:47.965 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:47.965 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:47.965 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:47.965 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:47.965 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:48.224 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:48.482 LINK blobfs_async_ut 00:03:48.482 LINK blobfs_sync_ut 00:03:48.739 LINK scsi_ut 00:03:48.740 LINK ftl_l2p_ut 00:03:48.740 LINK dev_ut 00:03:48.998 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:48.998 LINK scsi_pr_ut 00:03:48.998 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:48.998 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:48.998 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:48.998 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:49.257 LINK lun_ut 00:03:49.257 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:03:49.257 LINK lvol_ut 00:03:49.516 LINK scsi_bdev_ut 00:03:49.516 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:49.773 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:49.773 LINK ftl_bitmap_ut 00:03:50.047 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:50.047 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:50.047 LINK ftl_io_ut 00:03:50.319 LINK ftl_mempool_ut 00:03:50.319 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:50.577 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:50.577 LINK ftl_p2l_ut 00:03:50.835 LINK ftl_mngt_ut 00:03:50.835 LINK ftl_band_ut 00:03:51.094 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:51.094 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:51.094 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:51.352 LINK ctrlr_discovery_ut 00:03:51.611 LINK subsystem_ut 00:03:51.611 LINK conn_ut 00:03:51.611 LINK init_grp_ut 00:03:51.869 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:51.869 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:52.127 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:52.127 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:52.127 LINK ftl_sb_ut 00:03:52.386 LINK ctrlr_ut 00:03:52.386 LINK param_ut 00:03:52.645 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:52.645 LINK ftl_layout_upgrade_ut 00:03:52.645 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:52.645 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:52.903 CC 
test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:53.162 LINK vhost_ut 00:03:53.162 LINK portal_grp_ut 00:03:53.162 LINK ctrlr_bdev_ut 00:03:53.421 LINK tgt_node_ut 00:03:53.680 LINK tcp_ut 00:03:54.248 LINK nvmf_ut 00:03:54.508 LINK iscsi_ut 00:03:54.508 LINK auth_ut 00:03:57.037 LINK transport_ut 00:03:57.037 LINK rdma_ut 00:03:57.037 00:03:57.037 real 2m20.581s 00:03:57.037 user 10m47.258s 00:03:57.037 sys 1m56.267s 00:03:57.037 21:17:30 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:57.037 21:17:30 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:57.037 ************************************ 00:03:57.037 END TEST unittest_build 00:03:57.037 ************************************ 00:03:57.296 21:17:30 -- common/autotest_common.sh@1142 -- $ return 0 00:03:57.296 21:17:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:57.296 21:17:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:57.296 21:17:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:57.296 21:17:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.296 21:17:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:57.296 21:17:30 -- pm/common@44 -- $ pid=2480 00:03:57.296 21:17:30 -- pm/common@50 -- $ kill -TERM 2480 00:03:57.296 21:17:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.296 21:17:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:57.296 21:17:30 -- pm/common@44 -- $ pid=2481 00:03:57.296 21:17:30 -- pm/common@50 -- $ kill -TERM 2481 00:03:57.296 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:57.296 21:17:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:57.296 21:17:30 -- nvmf/common.sh@7 -- # uname -s 00:03:57.296 21:17:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.296 21:17:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.296 21:17:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.296 21:17:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.296 21:17:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.296 21:17:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.296 21:17:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.296 21:17:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.296 21:17:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.296 21:17:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.296 21:17:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3fae022e-ba69-499b-af9a-0101d3eff93b 00:03:57.296 21:17:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=3fae022e-ba69-499b-af9a-0101d3eff93b 00:03:57.296 21:17:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.296 21:17:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.296 21:17:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.296 21:17:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.296 21:17:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:57.296 21:17:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.296 21:17:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.296 21:17:30 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.296 21:17:30 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:57.296 21:17:30 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:57.296 21:17:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:57.296 21:17:30 -- paths/export.sh@5 -- # export PATH 00:03:57.296 21:17:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:57.296 21:17:30 -- nvmf/common.sh@47 -- # : 0 00:03:57.296 21:17:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:57.296 21:17:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:57.296 21:17:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.296 21:17:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.296 21:17:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.296 21:17:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:57.296 21:17:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:57.296 21:17:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:57.296 21:17:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.296 21:17:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:57.296 21:17:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:57.296 21:17:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:57.296 21:17:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:57.296 21:17:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.296 21:17:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:57.296 21:17:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:57.864 21:17:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:57.864 21:17:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:57.864 21:17:31 -- spdk/autotest.sh@48 -- # udevadm_pid=99274 00:03:57.864 21:17:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:57.864 21:17:31 -- pm/common@17 -- # local monitor 00:03:57.864 21:17:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.864 21:17:31 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:57.864 21:17:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.864 21:17:31 -- pm/common@25 -- # sleep 1 00:03:57.864 21:17:31 -- pm/common@21 -- # date +%s 00:03:57.864 21:17:31 -- pm/common@21 -- # date +%s 00:03:57.864 21:17:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p 
monitor.autotest.sh.1721078251 00:03:57.864 21:17:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721078251 00:03:57.864 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721078251_collect-vmstat.pm.log 00:03:57.864 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721078251_collect-cpu-load.pm.log 00:03:58.863 21:17:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:58.863 21:17:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:58.863 21:17:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.863 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:03:58.863 21:17:32 -- spdk/autotest.sh@59 -- # create_test_list 00:03:58.863 21:17:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:58.863 21:17:32 -- common/autotest_common.sh@10 -- # set +x 00:03:58.863 21:17:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:58.863 21:17:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:58.863 21:17:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:58.863 21:17:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:58.863 21:17:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:58.863 21:17:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:58.863 21:17:32 -- common/autotest_common.sh@1455 -- # uname 00:03:58.863 21:17:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:58.863 21:17:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:58.863 21:17:32 -- common/autotest_common.sh@1475 -- # uname 00:03:58.863 21:17:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:58.863 21:17:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:58.863 21:17:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:58.863 21:17:32 -- spdk/autotest.sh@72 -- # hash lcov 00:03:58.863 21:17:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:58.863 21:17:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:58.863 --rc lcov_branch_coverage=1 00:03:58.863 --rc lcov_function_coverage=1 00:03:58.863 --rc genhtml_branch_coverage=1 00:03:58.863 --rc genhtml_function_coverage=1 00:03:58.863 --rc genhtml_legend=1 00:03:58.863 --rc geninfo_all_blocks=1 00:03:58.863 ' 00:03:58.863 21:17:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:58.863 --rc lcov_branch_coverage=1 00:03:58.863 --rc lcov_function_coverage=1 00:03:58.863 --rc genhtml_branch_coverage=1 00:03:58.863 --rc genhtml_function_coverage=1 00:03:58.863 --rc genhtml_legend=1 00:03:58.863 --rc geninfo_all_blocks=1 00:03:58.863 ' 00:03:58.863 21:17:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:58.863 --rc lcov_branch_coverage=1 00:03:58.863 --rc lcov_function_coverage=1 00:03:58.863 --rc genhtml_branch_coverage=1 00:03:58.863 --rc genhtml_function_coverage=1 00:03:58.863 --rc genhtml_legend=1 00:03:58.863 --rc geninfo_all_blocks=1 00:03:58.863 --no-external' 00:03:58.863 21:17:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:58.863 --rc lcov_branch_coverage=1 00:03:58.863 --rc lcov_function_coverage=1 00:03:58.863 --rc genhtml_branch_coverage=1 00:03:58.863 --rc genhtml_function_coverage=1 00:03:58.863 --rc genhtml_legend=1 00:03:58.863 --rc geninfo_all_blocks=1 00:03:58.863 --no-external' 00:03:58.863 21:17:32 -- 
spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:59.122 lcov: LCOV version 1.15 00:03:59.122 21:17:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:01.027 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:01.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:01.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:01.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:01.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:01.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:01.028 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:01.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:01.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:01.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:01.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:01.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:01.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:01.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:01.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:01.286 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:01.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:01.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:01.287 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:01.287 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:01.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:01.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:57.833 21:18:24 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:57.833 21:18:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.833 21:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.833 21:18:24 -- spdk/autotest.sh@91 -- # rm -f 00:04:57.833 21:18:24 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:57.833 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:57.833 21:18:25 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:57.833 21:18:25 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:57.833 21:18:25 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:57.833 21:18:25 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:57.833 21:18:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:57.833 21:18:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:57.833 21:18:25 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:57.833 21:18:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.833 21:18:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:57.833 21:18:25 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:57.833 21:18:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.833 21:18:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:57.833 21:18:25 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme0n1 00:04:57.833 21:18:25 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:57.833 21:18:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:57.833 No valid GPT data, bailing 00:04:57.833 21:18:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.833 21:18:25 -- scripts/common.sh@391 -- # pt= 00:04:57.833 21:18:25 -- scripts/common.sh@392 -- # return 1 00:04:57.833 21:18:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:57.833 1+0 records in 00:04:57.833 1+0 records out 00:04:57.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255014 s, 41.1 MB/s 00:04:57.833 21:18:25 -- spdk/autotest.sh@118 -- # sync 00:04:57.833 21:18:25 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:57.833 21:18:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:57.833 21:18:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:57.833 21:18:26 -- spdk/autotest.sh@124 -- # uname -s 00:04:57.833 21:18:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:57.833 21:18:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:57.833 21:18:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.833 21:18:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.833 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:04:57.833 ************************************ 00:04:57.833 START TEST setup.sh 00:04:57.833 ************************************ 00:04:57.833 21:18:26 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:57.833 * Looking for test storage... 00:04:57.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.833 21:18:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:57.833 21:18:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:57.833 21:18:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:57.833 21:18:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.833 21:18:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.833 21:18:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:57.833 ************************************ 00:04:57.833 START TEST acl 00:04:57.833 ************************************ 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:57.833 * Looking for test storage... 
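[editor's note] The pre_cleanup trace above checks each NVMe namespace for zoned support, probes it for an existing partition table, and only then zeroes its first MiB. A minimal sketch of that flow under standard tooling (sysfs, blkid, dd) rather than the repo's is_block_zoned/block_in_use helpers:

    # Sketch only: skip zoned namespaces, then wipe the first MiB of any
    # namespace that has no recognizable partition table (as the trace does).
    for nvme in /dev/nvme*n1; do
        name=$(basename "$nvme")
        # A zoned namespace reports something other than "none" here.
        if [[ -e /sys/block/$name/queue/zoned ]] && [[ $(cat /sys/block/$name/queue/zoned) != none ]]; then
            echo "skipping zoned device $nvme"
            continue
        fi
        # blkid prints the partition-table type (e.g. gpt) if one exists.
        if [[ -z $(blkid -s PTTYPE -o value "$nvme") ]]; then
            dd if=/dev/zero of="$nvme" bs=1M count=1   # clear stale metadata
        fi
    done
    sync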
00:04:57.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.833 21:18:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.833 21:18:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:57.833 21:18:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:57.833 21:18:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:57.833 21:18:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:57.833 21:18:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:57.833 21:18:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:57.833 21:18:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.833 21:18:26 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.833 21:18:27 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.833 21:18:27 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.833 Hugepages 00:04:57.833 node hugesize free / total 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.833 21:18:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:57.834 21:18:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:57.834 21:18:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.834 00:04:57.834 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:57.834 21:18:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:57.834 21:18:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:57.834 21:18:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.834 21:18:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.834 21:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:57.834 21:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:57.834 21:18:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:57.834 21:18:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:57.834 
21:18:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:57.834 21:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.834 21:18:28 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:57.834 21:18:28 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:57.834 21:18:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.834 21:18:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.834 21:18:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:57.834 ************************************ 00:04:57.834 START TEST denied 00:04:57.834 ************************************ 00:04:57.834 21:18:28 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:57.834 21:18:28 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:57.834 21:18:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:57.834 21:18:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:57.834 21:18:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.834 21:18:28 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:57.834 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.834 21:18:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.834 00:04:57.834 real 0m1.825s 00:04:57.834 user 0m0.515s 00:04:57.834 sys 0m1.371s 00:04:57.834 21:18:29 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.834 21:18:29 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:57.834 ************************************ 00:04:57.834 END TEST denied 00:04:57.834 ************************************ 00:04:57.834 21:18:29 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:57.834 21:18:29 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:57.834 21:18:29 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.834 21:18:29 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.834 21:18:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:57.834 ************************************ 00:04:57.834 START TEST allowed 00:04:57.834 ************************************ 00:04:57.834 21:18:29 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:57.834 21:18:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:57.834 21:18:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output 
config 00:04:57.834 21:18:29 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:57.834 21:18:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.834 21:18:29 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.093 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.093 21:18:31 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:58.093 21:18:31 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:58.093 21:18:31 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:58.093 21:18:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.093 21:18:31 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.660 00:04:58.660 real 0m2.004s 00:04:58.660 user 0m0.492s 00:04:58.660 sys 0m1.495s 00:04:58.660 21:18:31 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.661 21:18:31 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:58.661 ************************************ 00:04:58.661 END TEST allowed 00:04:58.661 ************************************ 00:04:58.921 21:18:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:58.921 00:04:58.921 real 0m5.267s 00:04:58.921 user 0m1.759s 00:04:58.921 sys 0m3.607s 00:04:58.921 21:18:32 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.921 21:18:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:58.921 ************************************ 00:04:58.921 END TEST acl 00:04:58.921 ************************************ 00:04:58.921 21:18:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:58.921 21:18:32 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:58.921 21:18:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.921 21:18:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.921 21:18:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.921 ************************************ 00:04:58.921 START TEST hugepages 00:04:58.921 ************************************ 00:04:58.921 21:18:32 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:58.921 * Looking for test storage... 
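[editor's note] The acl test above exercises scripts/setup.sh device filtering: with PCI_BLOCKED set, the controller is left alone ("Skipping denied controller at 0000:00:10.0"); with PCI_ALLOWED set, it is rebound to a userspace driver ("nvme -> uio_pci_generic"). A condensed sketch of those two scenarios, using this run's BDF as an example value (the real test wraps this in run_test and typically needs root):

    cd /home/vagrant/spdk_repo/spdk

    # "denied": a blocked controller must be skipped by setup.sh config.
    PCI_BLOCKED="0000:00:10.0" scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:00:10.0'

    # "allowed": only the allowed controller is rebound to a userspace driver.
    scripts/setup.sh reset
    PCI_ALLOWED="0000:00:10.0" scripts/setup.sh config \
        | grep -E '0000:00:10.0 .*: nvme -> .*'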
00:04:58.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 2775604 kB' 'MemAvailable: 7408152 kB' 'Buffers: 37808 kB' 'Cached: 4710472 kB' 'SwapCached: 0 kB' 'Active: 1241836 kB' 'Inactive: 3634948 kB' 'Active(anon): 137520 kB' 'Inactive(anon): 1808 kB' 'Active(file): 1104316 kB' 'Inactive(file): 3633140 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 720 kB' 'Writeback: 0 kB' 'AnonPages: 147216 kB' 'Mapped: 73172 kB' 'Shmem: 2624 kB' 'KReclaimable: 217164 kB' 'Slab: 308908 kB' 'SReclaimable: 217164 kB' 'SUnreclaim: 91744 kB' 'KernelStack: 4600 kB' 'PageTables: 3464 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028396 kB' 'Committed_AS: 632168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14288 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.921 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.922 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:58.923 21:18:32 setup.sh.hugepages -- 
setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:58.923 21:18:32 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:58.923 21:18:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.923 21:18:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.923 21:18:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.923 ************************************ 00:04:58.923 START TEST default_setup 00:04:58.923 ************************************ 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 
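[editor's note] The hugepages trace above reduces to simple arithmetic: the requested test size (2097152 kB) divided by Hugepagesize from /proc/meminfo (2048 kB on this VM) gives 1024 pages for the single NUMA node, after clearing any stale per-node reservations. A condensed sketch of that calculation outside the framework's get_test_nr_hugepages/clear_hp helpers (needs root to write the counters):

    size_kb=2097152                                                      # requested test size in kB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 in this run
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024

    # Clear stale per-node reservations, then request the new count globally.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
    echo "$nr_hugepages" > /proc/sys/vm/nr_hugepages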
00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.923 21:18:32 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:59.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:59.490 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.072 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874468 kB' 'MemAvailable: 9507104 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242280 kB' 'Inactive: 3634904 kB' 'Active(anon): 137888 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104392 kB' 'Inactive(file): 3633104 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 728 kB' 'Writeback: 0 kB' 'AnonPages: 147468 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 308980 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91768 kB' 'KernelStack: 4464 kB' 'PageTables: 3460 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 643548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -> continue, for every /proc/meminfo key from MemTotal through HardwareCorrupted
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
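The xtrace above is a per-key lookup: the helper snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a NUMA node is requested), strips any "Node <N> " prefix, and reads "key: value" pairs until the requested key matches; the kB suffix lands in a discarded field, which is why a bare "0" is echoed. The sketch below is a simplified reconstruction of that pattern, not the SPDK helper itself; names and structure are illustrative.

#!/usr/bin/env bash
# Illustrative sketch of the get_meminfo pattern traced above (not the SPDK code).
shopt -s extglob
get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local mem var val _ line
        # Prefer the per-node meminfo file when a NUMA node number is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <N> "; strip it (needs extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
}
get_meminfo AnonHugePages   # prints e.g. 0; the "kB" unit falls into the discarded field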
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:00.073 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874728 kB' 'MemAvailable: 9507364 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242540 kB' 'Inactive: 3634904 kB' 'Active(anon): 138148 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104392 kB' 'Inactive(file): 3633104 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 728 kB' 'Writeback: 0 kB' 'AnonPages: 147728 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 308980 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91768 kB' 'KernelStack: 4464 kB' 'PageTables: 3460 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 643548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
00:05:00.074 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -> continue, for every /proc/meminfo key from MemTotal through HugePages_Rsvd
00:05:00.075 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.075 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:00.075 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:00.075 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
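Each of these lookups re-reads the whole of /proc/meminfo just to fetch one counter. For a quick manual check on a build host, the same hugepage counters can be collected in a single pass; the snippet below is an illustrative alternative and is not part of the SPDK test scripts.

#!/usr/bin/env bash
# Illustrative only: gather all HugePages_* counters from /proc/meminfo in one pass.
declare -A hp
while IFS=': ' read -r key val _; do
        [[ $key == HugePages_* ]] && hp[$key]=$val
done < /proc/meminfo
for key in HugePages_Total HugePages_Free HugePages_Rsvd HugePages_Surp; do
        printf '%s=%s\n' "$key" "${hp[$key]:-0}"
done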
00:05:00.075 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.075 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=HugePages_Rsvd; local node=; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:05:00.075 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874728 kB' 'MemAvailable: 9507364 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242800 kB' 'Inactive: 3634904 kB' 'Active(anon): 138408 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104392 kB' 'Inactive(file): 3633104 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 732 kB' 'Writeback: 0 kB' 'AnonPages: 147988 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 308980 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91768 kB' 'KernelStack: 4464 kB' 'PageTables: 3460 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 649500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
00:05:00.075 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -> continue, for every /proc/meminfo key from MemTotal through HugePages_Free
00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:00.076 nr_hugepages=1024
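The run above ends up with nr_hugepages=1024 and no reserved or surplus pages. On a stock Linux host the equivalent default-size pool is normally requested through vm.nr_hugepages and then confirmed against /proc/meminfo; the sketch below uses those standard kernel interfaces as an assumption about the environment, is not taken from this log, and needs root.

#!/usr/bin/env bash
# Illustrative only: request 1024 default-size hugepages and confirm the kernel granted them.
set -euo pipefail
want=1024
sysctl -w vm.nr_hugepages="$want" >/dev/null   # equivalent to: echo "$want" > /proc/sys/vm/nr_hugepages
got=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( got == want )); then
        echo "hugepage pool ready: $got pages of $(awk '/^Hugepagesize:/ {print $2, $3}' /proc/meminfo)"
else
        echo "only $got of $want hugepages allocated (memory may be too fragmented)" >&2
        exit 1
fi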
resv_hugepages=0 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.076 surplus_hugepages=0 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.076 anon_hugepages=0 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874744 kB' 'MemAvailable: 9507380 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242364 kB' 'Inactive: 3634904 kB' 'Active(anon): 137972 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104392 kB' 'Inactive(file): 3633104 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 732 kB' 'Writeback: 0 kB' 'AnonPages: 147396 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 308980 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91768 kB' 'KernelStack: 4484 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 648224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14336 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.076 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
[xtrace collapsed: the loop at setup/common.sh@31-32 steps through every remaining key of the meminfo dump printed above; each non-matching key falls through to continue until HugePages_Total is reached]
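The field-by-field scan recorded above amounts to looking up one meminfo key. A minimal re-creation, assuming bash plus awk on the test VM and using illustrative names rather than the actual setup/common.sh source:

# Minimal sketch (not the real setup/common.sh code): look up one key from
# /proc/meminfo, or from a node's meminfo file when a node id is given.
get_meminfo_sketch() {
    local key=$1 node=${2:-} mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix each line with "Node <id> "; strip it, then print
    # the value that follows "<key>:".
    awk -v k="$key" '{ sub(/^Node [0-9]+ /, ""); if ($1 == k":") { print $2; exit } }' "$mem_f"
}
# e.g. get_meminfo_sketch HugePages_Total   -> 1024 on this VM
#      get_meminfo_sketch HugePages_Surp 0  -> 0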
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.077 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874744 kB' 'MemUsed: 7376352 kB' 'Active: 1242624 kB' 'Inactive: 3634904 kB' 'Active(anon): 138232 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104392 kB' 'Inactive(file): 3633104 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 732 kB' 'Writeback: 0 kB' 'FilePages: 4748316 kB' 'Mapped: 73116 kB' 'AnonPages: 147656 kB' 'Shmem: 2616 kB' 'KernelStack: 4552 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217212 kB' 'Slab: 308980 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.101 21:18:33 setup.sh.hugepages.default_setup 
[xtrace collapsed: the same per-field scan repeats over the node0 meminfo dump above, skipping every key that is not HugePages_Surp; the tail of the scan and the matching entry follow]
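The scan collapsed here feeds a small piece of per-node bookkeeping that concludes just below. Condensed, with values taken from this trace and illustrative variable names (not the literal hugepages.sh flow, which seeds nodes_test earlier):

# node0 reports HugePages_Total 1024; resv and surp were both read as 0 above
nodes_test=(1024)                     # illustrative seed: hugepages observed on node0
resv=0 surp=0
(( nodes_test[0] += resv + surp ))    # both adjustments are 0 in this run
echo "node0=${nodes_test[0]} expecting 1024"   # matches the line printed below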
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.102 21:18:33 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:00.102 node0=1024 expecting 1024 00:05:00.102 21:18:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:00.102 00:05:00.102 real 0m1.151s 00:05:00.102 user 0m0.298s 00:05:00.102 sys 0m0.827s 00:05:00.103 21:18:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.103 21:18:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:00.103 ************************************ 00:05:00.103 END TEST default_setup 00:05:00.103 ************************************ 00:05:00.384 21:18:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:00.384 21:18:33 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:00.384 21:18:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.384 21:18:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.384 21:18:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.384 ************************************ 00:05:00.384 START TEST per_node_1G_alloc 00:05:00.384 ************************************ 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.384 21:18:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.384 21:18:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:00.642 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5922664 kB' 
'MemAvailable: 10555304 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1243016 kB' 'Inactive: 3634912 kB' 'Active(anon): 138628 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104388 kB' 'Inactive(file): 3633112 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 740 kB' 'Writeback: 0 kB' 'AnonPages: 148204 kB' 'Mapped: 73352 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 309188 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91976 kB' 'KernelStack: 4576 kB' 'PageTables: 3708 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 641968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.906 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.907 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.907 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.907 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.907 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.907 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- 
[xtrace collapsed: the per-field scan repeats over the system-wide meminfo dump above, skipping every key that is not AnonHugePages; the tail of the scan and the matching entry follow]
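What this scan is after is the AnonHugePages counter: verify_nr_hugepages only probes anonymous (transparent) huge pages when THP is not disabled, which is what the 'always [madvise] never' test above checks. A rough stand-alone equivalent, illustrative rather than the script's own code:

# Report anonymous (transparent) hugepage usage unless THP is set to [never].
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    awk '/^AnonHugePages:/ {print "anon_hugepages=" $2 " kB"}' /proc/meminfo   # 0 kB in this run
fi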
setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 
00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5922676 kB' 'MemAvailable: 10555316 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242752 kB' 'Inactive: 3634880 kB' 'Active(anon): 138332 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104420 kB' 'Inactive(file): 3633080 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 744 kB' 'Writeback: 0 kB' 'AnonPages: 148152 kB' 'Mapped: 73304 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 309232 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 92020 kB' 'KernelStack: 4532 kB' 'PageTables: 3424 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 641968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.908 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
[xtrace elided: setup/common.sh@31-@32 scans every field of the snapshot above against HugePages_Surp; each non-matching field takes the "continue" branch]
00:05:00.910 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.910 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.910 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:00.910 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:00.910 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.910 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[xtrace elided: setup/common.sh@18-@31 repeats the same local-variable setup and mapfile of /proc/meminfo shown above]
00:05:00.911 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5922976 kB' 'MemAvailable: 10555616 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242604 kB' 'Inactive: 3634880 kB' 'Active(anon): 138184 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104420 kB' 'Inactive(file): 3633080 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 744 kB' 'Writeback: 0 kB' 'AnonPages: 147848 kB' 'Mapped: 73304 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 309200 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91988 kB' 'KernelStack: 4516 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 646796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14336 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
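As a quick cross-check of the snapshot just printed, the huge page pool accounts line up: HugePages_Total multiplied by Hugepagesize equals the reported Hugetlb figure, and HugePages_Free equals HugePages_Total, so no pages are in use yet. The figures below are taken verbatim from this run's trace:

  # Figures from the /proc/meminfo snapshot above (this run only):
  #   HugePages_Total: 512   Hugepagesize: 2048 kB   Hugetlb: 1048576 kB
  echo $((512 * 2048))   # prints 1048576 -> pool size in kB matches the Hugetlb line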
[xtrace elided: setup/common.sh@31-@32 scans every field of the snapshot above against HugePages_Rsvd; each non-matching field takes the "continue" branch]
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:00.913 nr_hugepages=512
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:00.913 resv_hugepages=0
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:00.913 surplus_hugepages=0
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:00.913 anon_hugepages=0
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
[xtrace elided: setup/common.sh@18-@31 repeats the same local-variable setup and mapfile of /proc/meminfo shown above]
00:05:00.913 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5923048 kB' 'MemAvailable: 10555688 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242480 kB' 'Inactive: 3634880 kB' 'Active(anon): 138060 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104420 kB' 'Inactive(file): 3633080 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 744 kB' 'Writeback: 0 kB' 'AnonPages: 147612 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 309200 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91988 kB' 'KernelStack: 4500 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 646796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14336 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
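A minimal sketch of the bookkeeping hugepages.sh performs at this point, reconstructed from the @102-@110 trace entries above (illustrative only; variable names follow the trace, the echoed values are the ones this run reports, and the earlier get_meminfo sketch is assumed for the lookups):

  # Values as reported by the trace: the test requested a pool of 512 huge pages.
  nr_hugepages=512
  anon=0   # get_meminfo AnonHugePages
  surp=0   # get_meminfo HugePages_Surp
  resv=0   # get_meminfo HugePages_Rsvd

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # The assertions at hugepages.sh@107 and @109: the configured pool must be
  # fully accounted for, and no surplus or reserved pages are expected here.
  (( 512 == nr_hugepages + surp + resv )) || exit 1
  (( 512 == nr_hugepages )) || exit 1

  # hugepages.sh@110 then re-reads HugePages_Total (512 in the snapshot above).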
[xtrace elided: setup/common.sh@31-@32 scans every field of the snapshot above against HugePages_Total; each non-matching field takes the "continue" branch and the scan continues past this point]
_ 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.915 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5923316 kB' 'MemUsed: 6327780 kB' 'Active: 1242408 kB' 'Inactive: 3634880 kB' 'Active(anon): 137988 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104420 kB' 'Inactive(file): 3633080 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 744 kB' 'Writeback: 0 kB' 'FilePages: 4748316 kB' 'Mapped: 
73116 kB' 'AnonPages: 147780 kB' 'Shmem: 2616 kB' 'KernelStack: 4452 kB' 'PageTables: 3332 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217212 kB' 'Slab: 309200 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.916 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 
21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:00.917 node0=512 expecting 512 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:00.917 00:05:00.917 real 0m0.768s 00:05:00.917 user 0m0.296s 00:05:00.917 sys 0m0.515s 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.917 21:18:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.917 ************************************ 00:05:00.917 END TEST per_node_1G_alloc 00:05:00.917 ************************************ 00:05:01.176 21:18:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:01.176 21:18:34 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:01.176 21:18:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.176 21:18:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.176 21:18:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.176 ************************************ 00:05:01.176 START TEST even_2G_alloc 00:05:01.176 ************************************ 00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.177 21:18:34 
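The check that just completed reads HugePages_Total out of /sys/devices/system/node/node0/meminfo and compares it with the count expected on that node ("node0=512 expecting 512"). A minimal stand-alone sketch of the same idea, assuming a fixed expected count and using awk instead of the script's own get_meminfo helper (variable names here are illustrative, not SPDK's):

  #!/usr/bin/env bash
  # Compare every NUMA node's hugepage count with an expected value,
  # mirroring the node0=512 check in the trace above.
  expected_per_node=512
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
      total=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
      echo "node${node}=${total} expecting ${expected_per_node}"
      [[ $total -eq $expected_per_node ]] || exit 1
  done

On a single-node VM like the one in this log the loop runs once and exits 0 when the kernel reports all 512 pages on node0.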
00:05:01.176 21:18:34 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:01.176 21:18:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:01.176 21:18:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:01.176 21:18:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:01.176 ************************************
00:05:01.176 START TEST even_2G_alloc
00:05:01.176 ************************************
00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/hugepages.sh@50-55 checks the size against the default hugepage size]
00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:01.176 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/hugepages.sh@62-81 sees no user-supplied nodes and a single system node]
00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.177 21:18:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:01.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:01.436 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
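In the even_2G_alloc setup above, get_test_nr_hugepages 2097152 becomes nr_hugepages=1024, which matches a 2 GiB request divided by the 2048 kB hugepage size reported in the meminfo snapshots below. A rough sketch of that arithmetic, assuming the request is expressed in KiB (an illustration of the calculation, not the SPDK helper itself):

  #!/usr/bin/env bash
  # Derive a hugepage count from a requested size and the kernel's
  # default hugepage size, as traced for get_test_nr_hugepages 2097152.
  request_kb=2097152                                  # 2 GiB expressed in KiB
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  nr_hugepages=$(( request_kb / hugepage_kb ))
  echo "NRHUGE=${nr_hugepages}"                       # 2097152 / 2048 = 1024

That count, exported as NRHUGE=1024 together with HUGE_EVEN_ALLOC=yes, is what the trace hands to scripts/setup.sh; with a single node the whole allocation lands on node0 (nodes_test[0]=1024), as the test name suggests an even per-node split would.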
00:05:02.008 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:02.008 21:18:35 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/hugepages.sh@89-94 declares node, sorted_t, sorted_s, surp, resv and anon]
00:05:02.008 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:02.008 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:02.008 21:18:35 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/common.sh@17-29 keeps mem_f=/proc/meminfo (no node argument) and loads it via mapfile]
00:05:02.008 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4873756 kB' 'MemAvailable: 9506396 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242788 kB' 'Inactive: 3634812 kB' 'Active(anon): 138300 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104488 kB' 'Inactive(file): 3633012 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 748 kB' 'Writeback: 0 kB' 'AnonPages: 147820 kB' 'Mapped: 73424 kB' 'Shmem: 2616 kB' 'KReclaimable: 217212 kB' 'Slab: 308756 kB' 'SReclaimable: 217212 kB' 'SUnreclaim: 91544 kB' 'KernelStack: 4528 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 635820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
00:05:02.009 21:18:35 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/common.sh@31-32 scans the fields until AnonHugePages matches]
00:05:02.010 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.010 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:02.010 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
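Every get_meminfo call traced in this test follows the same pattern: pick /proc/meminfo or a node's meminfo file, load it with mapfile, strip the leading "Node N" prefix, then read each line as "field: value" until the requested field matches. A condensed re-creation of that pattern, assembled from the trace rather than copied verbatim from setup/common.sh:

  #!/usr/bin/env bash
  shopt -s extglob
  # Look up one meminfo field, optionally scoped to a NUMA node.
  get_meminfo() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo
      # With a node argument, prefer that node's meminfo file.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }
  get_meminfo AnonHugePages    # prints 0 for the snapshot above
  get_meminfo HugePages_Total  # prints 1024 for the snapshot above

Called this way it reproduces the anon=0 result just recorded and the HugePages_Surp lookup that follows.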
00:05:02.010 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:02.010 21:18:35 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/common.sh@17-29 again keeps mem_f=/proc/meminfo and reloads it via mapfile]
00:05:02.010 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4873644 kB' 'MemAvailable: 9506300 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1243220 kB' 'Inactive: 3634812 kB' 'Active(anon): 138732 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104488 kB' 'Inactive(file): 3633012 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 748 kB' 'Writeback: 0 kB' 'AnonPages: 148252 kB' 'Mapped: 73328 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 308772 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91544 kB' 'KernelStack: 4564 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 640648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/common.sh@31-32 scans the fields above for HugePages_Surp]
00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874424 kB' 'MemAvailable: 9507080 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242652 kB' 'Inactive: 3634812 kB' 'Active(anon): 138164 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104488 kB' 'Inactive(file): 3633012 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 752 kB' 'Writeback: 0 kB' 'AnonPages: 147860 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 308920 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91692 kB' 'KernelStack: 4512 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 640648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.011 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.012 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.013 nr_hugepages=1024 00:05:02.013 resv_hugepages=0 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.013 surplus_hugepages=0 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.013 anon_hugepages=0 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874684 kB' 'MemAvailable: 9507340 kB' 'Buffers: 37808 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1242508 kB' 'Inactive: 3634812 kB' 'Active(anon): 138020 kB' 'Inactive(anon): 1800 kB' 
'Active(file): 1104488 kB' 'Inactive(file): 3633012 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 752 kB' 'Writeback: 0 kB' 'AnonPages: 147444 kB' 'Mapped: 73116 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 308912 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91684 kB' 'KernelStack: 4532 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 646600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.013 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.014 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874644 kB' 'MemUsed: 7376452 kB' 'Active: 1242768 kB' 'Inactive: 3634812 kB' 'Active(anon): 138280 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104488 kB' 'Inactive(file): 3633012 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 752 kB' 'Writeback: 0 kB' 'FilePages: 4748316 kB' 'Mapped: 73116 kB' 'AnonPages: 147704 kB' 'Shmem: 2616 kB' 'KernelStack: 4532 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217228 kB' 'Slab: 308912 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 
21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.015 21:18:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.015 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.016 node0=1024 expecting 1024 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.016 00:05:02.016 real 0m0.986s 00:05:02.016 user 0m0.258s 00:05:02.016 sys 0m0.770s 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.016 21:18:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.016 ************************************ 00:05:02.016 END TEST even_2G_alloc 00:05:02.016 ************************************ 00:05:02.016 21:18:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:02.016 21:18:35 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:02.016 21:18:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.016 21:18:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.016 21:18:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.016 ************************************ 00:05:02.016 START TEST odd_alloc 00:05:02.016 ************************************ 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 
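[editor's note] The xtrace above is setup/common.sh stepping a per-node meminfo lookup field by field for node0: it mapfiles /sys/devices/system/node/node0/meminfo, strips the "Node 0" prefix, then reads each "key: value" pair and continues past every field until it reaches the requested one (HugePages_Surp here), finally echoing its value, which feeds the "node0=1024 expecting 1024" check. A minimal, self-contained sketch of that pattern follows; the function name and structure are reconstructed from the logged commands only and are not the exact SPDK setup/common.sh source.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip the node prefix

# Sketch of the meminfo lookup pattern visible in the trace (hypothetical name).
get_meminfo_sketch() {
    local get=$1 node=${2:-}      # field to return, optional NUMA node number
    local mem_f=/proc/meminfo
    # When a node is given and its per-node file exists, read that instead,
    # as the trace does for /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        # Skip every field until the requested one, then print its value
        # (e.g. HugePages_Free -> 1024 for node0 in the trace above).
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
}

# Example usage (values depend on the host):
#   get_meminfo_sketch HugePages_Free 0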
00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.016 21:18:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:02.583 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.157 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4883940 kB' 'MemAvailable: 9516604 kB' 'Buffers: 37816 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1230716 kB' 'Inactive: 3634776 kB' 'Active(anon): 126184 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104532 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 796 kB' 'Writeback: 0 kB' 'AnonPages: 135112 kB' 'Mapped: 72612 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 309180 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91952 kB' 'KernelStack: 4364 kB' 'PageTables: 2992 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 602912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14096 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 
21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.158 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.159 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884420 kB' 'MemAvailable: 9517084 kB' 'Buffers: 37816 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1230580 kB' 'Inactive: 3634776 kB' 'Active(anon): 126048 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104532 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 135456 kB' 'Mapped: 72612 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 309056 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91828 kB' 'KernelStack: 4400 kB' 'PageTables: 2912 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 608864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14096 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.159 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 
21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884680 kB' 'MemAvailable: 9517344 kB' 'Buffers: 37816 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1230580 kB' 'Inactive: 3634776 kB' 'Active(anon): 126048 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104532 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 135328 kB' 'Mapped: 72612 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 309056 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91828 kB' 'KernelStack: 4400 kB' 'PageTables: 2912 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 608864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14096 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.160 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.161 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:03.162 nr_hugepages=1025 00:05:03.162 resv_hugepages=0 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.162 surplus_hugepages=0 00:05:03.162 anon_hugepages=0 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- 
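The long run of '[[ Key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue' records above is setup/common.sh's get_meminfo helper walking /proc/meminfo one 'Key: value' pair at a time until it reaches the key it was asked for (HugePages_Rsvd here), printing the value (0) and returning; that per-key skip is why every meminfo field shows up once in the xtrace. A minimal sketch of that scan, reconstructed from the trace rather than copied from the SPDK source (names such as var, val and get follow the xtrace; treat the exact control flow as an approximation):

    #!/usr/bin/env bash
    # Sketch of the global (no node argument) get_meminfo scan traced above.
    get_meminfo() {
      local get=$1 var val _ line
      local -a mem
      mapfile -t mem < /proc/meminfo              # one "Key: value kB" entry per element
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"    # split "HugePages_Rsvd: 0" into key/value
        [[ $var == "$get" ]] || continue          # skip every key that is not the requested one
        echo "$val"                               # 0 for HugePages_Rsvd in this run
        return 0
      done
      return 1
    }
    resv=$(get_meminfo HugePages_Rsvd)            # -> 0, matching the "resv=0" line in the log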
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884592 kB' 'MemAvailable: 9517256 kB' 'Buffers: 37816 kB' 'Cached: 4710508 kB' 'SwapCached: 0 kB' 'Active: 1230840 kB' 'Inactive: 3634776 kB' 'Active(anon): 126308 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104532 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 135588 kB' 'Mapped: 72612 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 309056 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91828 kB' 'KernelStack: 4468 kB' 'PageTables: 3300 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 600636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14096 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.162 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 
21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.163 21:18:36 
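With HugePages_Rsvd and HugePages_Surp both 0 and HugePages_Total read back as 1025, hugepages.sh confirms that the kernel's global pool matches what the odd_alloc test configured, then enumerates the NUMA nodes under /sys/devices/system/node (a single node on this VM). The two steps in outline with this run's numbers; the variable names mirror the trace and are illustrative only:

    # Pool-level consistency check (hugepages.sh@107/@110 above).
    nr_hugepages=1025            # odd_alloc deliberately requests an odd page count
    surp=0 resv=0                # HugePages_Surp / HugePages_Rsvd read back via get_meminfo
    total=1025                   # HugePages_Total read back via get_meminfo
    (( total == nr_hugepages + surp + resv )) && echo 'global pool matches the request'

    # Node discovery (hugepages.sh@27-@33); extglob is enabled in the SPDK scripts.
    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=1025             # per-node HugePages_Total read back
    done
    echo "no_nodes=${#nodes_sys[@]}"             # 1 on this single-node VM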
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884592 kB' 'MemUsed: 7366504 kB' 'Active: 1231100 kB' 'Inactive: 3634776 kB' 'Active(anon): 126568 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104532 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'FilePages: 4748324 kB' 'Mapped: 72612 kB' 'AnonPages: 136368 kB' 'Shmem: 2616 kB' 'KernelStack: 4400 kB' 'PageTables: 3300 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217228 kB' 'Slab: 309056 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 91828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.163 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
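The scan in progress here is the same get_meminfo helper, invoked as get_meminfo HugePages_Surp 0: given a node number it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo and strips the leading 'Node 0' column so the keys line up with the global file (both the path switch and the extglob substitution appear verbatim in the trace above). A short sketch of that per-node branch, assuming the sysfs layout shown in the log:

    # Per-node variant of the meminfo scan.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo   # lines look like "Node 0 HugePages_Surp: 0"
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                     # drop the "Node 0 " prefix
    printf '%s\n' "${mem[@]}" | grep -m1 '^HugePages_Surp:'   # -> HugePages_Surp: 0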
00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:03.164 node0=1025 expecting 1025 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:03.164 00:05:03.164 real 0m1.007s 00:05:03.164 user 0m0.263s 00:05:03.164 sys 0m0.782s 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.164 21:18:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.164 ************************************ 00:05:03.164 END TEST odd_alloc 00:05:03.164 ************************************ 00:05:03.164 21:18:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:03.164 21:18:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:03.164 21:18:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.164 21:18:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.164 21:18:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.164 ************************************ 00:05:03.164 START TEST custom_alloc 00:05:03.164 ************************************ 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc 
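That closes out odd_alloc: the per-node expectation (the 1025 pages the test spread onto node 0, plus zero reserved and zero surplus pages) matches the 1025 pages the kernel reports, so the 'node0=1025 expecting 1025' comparison at hugepages.sh@130 passes and the test finishes in about a second. The bookkeeping in outline (illustrative variable names; both sides are 1025 in this run):

    # Per-node tally behind the "node0=1025 expecting 1025" line above.
    declare -A nodes_test nodes_sys
    nodes_test[0]=1025                 # pages odd_alloc assigned to node 0
    nodes_sys[0]=1025                  # pages the kernel reports for node 0
    resv=0 surp=0
    (( nodes_test[0] += resv + surp )) # reserved/surplus pages would shift the expectation
    (( nodes_sys[0] == nodes_test[0] )) && echo "node0=${nodes_sys[0]} expecting ${nodes_test[0]}"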
-- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.164 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:03.165 21:18:36 
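custom_alloc exercises the HUGENODE path instead: get_test_nr_hugepages converts the 1048576 kB (1 GiB) request into a count of default-sized 2048 kB hugepages, and that count is pinned to the single node via the HUGENODE string assembled a few lines below. The arithmetic behind the nr_hugepages=512 seen above (variable names here are illustrative; the kB units are consistent with the 'Hugetlb: 1048576 kB' readback later in the log):

    # 1 GiB request -> 512 default-sized hugepages, pinned to node 0.
    size_kb=1048576                                  # argument passed to get_test_nr_hugepages
    hugepagesize_kb=2048                             # "Hugepagesize: 2048 kB" from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 512
    nodes_hp[0]=$nr_hugepages
    HUGENODE="nodes_hp[0]=${nodes_hp[0]}"            # handed to scripts/setup.sh below
    echo "$HUGENODE"                                 # nodes_hp[0]=512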
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.165 21:18:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:03.683 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5935808 kB' 'MemAvailable: 10568476 kB' 'Buffers: 37816 kB' 'Cached: 4710512 kB' 'SwapCached: 0 kB' 'Active: 1230724 kB' 'Inactive: 3634776 kB' 'Active(anon): 126188 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104536 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 804 kB' 
'Writeback: 0 kB' 'AnonPages: 136144 kB' 'Mapped: 72768 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 309232 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 92004 kB' 'KernelStack: 4388 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 594588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14048 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 
21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.959 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5936116 kB' 'MemAvailable: 10568784 kB' 'Buffers: 37816 kB' 'Cached: 4710512 kB' 'SwapCached: 0 kB' 'Active: 1230880 kB' 'Inactive: 3634776 kB' 
'Active(anon): 126344 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104536 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 135648 kB' 'Mapped: 72608 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 309240 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 92012 kB' 'KernelStack: 4340 kB' 'PageTables: 3316 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 592684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14048 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.960 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
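The long runs of `[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` entries above are get_meminfo scanning /proc/meminfo one "Key: value kB" pair at a time until it reaches the field it was asked for, then echoing that value (0 for HugePages_Surp here). A minimal sketch of that lookup, assuming the system-wide case (empty `node=`) and leaving out the per-node /sys/devices/system/node path handling and mapfile plumbing seen in the trace — not the verbatim setup/common.sh helper:

```bash
# Simplified sketch of the lookup the xtrace above keeps repeating: walk
# /proc/meminfo, skip ("continue") every key that is not the requested one,
# then echo its value.
get_meminfo_sketch() {
    local get=$1                 # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
    local mem_f=/proc/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested key, keep scanning
        echo "$val"                        # the "kB" unit, if any, is consumed by "_"
        return 0
    done < "$mem_f"
    return 1                               # key not present
}

# Example: on this node it prints 0, matching "hugepages.sh@99 -- # surp=0" above.
# get_meminfo_sketch HugePages_Surp
```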
00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5936116 kB' 'MemAvailable: 10568784 kB' 'Buffers: 37816 kB' 'Cached: 4710512 kB' 'SwapCached: 0 kB' 'Active: 1231140 kB' 'Inactive: 3634776 kB' 'Active(anon): 126604 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104536 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 136168 kB' 'Mapped: 72608 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 309240 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 92012 kB' 'KernelStack: 4340 kB' 'PageTables: 3316 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 592684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14048 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.961 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
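This scan is the last of the three lookups (anon, surp, resv); once it returns 0, verify_nr_hugepages compares the totals against the 512 pages requested through HUGENODE='nodes_hp[0]=512', as the `(( 512 == nr_hugepages + surp + resv ))` and `(( 512 == nr_hugepages ))` entries a few lines below show. A small worked sketch of that accounting, using the values from the meminfo dumps above (2048 kB pages, HugePages_Total 512, Hugetlb 1048576 kB):

```bash
# Worked sketch of the accounting check traced just below: 512 pages of 2048 kB
# were requested, so Hugetlb should report 512 * 2048 kB = 1048576 kB and no
# surplus or reserved pages should remain.
nr_hugepages=512   # HugePages_Total
surp=0             # HugePages_Surp
resv=0             # HugePages_Rsvd
anon=0             # AnonHugePages (kB)

(( 512 == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
(( 512 * 2048 == 1048576 )) && echo "Hugetlb size matches"   # kB
```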
00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:03.962 nr_hugepages=512 00:05:03.962 21:18:37 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.962 resv_hugepages=0 00:05:03.962 surplus_hugepages=0 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.962 anon_hugepages=0 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5936304 kB' 'MemAvailable: 10568972 kB' 'Buffers: 37816 kB' 'Cached: 4710512 kB' 'SwapCached: 0 kB' 'Active: 1230812 kB' 'Inactive: 3634776 kB' 'Active(anon): 126276 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104536 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'AnonPages: 135748 kB' 'Mapped: 72376 kB' 'Shmem: 2616 kB' 'KReclaimable: 217228 kB' 'Slab: 309336 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 92108 kB' 'KernelStack: 4340 kB' 'PageTables: 3304 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 598636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14048 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 
21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
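The full /proc/meminfo snapshot printed a few entries back is the data behind the accounting assertions in setup/hugepages.sh. As a hand check that is not part of the test script itself: with HugePages_Total at 512 and Hugepagesize at 2048 kB, the pool is 512 * 2048 kB = 1048576 kB, matching the 'Hugetlb: 1048576 kB' field, and with zero surplus and zero reserved pages the (( 512 == nr_hugepages + surp + resv )) check trivially holds:

    #!/usr/bin/env bash
    # Hand check of the snapshot above; variable names mirror the trace, values are copied from it.
    hugepages_total=512
    hugepages_free=512
    surp=0
    resv=0
    hugepagesize_kb=2048
    nr_hugepages=512   # what the custom_alloc test requested
    echo "$(( hugepages_total * hugepagesize_kb )) kB"   # 1048576 kB, i.e. the Hugetlb figure
    (( hugepages_total == nr_hugepages + surp + resv )) && echo "custom_alloc accounting holds"

The remaining trace lines in this block are just the scanner walking the rest of the snapshot until it reaches the HugePages_Total field and returns 512.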
00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.962 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.962 21:18:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5936564 kB' 'MemUsed: 6314532 kB' 'Active: 1230812 kB' 'Inactive: 3634776 kB' 'Active(anon): 126276 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104536 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 804 kB' 'Writeback: 0 kB' 'FilePages: 4748328 kB' 'Mapped: 72376 kB' 'AnonPages: 135748 kB' 'Shmem: 2616 kB' 'KernelStack: 4408 kB' 'PageTables: 3304 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217228 kB' 'Slab: 309336 kB' 'SReclaimable: 217228 kB' 'SUnreclaim: 92108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 
21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.963 
node0=512 expecting 512 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:03.963 00:05:03.963 real 0m0.783s 00:05:03.963 user 0m0.293s 00:05:03.963 sys 0m0.530s 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.963 21:18:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.963 ************************************ 00:05:03.963 END TEST custom_alloc 00:05:03.963 ************************************ 00:05:03.963 21:18:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:03.963 21:18:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:03.963 21:18:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.963 21:18:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.963 21:18:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.963 ************************************ 00:05:03.963 START TEST no_shrink_alloc 00:05:03.963 ************************************ 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:03.963 21:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.963 21:18:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:04.529 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.789 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:04.789 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.789 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.789 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.789 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.789 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4877736 kB' 'MemAvailable: 9510420 kB' 'Buffers: 37816 kB' 'Cached: 4710520 kB' 'SwapCached: 0 kB' 'Active: 1243412 kB' 'Inactive: 3634768 kB' 'Active(anon): 138872 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1104540 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 824 kB' 'Writeback: 0 kB' 'AnonPages: 148500 kB' 'Mapped: 72448 kB' 'Shmem: 2608 kB' 'KReclaimable: 217240 kB' 'Slab: 308804 kB' 'SReclaimable: 217240 kB' 'SUnreclaim: 91564 kB' 'KernelStack: 4580 kB' 'PageTables: 3516 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 630772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 
'DirectMap1G: 11534336 kB' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.790 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.791 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.054 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4877736 kB' 'MemAvailable: 9510420 kB' 'Buffers: 37816 kB' 'Cached: 4710520 kB' 'SwapCached: 0 kB' 'Active: 1243672 kB' 'Inactive: 3634768 kB' 'Active(anon): 139132 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1104540 kB' 'Inactive(file): 3632976 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 
'SwapFree: 0 kB' 'Dirty: 824 kB' 'Writeback: 0 kB' 'AnonPages: 148500 kB' 'Mapped: 72448 kB' 'Shmem: 2608 kB' 'KReclaimable: 217240 kB' 'Slab: 308804 kB' 'SReclaimable: 217240 kB' 'SUnreclaim: 91564 kB' 'KernelStack: 4580 kB' 'PageTables: 3516 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 630772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:05.054 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.054 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.054 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.054 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.054 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.055 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Rsvd 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4877936 kB' 'MemAvailable: 9510644 kB' 'Buffers: 37816 kB' 'Cached: 4710540 kB' 'SwapCached: 0 kB' 'Active: 1243672 kB' 'Inactive: 3634800 kB' 'Active(anon): 139132 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104540 kB' 'Inactive(file): 3633000 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 924 kB' 'Writeback: 0 kB' 'AnonPages: 148328 kB' 'Mapped: 72392 kB' 'Shmem: 2616 kB' 'KReclaimable: 217240 kB' 'Slab: 308756 kB' 'SReclaimable: 217240 kB' 'SUnreclaim: 91516 kB' 'KernelStack: 4576 kB' 'PageTables: 3684 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 636492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.056 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 
21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.057 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.058 nr_hugepages=1024 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.058 resv_hugepages=0 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.058 surplus_hugepages=0 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.058 anon_hugepages=0 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4877732 kB' 'MemAvailable: 9510440 kB' 'Buffers: 37816 kB' 'Cached: 4710540 kB' 'SwapCached: 0 kB' 'Active: 1243196 kB' 'Inactive: 3634800 kB' 'Active(anon): 138656 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104540 kB' 'Inactive(file): 3633000 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 924 kB' 'Writeback: 0 kB' 'AnonPages: 148228 kB' 'Mapped: 72292 kB' 'Shmem: 2616 kB' 'KReclaimable: 217240 kB' 'Slab: 308840 kB' 'SReclaimable: 217240 kB' 
'SUnreclaim: 91600 kB' 'KernelStack: 4628 kB' 'PageTables: 3652 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 641320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _
00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.058 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
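The xtrace above is setup/common.sh's get_meminfo walking every key of /proc/meminfo (or a per-node meminfo file), skipping each non-matching key with continue, and echoing the value of the requested key once it is reached. A minimal bash sketch of that helper, reconstructed only from the traced lines (the real SPDK source may differ in details such as argument checking and error handling):

# Reconstruction of get_meminfo as implied by the trace (setup/common.sh@16-33).
# Illustrative sketch, not copied from the repository.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f mem
	mem_f=/proc/meminfo
	# Per-node queries read the node's own meminfo and drop the "Node N " prefix.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

With the values traced above, get_meminfo HugePages_Total would print 1024 and get_meminfo HugePages_Surp 0 would print 0, matching the echo lines in the log.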
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4877384 kB' 'MemUsed: 7373712 kB' 'Active: 1243172 kB' 'Inactive: 3634800 kB' 'Active(anon): 138632 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104540 kB' 'Inactive(file): 3633000 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 924 kB' 'Writeback: 0 kB' 'FilePages: 4748356 kB' 'Mapped: 72292 kB' 'AnonPages: 148136 kB' 'Shmem: 2616 kB' 'KernelStack: 4564 kB' 'PageTables: 3512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217292 kB' 'Slab: 309000 kB' 'SReclaimable: 217292 kB' 'SUnreclaim: 91708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.060 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:05.061 node0=1024 expecting 1024
21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.061 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:05.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:05.321 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.321 INFO: Requested 512 hugepages but 1024 already allocated on node0
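At this point the traced per-node loop has compared the allocated page count against the expected one, printed 'node0=1024 expecting 1024', and hugepages.sh has re-invoked scripts/setup.sh with the hugepage knobs exported, which is why setup.sh only reports the 1024 pages already present on node0. A rough bash reduction of that step, with the array contents hard-coded for illustration (the real hugepages.sh fills nodes_test/nodes_sys via get_nodes and get_meminfo, and its exact comparison may differ from this sketch):

# Illustrative reduction of the check traced at setup/hugepages.sh@126-130 and the
# follow-up call traced at @202 / common.sh@10. Array values are hard-coded here.
nodes_test=([0]=1024) nodes_sys=([0]=1024)   # indexed by NUMA node id

for node in "${!nodes_test[@]}"; do
	echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"  # node0=1024 expecting 1024
	[[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || exit 1
done

# Ask setup.sh for 512 pages without clearing what is already allocated; with 1024
# pages already reserved it only prints the INFO message seen in the log.
CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh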
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4874756 kB' 'MemAvailable: 9507552 kB' 'Buffers: 37816 kB' 'Cached: 4710540 kB' 'SwapCached: 0 kB' 'Active: 1243708 kB' 'Inactive: 3634784 kB' 'Active(anon): 139152 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104556 kB' 'Inactive(file): 3632984 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 928 kB' 'Writeback: 0 kB' 'AnonPages: 149216 kB' 'Mapped: 72600 kB' 'Shmem: 2616 kB' 'KReclaimable: 217328 kB' 'Slab: 309280 kB' 'SReclaimable: 217328 kB' 'SUnreclaim: 91952 kB' 'KernelStack: 4672 kB' 'PageTables: 3748 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 628624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.321 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.322 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.586 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.586 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.586 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4875016 kB' 'MemAvailable: 9507812 kB' 'Buffers: 37816 kB' 'Cached: 4710540 kB' 'SwapCached: 0 kB' 'Active: 1243708 kB' 'Inactive: 3634784 kB' 'Active(anon): 139152 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104556 kB' 'Inactive(file): 3632984 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 928 kB' 'Writeback: 0 kB' 'AnonPages: 148960 kB' 'Mapped: 72600 kB' 'Shmem: 2616 kB' 'KReclaimable: 217328 kB' 'Slab: 309280 kB' 'SReclaimable: 217328 kB' 'SUnreclaim: 91952 kB' 'KernelStack: 4672 kB' 'PageTables: 3748 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 628624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14288 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB'
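The meminfo snapshots above (and the node0 snapshot earlier) are internally consistent, which is what the surrounding assertions rely on; a quick arithmetic spot-check using only values printed in the trace:

# Spot-check of the figures printed in the snapshots above (values copied from the log).
echo $((1024 * 2048))          # 2097152 -> Hugetlb: 2097152 kB (HugePages_Total x Hugepagesize)
echo $((12251096 - 4877384))   # 7373712 -> node0 MemUsed (MemTotal - MemFree in the node0 snapshot)
echo $((138632 + 1104540))     # 1243172 -> node0 Active (Active(anon) + Active(file))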
00:05:05.586 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.586 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:05.586 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.586 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.587 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4875144 kB' 'MemAvailable: 9507940 kB' 'Buffers: 37816 kB' 'Cached: 4710540 kB' 'SwapCached: 0 kB' 'Active: 1243536 kB' 'Inactive: 3634784 kB' 'Active(anon): 138980 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104556 kB' 'Inactive(file): 3632984 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 928 kB' 'Writeback: 0 kB' 'AnonPages: 148108 kB' 'Mapped: 72600 kB' 'Shmem: 2616 kB' 'KReclaimable: 217328 kB' 'Slab: 309000 kB' 'SReclaimable: 217328 kB' 'SUnreclaim: 91672 kB' 'KernelStack: 4568 kB' 'PageTables: 3416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 634344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.588 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
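The entries above and below this point all belong to the same field-by-field scan of /proc/meminfo: setup/common.sh splits each line on ': ', skips every key that is not the one requested (here HugePages_Rsvd), and echoes the matching value. As a reading aid, here is a minimal standalone sketch of that lookup pattern under the same assumptions the trace shows; the name get_meminfo_sketch and its arguments are illustrative, not the actual setup/common.sh source.

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo lookup pattern exercised by the trace.
    # get_meminfo_sketch is an illustrative name, not the real helper.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookup when a node id is given and the sysfs file exists,
        # mirroring the /sys/devices/system/node/nodeN/meminfo check in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node [0-9]* }          # drop the "Node N " prefix of per-node files
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                    # bare number, e.g. 0 or 1024, as echoed in the log
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    # Example calls matching the values seen in this run:
    #   get_meminfo_sketch HugePages_Surp   -> 0
    #   get_meminfo_sketch HugePages_Rsvd   -> 0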
00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.589 nr_hugepages=1024 00:05:05.589 resv_hugepages=0 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.589 surplus_hugepages=0 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.589 anon_hugepages=0 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4875660 kB' 'MemAvailable: 9508456 kB' 'Buffers: 37816 kB' 'Cached: 4710540 kB' 'SwapCached: 0 kB' 'Active: 1243112 kB' 'Inactive: 3634784 kB' 'Active(anon): 138556 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104556 kB' 'Inactive(file): 3632984 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 928 kB' 'Writeback: 0 kB' 'AnonPages: 148460 kB' 'Mapped: 72544 kB' 'Shmem: 2616 kB' 'KReclaimable: 217328 kB' 'Slab: 308628 kB' 'SReclaimable: 217328 kB' 'SUnreclaim: 91300 kB' 'KernelStack: 4532 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 633272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 2977792 kB' 'DirectMap1G: 11534336 kB' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.589 21:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.589 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 
21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.590 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
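The scan in progress here is the hugepages.sh@110 lookup of HugePages_Total, which this run reports as 1024 just below; combined with the HugePages_Rsvd and HugePages_Surp values of 0 read earlier, the script then re-checks that the pool size equals nr_hugepages plus surplus and reserved pages, and repeats the lookup per NUMA node against /sys/devices/system/node/node0/meminfo. A compact sketch of that accounting, reusing the illustrative helper from the sketch above:

    # Sketch of the consistency check driven at hugepages.sh@107-117 in the trace;
    # the helper name is the illustrative one defined in the earlier sketch.
    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run

    if (( total != nr_hugepages + surp + resv )); then
        echo "unexpected hugepage pool size: $total" >&2
    fi

    # Per-node breakdown, as the trace does for node0 right after this check.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        echo "node$node HugePages_Surp: $(get_meminfo_sketch HugePages_Surp "$node")"
    done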
00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4875692 kB' 'MemUsed: 7375404 kB' 'Active: 1243372 kB' 'Inactive: 
3634784 kB' 'Active(anon): 138816 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1104556 kB' 'Inactive(file): 3632984 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 928 kB' 'Writeback: 0 kB' 'FilePages: 4748356 kB' 'Mapped: 72544 kB' 'AnonPages: 148720 kB' 'Shmem: 2616 kB' 'KernelStack: 4532 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 217328 kB' 'Slab: 308628 kB' 'SReclaimable: 217328 kB' 'SUnreclaim: 91300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.591 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 
21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:05.592 node0=1024 expecting 1024 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:05.592 00:05:05.592 real 0m1.511s 00:05:05.592 user 0m0.575s 00:05:05.592 sys 0m1.016s 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.592 21:18:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:05.592 ************************************ 00:05:05.592 END TEST no_shrink_alloc 00:05:05.592 ************************************ 00:05:05.592 21:18:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:05.592 21:18:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:05.592 00:05:05.592 real 0m6.735s 00:05:05.592 user 0m2.249s 00:05:05.592 sys 0m4.736s 00:05:05.592 21:18:38 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.592 21:18:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.592 ************************************ 00:05:05.592 END TEST hugepages 00:05:05.592 ************************************ 00:05:05.592 21:18:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:05.592 21:18:38 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:05.592 21:18:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.592 21:18:38 setup.sh -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.592 21:18:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:05.592 ************************************ 00:05:05.592 START TEST driver 00:05:05.592 ************************************ 00:05:05.592 21:18:38 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:05.851 * Looking for test storage... 00:05:05.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.851 21:18:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:05.851 21:18:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.851 21:18:38 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.419 21:18:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:06.419 21:18:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.419 21:18:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.419 21:18:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:06.419 ************************************ 00:05:06.419 START TEST guess_driver 00:05:06.419 ************************************ 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:05:06.419 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:06.419 Looking for driver=uio_pci_generic 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=uio_pci_generic 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.419 21:18:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.986 21:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:06.986 21:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:06.986 21:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.986 21:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.986 21:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:06.986 21:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.362 21:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:08.362 21:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:08.362 21:18:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.362 21:18:41 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.927 ************************************ 00:05:08.927 END TEST guess_driver 00:05:08.927 ************************************ 00:05:08.927 00:05:08.927 real 0m2.455s 00:05:08.927 user 0m0.484s 00:05:08.927 sys 0m1.968s 00:05:08.927 21:18:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.927 21:18:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:08.927 21:18:42 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:08.927 00:05:08.927 real 0m3.188s 00:05:08.927 user 0m0.802s 00:05:08.927 sys 0m2.411s 00:05:08.927 21:18:42 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.927 21:18:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:08.927 ************************************ 00:05:08.927 END TEST driver 00:05:08.927 ************************************ 00:05:08.927 21:18:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:08.927 21:18:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:08.927 21:18:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.927 21:18:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.927 21:18:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.927 ************************************ 00:05:08.927 START TEST devices 00:05:08.927 ************************************ 00:05:08.927 21:18:42 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:08.927 * Looking for test storage... 
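The long run of "continue" entries in the hugepages trace above all come from one helper, get_meminfo in setup/common.sh: it reads either /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo file, strips the leading "Node N " prefix, and echoes the value of the first field whose name matches the requested key. A minimal sketch of that loop, reconstructed from the trace (structure simplified, error handling omitted):

    shopt -s extglob                        # needed for the "Node +([0-9]) " strip below
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Prefer the per-node meminfo file when a node index is given and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node 0 " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # source of the repeated "continue" lines above
            echo "$val"                         # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
            return 0
        done
        return 1
    }

The guess_driver test that just finished exercises pick_driver from setup/driver.sh. As the trace shows, vfio is only chosen when IOMMU groups exist (or unsafe no-IOMMU mode is enabled); otherwise the script falls back to uio_pci_generic once modprobe --show-depends confirms the module chain resolves to real .ko files. A condensed, single-function sketch of that decision (the real script spreads it across several helpers, so take this as illustrative rather than a drop-in copy):

    pick_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        # vfio needs IOMMU groups, or unsafe no-IOMMU mode explicitly enabled.
        if compgen -G '/sys/kernel/iommu_groups/*' >/dev/null || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic                # the branch taken in this run
        else
            echo 'No valid driver found'
            return 1
        fi
    }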
00:05:08.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:08.927 21:18:42 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:08.928 21:18:42 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:08.928 21:18:42 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.928 21:18:42 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.492 21:18:42 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:09.492 21:18:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:09.492 21:18:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:09.492 21:18:42 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:09.492 21:18:42 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:09.492 21:18:42 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:09.492 21:18:42 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:09.492 21:18:42 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.492 21:18:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:09.492 21:18:42 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:09.492 21:18:42 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:09.492 21:18:42 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:09.493 21:18:42 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:09.493 21:18:42 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:09.493 21:18:42 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:09.493 21:18:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:09.493 21:18:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:09.493 21:18:42 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:09.493 21:18:42 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:09.493 21:18:42 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:09.493 21:18:42 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:09.493 21:18:42 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:09.493 No valid GPT data, bailing 00:05:09.750 21:18:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.750 21:18:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:09.750 21:18:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:09.750 21:18:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:09.750 21:18:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:09.750 21:18:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:09.750 21:18:42 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:09.750 21:18:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:09.750 21:18:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:09.750 21:18:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:09.750 21:18:42 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:09.750 21:18:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:09.750 21:18:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:09.750 21:18:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.750 21:18:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.750 21:18:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:09.750 ************************************ 00:05:09.750 START TEST nvme_mount 00:05:09.750 ************************************ 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:09.750 21:18:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:10.683 Creating new GPT entries in memory. 00:05:10.683 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:10.683 other utilities. 00:05:10.683 21:18:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:10.683 21:18:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.684 21:18:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.684 21:18:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.684 21:18:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:12.062 Creating new GPT entries in memory. 
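The partition_drive step traced just above (setup/common.sh) first wipes any existing GPT with sgdisk --zap-all and then creates the requested number of partitions, each 1073741824 / 4096 = 262144 sectors long starting at sector 2048, which is why the first partition spans sectors 2048..264191. A simplified sketch of that helper; the real script serializes the sgdisk calls with flock and waits for the matching udev events through scripts/sync_dev_uevents.sh, for which the plain partprobe below is only a stand-in:

    partition_drive() {
        local disk=$1 part_no=${2:-1}
        local size=$(( 1073741824 / 4096 ))     # partition length in sectors, as in the trace
        local part part_start=0 part_end=0
        sgdisk "/dev/$disk" --zap-all           # destroy any existing GPT/MBR structures
        for (( part = 1; part <= part_no; part++ )); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
        partprobe "/dev/$disk"                  # stand-in for sync_dev_uevents.sh
    }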
00:05:12.062 The operation has completed successfully. 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 103933 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.062 21:18:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.321 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.321 21:18:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:13.256 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.256 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:13.256 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:13.256 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:13.256 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.256 21:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.824 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:13.824 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:13.824 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:13.824 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.824 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:13.824 21:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.824 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:13.824 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.761 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.761 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:14.761 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 
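The nvme_mount test being traced here reduces to: format the freshly created partition (and, after the first cleanup, the whole disk with an explicit 1024M size) as ext4, mount it under test/setup/nvme_mount, drop a dummy test file, and verify through scripts/setup.sh that the active mount keeps the PCI device from being rebound. The mkfs/mount and cleanup halves look roughly like the sketch below; mkfs_and_mount is an illustrative name, the real code splits this between the mkfs helper in setup/common.sh and cleanup_nvme in devices.sh:

    nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    test_file=$nvme_mount/test_nvme

    mkfs_and_mount() {
        local dev=$1 size=$2                    # size (e.g. 1024M) is optional
        mkdir -p "$nvme_mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev" $size              # quiet, force, optionally size-limited
        mount "$dev" "$nvme_mount"
        : > "$test_file"                        # dummy file the verify step looks for
    }

    cleanup_nvme() {
        mountpoint -q "$nvme_mount" && umount "$nvme_mount"
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
    }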
00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.762 21:18:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.020 21:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:15.020 21:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:15.020 21:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:15.020 21:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.020 21:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:15.021 21:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.279 21:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:15.279 21:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.217 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.217 ************************************ 00:05:16.217 END TEST nvme_mount 00:05:16.217 ************************************ 00:05:16.217 00:05:16.217 real 0m6.544s 00:05:16.217 user 0m0.794s 00:05:16.217 sys 0m3.674s 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.217 21:18:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:16.217 21:18:49 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:16.217 21:18:49 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:16.217 21:18:49 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.217 21:18:49 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.217 21:18:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:16.217 ************************************ 00:05:16.217 START TEST dm_mount 00:05:16.217 
************************************ 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:16.217 21:18:49 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:17.596 Creating new GPT entries in memory. 00:05:17.596 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:17.596 other utilities. 00:05:17.596 21:18:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:17.596 21:18:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.596 21:18:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:17.596 21:18:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.596 21:18:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:18.534 Creating new GPT entries in memory. 00:05:18.534 The operation has completed successfully. 00:05:18.534 21:18:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:18.534 21:18:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.534 21:18:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:18.534 21:18:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.534 21:18:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:19.469 The operation has completed successfully. 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 104425 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:19.469 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.470 21:18:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.044 21:18:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.044 21:18:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:20.044 21:18:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:20.044 21:18:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.044 21:18:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.044 21:18:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.044 21:18:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.044 21:18:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:20.980 
21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.980 21:18:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.239 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.239 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:21.239 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:21.240 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.240 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.240 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.240 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.240 21:18:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.182 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.182 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.182 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:22.182 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:22.182 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.182 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.182 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:22.443 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.443 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:22.443 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.443 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.443 21:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:22.443 00:05:22.443 real 0m6.070s 00:05:22.443 user 0m0.506s 00:05:22.443 sys 0m2.379s 00:05:22.443 21:18:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.443 21:18:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:22.443 ************************************ 00:05:22.443 END TEST dm_mount 00:05:22.443 ************************************ 00:05:22.443 21:18:55 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.443 21:18:55 setup.sh.devices -- 
setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.443 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.443 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.443 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.443 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.443 21:18:55 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:22.443 00:05:22.443 real 0m13.618s 00:05:22.443 user 0m1.778s 00:05:22.443 sys 0m6.546s 00:05:22.443 21:18:55 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.443 21:18:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:22.443 ************************************ 00:05:22.443 END TEST devices 00:05:22.443 ************************************ 00:05:22.443 21:18:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:22.443 00:05:22.443 real 0m29.156s 00:05:22.443 user 0m6.793s 00:05:22.443 sys 0m17.462s 00:05:22.443 21:18:55 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.443 21:18:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:22.443 ************************************ 00:05:22.443 END TEST setup.sh 00:05:22.443 ************************************ 00:05:22.702 21:18:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.702 21:18:55 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:22.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:22.961 Hugepages 00:05:22.961 node hugesize free / total 00:05:22.961 node0 1048576kB 0 / 0 00:05:22.961 node0 2048kB 2048 / 2048 00:05:22.961 00:05:22.961 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:23.220 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:23.220 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:23.220 21:18:56 -- spdk/autotest.sh@130 -- # uname -s 00:05:23.220 21:18:56 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:23.220 21:18:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:23.220 21:18:56 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.789 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:23.789 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.726 21:18:57 -- common/autotest_common.sh@1532 -- # sleep 1 
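For the dm_mount test traced above, the interesting part is how the script discovers which dm node backs /dev/mapper/nvme_dm_test and confirms that both partitions are its holders before tearing everything down. Condensed from the trace, with device names as in this run:

    dm_path=$(readlink -f /dev/mapper/nvme_dm_test)    # /dev/dm-0 in this run
    dm=${dm_path##*/}                                  # -> dm-0

    # While the target exists, both partitions list the dm node as a holder.
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]

    cleanup_dm() {
        local dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
        mountpoint -q "$dm_mount" && umount "$dm_mount"
        [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
    }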
00:05:25.665 21:18:58 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:25.665 21:18:58 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:25.665 21:18:58 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.665 21:18:58 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:25.665 21:18:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:25.665 21:18:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:25.665 21:18:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.665 21:18:58 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.665 21:18:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:25.927 21:18:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:25.927 21:18:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:25.927 21:18:59 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:26.186 Waiting for block devices as requested 00:05:26.186 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.446 21:18:59 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:26.446 21:18:59 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:26.446 21:18:59 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:26.446 21:18:59 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:26.446 21:18:59 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:26.447 21:18:59 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:05:26.447 21:18:59 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:26.447 21:18:59 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:26.447 21:18:59 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:26.447 21:18:59 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:26.447 21:18:59 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:26.447 21:18:59 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:26.447 21:18:59 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:26.447 21:18:59 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:26.447 21:18:59 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:26.447 21:18:59 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:26.447 21:18:59 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:26.447 21:18:59 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:26.447 21:18:59 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:26.447 21:18:59 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:26.447 21:18:59 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:26.447 21:18:59 -- common/autotest_common.sh@1557 -- # continue 00:05:26.447 21:18:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:26.447 21:18:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.447 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:26.447 21:18:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:26.447 21:18:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.447 21:18:59 -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.447 21:18:59 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:27.023 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.965 21:19:01 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:27.965 21:19:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.965 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:05:27.965 21:19:01 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:27.965 21:19:01 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:27.965 21:19:01 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.965 21:19:01 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:27.965 21:19:01 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:27.965 21:19:01 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:27.965 21:19:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:27.965 21:19:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:27.965 21:19:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.965 21:19:01 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:27.965 21:19:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:28.227 21:19:01 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:28.227 21:19:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:28.227 21:19:01 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:28.227 21:19:01 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:28.227 21:19:01 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:28.227 21:19:01 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:28.227 21:19:01 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:28.227 21:19:01 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:28.227 21:19:01 -- common/autotest_common.sh@1593 -- # return 0 00:05:28.227 21:19:01 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:28.227 21:19:01 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:28.227 21:19:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.227 21:19:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.227 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:05:28.227 ************************************ 00:05:28.227 START TEST unittest 00:05:28.227 ************************************ 00:05:28.227 21:19:01 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:28.227 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:28.227 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:28.227 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:28.227 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:28.227 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
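The opal_revert_cleanup step above walks the NVMe controllers reported by scripts/gen_nvme.sh and keeps only those whose PCI device ID matches 0x0a54, by reading /sys/bus/pci/devices/<bdf>/device. A rough sysfs-only equivalent for PCIe-attached controllers is sketched below; the helper name nvme_bdfs_by_device_id is made up for the example, and the 0x0a54 value is taken from the trace.

#!/usr/bin/env bash
# Hypothetical helper: print the PCI BDFs of NVMe controllers whose PCI
# device ID matches the given value (0x0a54 in the trace above).
nvme_bdfs_by_device_id() {
    local want=$1 ctrl pci_dir dev_id
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # /sys/class/nvme/nvme0 resolves to .../pci0000:00/0000:00:10.0/nvme/nvme0;
        # two dirname steps up is the PCI device directory.
        pci_dir=$(dirname "$(dirname "$(readlink -f "$ctrl")")")
        dev_id=$(<"$pci_dir/device")
        if [[ $dev_id == "$want" ]]; then
            basename "$pci_dir"
        fi
    done
}

nvme_bdfs_by_device_id 0x0a54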
00:05:28.227 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:28.227 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:28.227 ++ rpc_py=rpc_cmd 00:05:28.227 ++ set -e 00:05:28.227 ++ shopt -s nullglob 00:05:28.227 ++ shopt -s extglob 00:05:28.227 ++ shopt -s inherit_errexit 00:05:28.227 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:28.227 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:28.227 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:28.227 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:28.227 +++ CONFIG_FIO_PLUGIN=y 00:05:28.227 +++ CONFIG_NVME_CUSE=y 00:05:28.227 +++ CONFIG_RAID5F=y 00:05:28.227 +++ CONFIG_LTO=n 00:05:28.227 +++ CONFIG_SMA=n 00:05:28.227 +++ CONFIG_ISAL=y 00:05:28.227 +++ CONFIG_OPENSSL_PATH= 00:05:28.227 +++ CONFIG_IDXD_KERNEL=n 00:05:28.227 +++ CONFIG_URING_PATH= 00:05:28.227 +++ CONFIG_DAOS=n 00:05:28.227 +++ CONFIG_DPDK_LIB_DIR= 00:05:28.227 +++ CONFIG_OCF=n 00:05:28.227 +++ CONFIG_EXAMPLES=y 00:05:28.227 +++ CONFIG_RDMA_PROV=verbs 00:05:28.227 +++ CONFIG_ISCSI_INITIATOR=y 00:05:28.227 +++ CONFIG_VTUNE=n 00:05:28.227 +++ CONFIG_DPDK_INC_DIR= 00:05:28.227 +++ CONFIG_CET=n 00:05:28.227 +++ CONFIG_TESTS=y 00:05:28.227 +++ CONFIG_APPS=y 00:05:28.227 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:28.227 +++ CONFIG_DAOS_DIR= 00:05:28.227 +++ CONFIG_CRYPTO_MLX5=n 00:05:28.227 +++ CONFIG_XNVME=n 00:05:28.227 +++ CONFIG_UNIT_TESTS=y 00:05:28.227 +++ CONFIG_FUSE=n 00:05:28.227 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:28.227 +++ CONFIG_OCF_PATH= 00:05:28.227 +++ CONFIG_WPDK_DIR= 00:05:28.227 +++ CONFIG_VFIO_USER=n 00:05:28.227 +++ CONFIG_MAX_LCORES=128 00:05:28.227 +++ CONFIG_ARCH=native 00:05:28.227 +++ CONFIG_TSAN=n 00:05:28.227 +++ CONFIG_VIRTIO=y 00:05:28.227 +++ CONFIG_HAVE_EVP_MAC=n 00:05:28.227 +++ CONFIG_IPSEC_MB=n 00:05:28.227 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:28.227 +++ CONFIG_DPDK_UADK=n 00:05:28.227 +++ CONFIG_ASAN=y 00:05:28.227 +++ CONFIG_SHARED=n 00:05:28.227 +++ CONFIG_VTUNE_DIR= 00:05:28.227 +++ CONFIG_RDMA_SET_TOS=y 00:05:28.227 +++ CONFIG_VBDEV_COMPRESS=n 00:05:28.227 +++ CONFIG_VFIO_USER_DIR= 00:05:28.227 +++ CONFIG_PGO_DIR= 00:05:28.227 +++ CONFIG_FUZZER_LIB= 00:05:28.227 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:28.227 +++ CONFIG_USDT=n 00:05:28.227 +++ CONFIG_HAVE_KEYUTILS=y 00:05:28.227 +++ CONFIG_URING_ZNS=n 00:05:28.227 +++ CONFIG_FC_PATH= 00:05:28.227 +++ CONFIG_COVERAGE=y 00:05:28.227 +++ CONFIG_CUSTOMOCF=n 00:05:28.227 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:28.227 +++ CONFIG_WERROR=y 00:05:28.227 +++ CONFIG_DEBUG=y 00:05:28.227 +++ CONFIG_RDMA=y 00:05:28.227 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:28.227 +++ CONFIG_FUZZER=n 00:05:28.227 +++ CONFIG_FC=n 00:05:28.227 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:28.227 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:28.227 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:28.227 +++ CONFIG_CROSS_PREFIX= 00:05:28.227 +++ CONFIG_PREFIX=/usr/local 00:05:28.227 +++ CONFIG_HAVE_LIBBSD=n 00:05:28.227 +++ CONFIG_UBSAN=y 00:05:28.227 +++ CONFIG_PGO_CAPTURE=n 00:05:28.227 +++ CONFIG_UBLK=n 00:05:28.227 +++ CONFIG_ISAL_CRYPTO=y 00:05:28.227 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:28.227 +++ CONFIG_CRYPTO=n 00:05:28.227 +++ CONFIG_RBD=n 00:05:28.227 +++ CONFIG_LIBDIR= 00:05:28.227 +++ CONFIG_IPSEC_MB_DIR= 00:05:28.227 +++ CONFIG_PGO_USE=n 00:05:28.227 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:28.227 +++ CONFIG_GOLANG=n 00:05:28.227 +++ CONFIG_VHOST=y 00:05:28.227 +++ CONFIG_IDXD=y 00:05:28.227 +++ CONFIG_AVAHI=n 00:05:28.227 
+++ CONFIG_URING=n 00:05:28.227 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:28.227 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:28.227 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:28.227 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:28.227 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:28.227 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:28.227 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:28.227 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:28.227 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:28.227 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:28.227 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:28.227 +++ VHOST_APP=("$_app_dir/vhost") 00:05:28.227 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:28.227 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:28.227 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:28.227 +++ [[ #ifndef SPDK_CONFIG_H 00:05:28.227 #define SPDK_CONFIG_H 00:05:28.227 #define SPDK_CONFIG_APPS 1 00:05:28.227 #define SPDK_CONFIG_ARCH native 00:05:28.227 #define SPDK_CONFIG_ASAN 1 00:05:28.227 #undef SPDK_CONFIG_AVAHI 00:05:28.227 #undef SPDK_CONFIG_CET 00:05:28.227 #define SPDK_CONFIG_COVERAGE 1 00:05:28.227 #define SPDK_CONFIG_CROSS_PREFIX 00:05:28.227 #undef SPDK_CONFIG_CRYPTO 00:05:28.227 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:28.227 #undef SPDK_CONFIG_CUSTOMOCF 00:05:28.227 #undef SPDK_CONFIG_DAOS 00:05:28.227 #define SPDK_CONFIG_DAOS_DIR 00:05:28.227 #define SPDK_CONFIG_DEBUG 1 00:05:28.227 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:28.227 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:28.227 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:28.227 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:28.227 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:28.227 #undef SPDK_CONFIG_DPDK_UADK 00:05:28.227 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:28.227 #define SPDK_CONFIG_EXAMPLES 1 00:05:28.227 #undef SPDK_CONFIG_FC 00:05:28.227 #define SPDK_CONFIG_FC_PATH 00:05:28.227 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:28.227 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:28.227 #undef SPDK_CONFIG_FUSE 00:05:28.227 #undef SPDK_CONFIG_FUZZER 00:05:28.227 #define SPDK_CONFIG_FUZZER_LIB 00:05:28.227 #undef SPDK_CONFIG_GOLANG 00:05:28.227 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:28.227 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:05:28.227 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:28.227 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:28.227 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:28.227 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:28.227 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:28.227 #define SPDK_CONFIG_IDXD 1 00:05:28.227 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:28.227 #undef SPDK_CONFIG_IPSEC_MB 00:05:28.227 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:28.227 #define SPDK_CONFIG_ISAL 1 00:05:28.227 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:28.227 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:28.227 #define SPDK_CONFIG_LIBDIR 00:05:28.227 #undef SPDK_CONFIG_LTO 00:05:28.227 #define SPDK_CONFIG_MAX_LCORES 128 00:05:28.227 #define SPDK_CONFIG_NVME_CUSE 1 00:05:28.227 #undef SPDK_CONFIG_OCF 00:05:28.227 #define SPDK_CONFIG_OCF_PATH 00:05:28.227 #define SPDK_CONFIG_OPENSSL_PATH 00:05:28.227 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:28.227 #define SPDK_CONFIG_PGO_DIR 00:05:28.227 #undef SPDK_CONFIG_PGO_USE 00:05:28.227 #define SPDK_CONFIG_PREFIX /usr/local 00:05:28.227 #define SPDK_CONFIG_RAID5F 1 00:05:28.227 #undef 
SPDK_CONFIG_RBD 00:05:28.227 #define SPDK_CONFIG_RDMA 1 00:05:28.227 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:28.227 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:28.227 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:28.227 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:28.227 #undef SPDK_CONFIG_SHARED 00:05:28.227 #undef SPDK_CONFIG_SMA 00:05:28.227 #define SPDK_CONFIG_TESTS 1 00:05:28.227 #undef SPDK_CONFIG_TSAN 00:05:28.227 #undef SPDK_CONFIG_UBLK 00:05:28.227 #define SPDK_CONFIG_UBSAN 1 00:05:28.227 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:28.227 #undef SPDK_CONFIG_URING 00:05:28.227 #define SPDK_CONFIG_URING_PATH 00:05:28.227 #undef SPDK_CONFIG_URING_ZNS 00:05:28.227 #undef SPDK_CONFIG_USDT 00:05:28.227 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:28.227 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:28.227 #undef SPDK_CONFIG_VFIO_USER 00:05:28.227 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:28.227 #define SPDK_CONFIG_VHOST 1 00:05:28.227 #define SPDK_CONFIG_VIRTIO 1 00:05:28.227 #undef SPDK_CONFIG_VTUNE 00:05:28.227 #define SPDK_CONFIG_VTUNE_DIR 00:05:28.227 #define SPDK_CONFIG_WERROR 1 00:05:28.227 #define SPDK_CONFIG_WPDK_DIR 00:05:28.227 #undef SPDK_CONFIG_XNVME 00:05:28.227 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:28.227 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:28.227 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.227 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:28.227 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.227 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.227 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:28.227 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:28.227 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:28.227 ++++ export PATH 00:05:28.228 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:28.228 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:28.228 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:28.228 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:28.228 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:28.228 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:28.228 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:28.228 +++ TEST_TAG=N/A 00:05:28.228 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:28.228 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 
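The long backslash-escaped pattern a few lines above is simply bash's [[ string == *pattern* ]] test applied to the contents of include/spdk/config.h, verifying that SPDK_CONFIG_DEBUG was defined at build time. A compact sketch of the same check, using the path and macro name from this run:

#!/usr/bin/env bash
# Check whether a build config header defines a given macro, the same way the
# trace above tests for "#define SPDK_CONFIG_DEBUG".
config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h   # path from the trace
macro=SPDK_CONFIG_DEBUG

if [[ $(<"$config_h") == *"#define $macro"* ]]; then
    echo "$macro is enabled in this build"
else
    echo "$macro is not defined" >&2
fi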
00:05:28.228 ++++ uname -s 00:05:28.228 +++ PM_OS=Linux 00:05:28.228 +++ MONITOR_RESOURCES_SUDO=() 00:05:28.228 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:28.228 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:28.228 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:28.228 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:28.228 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:28.228 +++ SUDO[0]= 00:05:28.228 +++ SUDO[1]='sudo -E' 00:05:28.228 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:28.228 +++ [[ Linux == FreeBSD ]] 00:05:28.228 +++ [[ Linux == Linux ]] 00:05:28.228 +++ [[ QEMU != QEMU ]] 00:05:28.228 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:28.228 ++ : 0 00:05:28.228 ++ export RUN_NIGHTLY 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_RUN_VALGRIND 00:05:28.228 ++ : 1 00:05:28.228 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:28.228 ++ : 1 00:05:28.228 ++ export SPDK_TEST_UNITTEST 00:05:28.228 ++ : 00:05:28.228 ++ export SPDK_TEST_AUTOBUILD 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_RELEASE_BUILD 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_ISAL 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_ISCSI 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:28.228 ++ : 1 00:05:28.228 ++ export SPDK_TEST_NVME 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_NVME_PMR 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_NVME_BP 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_NVME_CLI 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_NVME_CUSE 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_NVME_FDP 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_NVMF 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_VFIOUSER 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_FUZZER 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_FUZZER_SHORT 00:05:28.228 ++ : rdma 00:05:28.228 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_RBD 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_VHOST 00:05:28.228 ++ : 1 00:05:28.228 ++ export SPDK_TEST_BLOCKDEV 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_IOAT 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_BLOBFS 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_VHOST_INIT 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_LVOL 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:28.228 ++ : 1 00:05:28.228 ++ export SPDK_RUN_ASAN 00:05:28.228 ++ : 1 00:05:28.228 ++ export SPDK_RUN_UBSAN 00:05:28.228 ++ : 00:05:28.228 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_RUN_NON_ROOT 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_CRYPTO 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_FTL 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_OCF 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_VMD 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_OPAL 00:05:28.228 ++ : 00:05:28.228 ++ export SPDK_TEST_NATIVE_DPDK 00:05:28.228 ++ : true 00:05:28.228 ++ export SPDK_AUTOTEST_X 00:05:28.228 ++ : 1 00:05:28.228 ++ export SPDK_TEST_RAID5 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_URING 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_USDT 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_USE_IGB_UIO 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_SCHEDULER 00:05:28.228 ++ : 0 
00:05:28.228 ++ export SPDK_TEST_SCANBUILD 00:05:28.228 ++ : 00:05:28.228 ++ export SPDK_TEST_NVMF_NICS 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_SMA 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_DAOS 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_XNVME 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_ACCEL_DSA 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_ACCEL_IAA 00:05:28.228 ++ : 00:05:28.228 ++ export SPDK_TEST_FUZZER_TARGET 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_TEST_NVMF_MDNS 00:05:28.228 ++ : 0 00:05:28.228 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:28.228 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:28.228 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:28.228 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:28.228 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:28.228 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:28.228 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:28.228 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:28.228 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:28.228 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:28.228 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:28.228 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:28.228 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:28.228 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:28.228 ++ PYTHONDONTWRITEBYTECODE=1 00:05:28.228 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:28.228 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:28.228 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:28.228 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:28.228 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:28.228 ++ rm -rf /var/tmp/asan_suppression_file 00:05:28.228 ++ cat 00:05:28.228 ++ echo leak:libfuse3.so 00:05:28.228 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:28.228 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:28.228 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:28.228 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:28.228 ++ '[' -z /var/spdk/dependencies ']' 00:05:28.228 ++ export DEPENDENCY_DIR 00:05:28.228 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:28.228 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:28.228 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:28.228 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:28.228 ++ export QEMU_BIN= 
00:05:28.228 ++ QEMU_BIN= 00:05:28.228 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:28.228 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:28.228 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:28.228 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:28.228 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:28.228 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:28.228 ++ '[' 0 -eq 0 ']' 00:05:28.228 ++ export valgrind= 00:05:28.228 ++ valgrind= 00:05:28.228 +++ uname -s 00:05:28.228 ++ '[' Linux = Linux ']' 00:05:28.228 ++ HUGEMEM=4096 00:05:28.228 ++ export CLEAR_HUGE=yes 00:05:28.228 ++ CLEAR_HUGE=yes 00:05:28.228 ++ [[ 0 -eq 1 ]] 00:05:28.228 ++ [[ 0 -eq 1 ]] 00:05:28.228 ++ MAKE=make 00:05:28.228 +++ nproc 00:05:28.228 ++ MAKEFLAGS=-j10 00:05:28.228 ++ export HUGEMEM=4096 00:05:28.228 ++ HUGEMEM=4096 00:05:28.228 ++ NO_HUGE=() 00:05:28.228 ++ TEST_MODE= 00:05:28.228 ++ [[ -z '' ]] 00:05:28.228 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:28.228 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:28.228 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:28.228 ++ exec 00:05:28.228 ++ set_test_storage 2147483648 00:05:28.228 ++ [[ -v testdir ]] 00:05:28.228 ++ local requested_size=2147483648 00:05:28.228 ++ local mount target_dir 00:05:28.228 ++ local -A mounts fss sizes avails uses 00:05:28.228 ++ local source fs size avail mount use 00:05:28.228 ++ local storage_fallback storage_candidates 00:05:28.228 +++ mktemp -udt spdk.XXXXXX 00:05:28.228 ++ storage_fallback=/tmp/spdk.CQXDeu 00:05:28.228 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:28.228 ++ [[ -n '' ]] 00:05:28.228 ++ [[ -n '' ]] 00:05:28.228 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.CQXDeu/tests/unit /tmp/spdk.CQXDeu 00:05:28.228 ++ requested_size=2214592512 00:05:28.228 ++ read -r source fs size use avail _ mount 00:05:28.228 +++ df -T 00:05:28.228 +++ grep -v Filesystem 00:05:28.228 ++ mounts["$mount"]=udev 00:05:28.228 ++ fss["$mount"]=devtmpfs 00:05:28.228 ++ avails["$mount"]=6224461824 00:05:28.228 ++ sizes["$mount"]=6224461824 00:05:28.228 ++ uses["$mount"]=0 00:05:28.228 ++ read -r source fs size use avail _ mount 00:05:28.228 ++ mounts["$mount"]=tmpfs 00:05:28.228 ++ fss["$mount"]=tmpfs 00:05:28.228 ++ avails["$mount"]=1253408768 00:05:28.228 ++ sizes["$mount"]=1254514688 00:05:28.228 ++ uses["$mount"]=1105920 00:05:28.228 ++ read -r source fs size use avail _ mount 00:05:28.228 ++ mounts["$mount"]=/dev/vda1 00:05:28.228 ++ fss["$mount"]=ext4 00:05:28.228 ++ avails["$mount"]=10424877056 00:05:28.228 ++ sizes["$mount"]=20616794112 00:05:28.228 ++ uses["$mount"]=10175139840 00:05:28.228 ++ read -r source fs size use avail _ mount 00:05:28.228 ++ mounts["$mount"]=tmpfs 00:05:28.228 ++ fss["$mount"]=tmpfs 00:05:28.228 ++ avails["$mount"]=6272561152 00:05:28.228 ++ sizes["$mount"]=6272561152 00:05:28.228 ++ uses["$mount"]=0 00:05:28.228 ++ read -r source fs size use avail _ mount 00:05:28.228 ++ mounts["$mount"]=tmpfs 00:05:28.228 ++ fss["$mount"]=tmpfs 00:05:28.228 ++ avails["$mount"]=5242880 00:05:28.228 ++ sizes["$mount"]=5242880 00:05:28.228 ++ uses["$mount"]=0 00:05:28.228 ++ read -r source fs size use avail _ mount 00:05:28.228 ++ mounts["$mount"]=tmpfs 00:05:28.228 ++ 
fss["$mount"]=tmpfs 00:05:28.228 ++ avails["$mount"]=6272561152 00:05:28.229 ++ sizes["$mount"]=6272561152 00:05:28.229 ++ uses["$mount"]=0 00:05:28.229 ++ read -r source fs size use avail _ mount 00:05:28.229 ++ mounts["$mount"]=/dev/loop1 00:05:28.229 ++ fss["$mount"]=squashfs 00:05:28.229 ++ avails["$mount"]=0 00:05:28.229 ++ sizes["$mount"]=41025536 00:05:28.229 ++ uses["$mount"]=41025536 00:05:28.229 ++ read -r source fs size use avail _ mount 00:05:28.229 ++ mounts["$mount"]=/dev/loop0 00:05:28.229 ++ fss["$mount"]=squashfs 00:05:28.229 ++ avails["$mount"]=0 00:05:28.229 ++ sizes["$mount"]=67108864 00:05:28.229 ++ uses["$mount"]=67108864 00:05:28.229 ++ read -r source fs size use avail _ mount 00:05:28.229 ++ mounts["$mount"]=/dev/loop2 00:05:28.229 ++ fss["$mount"]=squashfs 00:05:28.229 ++ avails["$mount"]=0 00:05:28.229 ++ sizes["$mount"]=96337920 00:05:28.229 ++ uses["$mount"]=96337920 00:05:28.229 ++ read -r source fs size use avail _ mount 00:05:28.229 ++ mounts["$mount"]=/dev/vda15 00:05:28.229 ++ fss["$mount"]=vfat 00:05:28.229 ++ avails["$mount"]=103089152 00:05:28.229 ++ sizes["$mount"]=109422592 00:05:28.229 ++ uses["$mount"]=6334464 00:05:28.229 ++ read -r source fs size use avail _ mount 00:05:28.229 ++ mounts["$mount"]=tmpfs 00:05:28.229 ++ fss["$mount"]=tmpfs 00:05:28.229 ++ avails["$mount"]=1254510592 00:05:28.229 ++ sizes["$mount"]=1254510592 00:05:28.229 ++ uses["$mount"]=0 00:05:28.229 ++ read -r source fs size use avail _ mount 00:05:28.229 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:05:28.229 ++ fss["$mount"]=fuse.sshfs 00:05:28.229 ++ avails["$mount"]=93131182080 00:05:28.229 ++ sizes["$mount"]=105088212992 00:05:28.229 ++ uses["$mount"]=6571597824 00:05:28.229 ++ read -r source fs size use avail _ mount 00:05:28.229 ++ printf '* Looking for test storage...\n' 00:05:28.229 * Looking for test storage... 
00:05:28.229 ++ local target_space new_size 00:05:28.229 ++ for target_dir in "${storage_candidates[@]}" 00:05:28.229 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:28.229 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:28.229 ++ mount=/ 00:05:28.229 ++ target_space=10424877056 00:05:28.229 ++ (( target_space == 0 || target_space < requested_size )) 00:05:28.229 ++ (( target_space >= requested_size )) 00:05:28.229 ++ [[ ext4 == tmpfs ]] 00:05:28.229 ++ [[ ext4 == ramfs ]] 00:05:28.229 ++ [[ / == / ]] 00:05:28.229 ++ new_size=12389732352 00:05:28.229 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:28.229 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:28.229 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:28.229 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:28.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:28.229 ++ return 0 00:05:28.229 ++ set -o errtrace 00:05:28.229 ++ shopt -s extdebug 00:05:28.229 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:28.229 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:28.229 21:19:01 unittest -- common/autotest_common.sh@1687 -- # true 00:05:28.229 21:19:01 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:28.229 21:19:01 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:28.229 21:19:01 unittest -- common/autotest_common.sh@29 -- # exec 00:05:28.229 21:19:01 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:28.229 21:19:01 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:28.229 21:19:01 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:28.229 21:19:01 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@181 -- # hash lcov 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:05:28.229 --rc lcov_branch_coverage=1 00:05:28.229 --rc lcov_function_coverage=1 00:05:28.229 --rc genhtml_branch_coverage=1 00:05:28.229 --rc genhtml_function_coverage=1 00:05:28.229 --rc genhtml_legend=1 00:05:28.229 --rc geninfo_all_blocks=1 00:05:28.229 ' 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@201 -- # 
LCOV_OPTS=' 00:05:28.229 --rc lcov_branch_coverage=1 00:05:28.229 --rc lcov_function_coverage=1 00:05:28.229 --rc genhtml_branch_coverage=1 00:05:28.229 --rc genhtml_function_coverage=1 00:05:28.229 --rc genhtml_legend=1 00:05:28.229 --rc geninfo_all_blocks=1 00:05:28.229 ' 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:05:28.229 --rc lcov_branch_coverage=1 00:05:28.229 --rc lcov_function_coverage=1 00:05:28.229 --rc genhtml_branch_coverage=1 00:05:28.229 --rc genhtml_function_coverage=1 00:05:28.229 --rc genhtml_legend=1 00:05:28.229 --rc geninfo_all_blocks=1 00:05:28.229 --no-external' 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:05:28.229 --rc lcov_branch_coverage=1 00:05:28.229 --rc lcov_function_coverage=1 00:05:28.229 --rc genhtml_branch_coverage=1 00:05:28.229 --rc genhtml_function_coverage=1 00:05:28.229 --rc genhtml_legend=1 00:05:28.229 --rc geninfo_all_blocks=1 00:05:28.229 --no-external' 00:05:28.229 21:19:01 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no 
functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:30.170 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:30.170 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:30.170 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:30.428 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:30.428 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:30.428 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:30.429 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:30.429 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:30.429 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:30.429 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:30.429 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:30.429 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:30.429 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:30.429 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:30.429 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:30.429 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:30.429 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:30.429 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:30.429 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:30.686 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:30.686 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:30.686 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:30.686 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:30.686 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:30.686 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:30.686 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:30.686 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:30.686 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:30.686 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:26.902 21:19:53 unittest -- unit/unittest.sh@208 -- # uname -m 00:06:26.902 21:19:53 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:06:26.902 21:19:53 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:26.902 ************************************ 00:06:26.902 START TEST unittest_pci_event 00:06:26.902 ************************************ 00:06:26.902 21:19:53 unittest.unittest_pci_event -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:26.902 00:06:26.902 00:06:26.902 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.902 http://cunit.sourceforge.net/ 00:06:26.902 00:06:26.902 00:06:26.902 Suite: pci_event 00:06:26.902 Test: test_pci_parse_event ...[2024-07-15 21:19:53.752088] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:26.902 passed 00:06:26.902 00:06:26.902 [2024-07-15 21:19:53.752433] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:26.902 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.902 suites 1 1 n/a 0 0 00:06:26.902 tests 1 1 1 0 0 00:06:26.902 asserts 15 15 15 0 n/a 00:06:26.902 00:06:26.902 Elapsed time = 0.001 seconds 00:06:26.902 00:06:26.902 real 0m0.034s 00:06:26.902 user 0m0.017s 00:06:26.902 sys 0m0.015s 00:06:26.902 21:19:53 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.902 21:19:53 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:06:26.902 ************************************ 00:06:26.902 END TEST unittest_pci_event 00:06:26.902 ************************************ 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:26.902 21:19:53 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:26.902 ************************************ 00:06:26.902 START TEST unittest_include 00:06:26.902 ************************************ 00:06:26.902 21:19:53 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:26.902 00:06:26.902 00:06:26.902 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.902 http://cunit.sourceforge.net/ 00:06:26.902 00:06:26.902 00:06:26.902 Suite: histogram 00:06:26.902 Test: histogram_test ...passed 00:06:26.902 Test: histogram_merge ...passed 00:06:26.902 00:06:26.902 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.902 suites 1 1 n/a 0 0 00:06:26.902 tests 2 2 2 0 0 00:06:26.902 asserts 50 50 50 0 n/a 00:06:26.902 00:06:26.902 Elapsed time = 0.008 seconds 00:06:26.902 00:06:26.902 real 0m0.044s 00:06:26.902 user 0m0.030s 00:06:26.902 sys 0m0.012s 00:06:26.902 21:19:53 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.902 21:19:53 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:06:26.902 ************************************ 00:06:26.902 END TEST unittest_include 00:06:26.902 ************************************ 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:26.902 21:19:53 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.902 21:19:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:26.902 
************************************ 00:06:26.902 START TEST unittest_bdev 00:06:26.902 ************************************ 00:06:26.902 21:19:53 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:06:26.902 21:19:53 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:26.902 00:06:26.902 00:06:26.902 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.902 http://cunit.sourceforge.net/ 00:06:26.902 00:06:26.902 00:06:26.902 Suite: bdev 00:06:26.902 Test: bytes_to_blocks_test ...passed 00:06:26.902 Test: num_blocks_test ...passed 00:06:26.902 Test: io_valid_test ...passed 00:06:26.902 Test: open_write_test ...[2024-07-15 21:19:53.993139] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:26.902 [2024-07-15 21:19:53.993405] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:26.902 [2024-07-15 21:19:53.993503] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:26.902 passed 00:06:26.902 Test: claim_test ...passed 00:06:26.902 Test: alias_add_del_test ...[2024-07-15 21:19:54.072052] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:26.902 [2024-07-15 21:19:54.072180] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:26.902 [2024-07-15 21:19:54.072220] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:26.902 passed 00:06:26.902 Test: get_device_stat_test ...passed 00:06:26.902 Test: bdev_io_types_test ...passed 00:06:26.902 Test: bdev_io_wait_test ...passed 00:06:26.902 Test: bdev_io_spans_split_test ...passed 00:06:26.902 Test: bdev_io_boundary_split_test ...passed 00:06:26.902 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-15 21:19:54.230405] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:26.902 passed 00:06:26.902 Test: bdev_io_mix_split_test ...passed 00:06:26.902 Test: bdev_io_split_with_io_wait ...passed 00:06:26.902 Test: bdev_io_write_unit_split_test ...[2024-07-15 21:19:54.350876] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:26.902 [2024-07-15 21:19:54.350974] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:26.902 [2024-07-15 21:19:54.351000] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:26.902 [2024-07-15 21:19:54.351049] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:26.902 passed 00:06:26.902 Test: bdev_io_alignment_with_boundary ...passed 00:06:26.903 Test: bdev_io_alignment ...passed 00:06:26.903 Test: bdev_histograms ...passed 00:06:26.903 Test: bdev_write_zeroes ...passed 00:06:26.903 Test: bdev_compare_and_write ...passed 00:06:26.903 Test: bdev_compare ...passed 00:06:26.903 Test: bdev_compare_emulated ...passed 00:06:26.903 Test: bdev_zcopy_write ...passed 
00:06:26.903 Test: bdev_zcopy_read ...passed 00:06:26.903 Test: bdev_open_while_hotremove ...passed 00:06:26.903 Test: bdev_close_while_hotremove ...passed 00:06:26.903 Test: bdev_open_ext_test ...[2024-07-15 21:19:54.802646] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:26.903 passed 00:06:26.903 Test: bdev_open_ext_unregister ...[2024-07-15 21:19:54.802862] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:26.903 passed 00:06:26.903 Test: bdev_set_io_timeout ...passed 00:06:26.903 Test: bdev_set_qd_sampling ...passed 00:06:26.903 Test: lba_range_overlap ...passed 00:06:26.903 Test: lock_lba_range_check_ranges ...passed 00:06:26.903 Test: lock_lba_range_with_io_outstanding ...passed 00:06:26.903 Test: lock_lba_range_overlapped ...passed 00:06:26.903 Test: bdev_quiesce ...[2024-07-15 21:19:55.018747] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:06:26.903 passed 00:06:26.903 Test: bdev_io_abort ...passed 00:06:26.903 Test: bdev_unmap ...passed 00:06:26.903 Test: bdev_write_zeroes_split_test ...passed 00:06:26.903 Test: bdev_set_options_test ...passed 00:06:26.903 Test: bdev_get_memory_domains ...passed 00:06:26.903 Test: bdev_io_ext ...[2024-07-15 21:19:55.154448] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:26.903 passed 00:06:26.903 Test: bdev_io_ext_no_opts ...passed 00:06:26.903 Test: bdev_io_ext_invalid_opts ...passed 00:06:26.903 Test: bdev_io_ext_split ...passed 00:06:26.903 Test: bdev_io_ext_bounce_buffer ...passed 00:06:26.903 Test: bdev_register_uuid_alias ...[2024-07-15 21:19:55.368948] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name b8e208a7-af92-40f4-ae03-627b6b0fc84e already exists 00:06:26.903 [2024-07-15 21:19:55.369040] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:b8e208a7-af92-40f4-ae03-627b6b0fc84e alias for bdev bdev0 00:06:26.903 passed 00:06:26.903 Test: bdev_unregister_by_name ...passed 00:06:26.903 Test: for_each_bdev_test ...passed 00:06:26.903 Test: bdev_seek_test ...[2024-07-15 21:19:55.389577] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:26.903 [2024-07-15 21:19:55.389635] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7982:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:06:26.903 passed 00:06:26.903 Test: bdev_copy ...passed 00:06:26.903 Test: bdev_copy_split_test ...passed 00:06:26.903 Test: examine_locks ...passed 00:06:26.903 Test: claim_v2_rwo ...[2024-07-15 21:19:55.509726] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.509789] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.509803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.509854] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.509868] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.509907] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:26.903 passed 00:06:26.903 Test: claim_v2_rom ...[2024-07-15 21:19:55.510039] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510099] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510119] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510139] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510183] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:26.903 [2024-07-15 21:19:55.510213] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:26.903 passed 00:06:26.903 Test: claim_v2_rwm ...[2024-07-15 21:19:55.510303] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:26.903 [2024-07-15 21:19:55.510352] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510374] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510394] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510409] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many 
by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510474] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:26.903 passed 00:06:26.903 Test: claim_v2_existing_writer ...[2024-07-15 21:19:55.510595] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:26.903 passed 00:06:26.903 Test: claim_v2_existing_v1 ...[2024-07-15 21:19:55.510623] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:26.903 [2024-07-15 21:19:55.510731] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510759] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510774] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:26.903 passed 00:06:26.903 Test: claim_v1_existing_v2 ...[2024-07-15 21:19:55.510877] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510923] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.903 [2024-07-15 21:19:55.510958] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.903 passed 00:06:26.903 Test: examine_claimed ...[2024-07-15 21:19:55.511216] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:26.903 passed 00:06:26.903 00:06:26.903 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.903 suites 1 1 n/a 0 0 00:06:26.903 tests 59 59 59 0 0 00:06:26.903 asserts 4599 4599 4599 0 n/a 00:06:26.903 00:06:26.903 Elapsed time = 1.590 seconds 00:06:26.903 21:19:55 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:26.903 00:06:26.903 00:06:26.903 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.903 http://cunit.sourceforge.net/ 00:06:26.903 00:06:26.903 00:06:26.903 Suite: nvme 00:06:26.903 Test: test_create_ctrlr ...passed 00:06:26.903 Test: test_reset_ctrlr ...[2024-07-15 21:19:55.559164] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:26.903 passed 00:06:26.903 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:26.903 Test: test_failover_ctrlr ...passed 00:06:26.903 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-15 21:19:55.561877] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.903 [2024-07-15 21:19:55.562127] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.903 [2024-07-15 21:19:55.562348] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.903 passed 00:06:26.903 Test: test_pending_reset ...[2024-07-15 21:19:55.563949] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.903 [2024-07-15 21:19:55.564187] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.903 passed 00:06:26.903 Test: test_attach_ctrlr ...[2024-07-15 21:19:55.565308] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:26.903 passed 00:06:26.903 Test: test_aer_cb ...passed 00:06:26.903 Test: test_submit_nvme_cmd ...passed 00:06:26.903 Test: test_add_remove_trid ...passed 00:06:26.903 Test: test_abort ...[2024-07-15 21:19:55.568902] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:26.903 passed 00:06:26.903 Test: test_get_io_qpair ...passed 00:06:26.903 Test: test_bdev_unregister ...passed 00:06:26.903 Test: test_compare_ns ...passed 00:06:26.903 Test: test_init_ana_log_page ...passed 00:06:26.903 Test: test_get_memory_domains ...passed 00:06:26.903 Test: test_reconnect_qpair ...[2024-07-15 21:19:55.571672] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.903 passed 00:06:26.903 Test: test_create_bdev_ctrlr ...[2024-07-15 21:19:55.572216] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:26.903 passed 00:06:26.903 Test: test_add_multi_ns_to_bdev ...[2024-07-15 21:19:55.573685] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:26.903 passed 00:06:26.903 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:26.903 Test: test_admin_path ...passed 00:06:26.903 Test: test_reset_bdev_ctrlr ...passed 00:06:26.903 Test: test_find_io_path ...passed 00:06:26.903 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:26.903 Test: test_retry_io_for_io_path_error ...passed 00:06:26.903 Test: test_retry_io_count ...passed 00:06:26.904 Test: test_concurrent_read_ana_log_page ...passed 00:06:26.904 Test: test_retry_io_for_ana_error ...passed 00:06:26.904 Test: test_check_io_error_resiliency_params ...[2024-07-15 21:19:55.580877] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:06:26.904 [2024-07-15 21:19:55.580963] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:26.904 [2024-07-15 21:19:55.580991] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:26.904 [2024-07-15 21:19:55.581029] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:26.904 [2024-07-15 21:19:55.581050] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:26.904 [2024-07-15 21:19:55.581079] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:26.904 [2024-07-15 21:19:55.581098] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:26.904 [2024-07-15 21:19:55.581141] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:26.904 [2024-07-15 21:19:55.581168] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:26.904 passed 00:06:26.904 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:06:26.904 Test: test_reconnect_ctrlr ...[2024-07-15 21:19:55.582052] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 [2024-07-15 21:19:55.582213] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 [2024-07-15 21:19:55.582473] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 [2024-07-15 21:19:55.582602] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 [2024-07-15 21:19:55.582746] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 passed 00:06:26.904 Test: test_retry_failover_ctrlr ...[2024-07-15 21:19:55.583124] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 passed 00:06:26.904 Test: test_fail_path ...[2024-07-15 21:19:55.583748] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 [2024-07-15 21:19:55.583899] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:26.904 [2024-07-15 21:19:55.584057] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 [2024-07-15 21:19:55.584175] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 [2024-07-15 21:19:55.584334] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 passed 00:06:26.904 Test: test_nvme_ns_cmp ...passed 00:06:26.904 Test: test_ana_transition ...passed 00:06:26.904 Test: test_set_preferred_path ...passed 00:06:26.904 Test: test_find_next_io_path ...passed 00:06:26.904 Test: test_find_io_path_min_qd ...passed 00:06:26.904 Test: test_disable_auto_failback ...[2024-07-15 21:19:55.586035] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 passed 00:06:26.904 Test: test_set_multipath_policy ...passed 00:06:26.904 Test: test_uuid_generation ...passed 00:06:26.904 Test: test_retry_io_to_same_path ...passed 00:06:26.904 Test: test_race_between_reset_and_disconnected ...passed 00:06:26.904 Test: test_ctrlr_op_rpc ...passed 00:06:26.904 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:26.904 Test: test_disable_enable_ctrlr ...[2024-07-15 21:19:55.589777] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 [2024-07-15 21:19:55.589940] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.904 passed 00:06:26.904 Test: test_delete_ctrlr_done ...passed 00:06:26.904 Test: test_ns_remove_during_reset ...passed 00:06:26.904 Test: test_io_path_is_current ...passed 00:06:26.904 00:06:26.904 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.904 suites 1 1 n/a 0 0 00:06:26.904 tests 49 49 49 0 0 00:06:26.904 asserts 3577 3577 3577 0 n/a 00:06:26.904 00:06:26.904 Elapsed time = 0.033 seconds 00:06:26.904 21:19:55 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:26.904 00:06:26.904 00:06:26.904 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.904 http://cunit.sourceforge.net/ 00:06:26.904 00:06:26.904 Test Options 00:06:26.904 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:26.904 00:06:26.904 Suite: raid 00:06:26.904 Test: test_create_raid ...passed 00:06:26.904 Test: test_create_raid_superblock ...passed 00:06:26.904 Test: test_delete_raid ...passed 00:06:26.904 Test: test_create_raid_invalid_args ...[2024-07-15 21:19:55.643781] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:26.904 [2024-07-15 21:19:55.644163] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:26.904 [2024-07-15 21:19:55.644692] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:26.904 [2024-07-15 21:19:55.644884] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:26.904 [2024-07-15 
21:19:55.644954] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:26.904 [2024-07-15 21:19:55.645688] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:26.904 [2024-07-15 21:19:55.645722] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:26.904 passed 00:06:26.904 Test: test_delete_raid_invalid_args ...passed 00:06:26.904 Test: test_io_channel ...passed 00:06:26.904 Test: test_reset_io ...passed 00:06:26.904 Test: test_multi_raid ...passed 00:06:26.904 Test: test_io_type_supported ...passed 00:06:26.904 Test: test_raid_json_dump_info ...passed 00:06:26.904 Test: test_context_size ...passed 00:06:26.904 Test: test_raid_level_conversions ...passed 00:06:26.904 Test: test_raid_io_split ...passed 00:06:26.904 Test: test_raid_process ...passed 00:06:26.904 00:06:26.904 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.904 suites 1 1 n/a 0 0 00:06:26.904 tests 14 14 14 0 0 00:06:26.904 asserts 6183 6183 6183 0 n/a 00:06:26.904 00:06:26.904 Elapsed time = 0.017 seconds 00:06:26.904 21:19:55 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:26.904 00:06:26.904 00:06:26.904 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.904 http://cunit.sourceforge.net/ 00:06:26.904 00:06:26.904 00:06:26.904 Suite: raid_sb 00:06:26.904 Test: test_raid_bdev_write_superblock ...passed 00:06:26.904 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:26.904 Test: test_raid_bdev_parse_superblock ...[2024-07-15 21:19:55.702549] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:26.904 passed 00:06:26.904 Suite: raid_sb_md 00:06:26.904 Test: test_raid_bdev_write_superblock ...passed 00:06:26.904 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:26.904 Test: test_raid_bdev_parse_superblock ...[2024-07-15 21:19:55.703319] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:26.904 passed 00:06:26.904 Suite: raid_sb_md_interleaved 00:06:26.904 Test: test_raid_bdev_write_superblock ...passed 00:06:26.904 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:26.904 Test: test_raid_bdev_parse_superblock ...[2024-07-15 21:19:55.703737] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:26.904 passed 00:06:26.904 00:06:26.904 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.904 suites 3 3 n/a 0 0 00:06:26.904 tests 9 9 9 0 0 00:06:26.904 asserts 139 139 139 0 n/a 00:06:26.904 00:06:26.904 Elapsed time = 0.002 seconds 00:06:26.904 21:19:55 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:26.904 00:06:26.904 00:06:26.904 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.904 http://cunit.sourceforge.net/ 00:06:26.904 00:06:26.904 00:06:26.904 Suite: concat 00:06:26.904 Test: test_concat_start ...passed 00:06:26.904 Test: 
test_concat_rw ...passed 00:06:26.904 Test: test_concat_null_payload ...passed 00:06:26.904 00:06:26.904 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.904 suites 1 1 n/a 0 0 00:06:26.904 tests 3 3 3 0 0 00:06:26.904 asserts 8460 8460 8460 0 n/a 00:06:26.904 00:06:26.904 Elapsed time = 0.007 seconds 00:06:26.904 21:19:55 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:06:26.904 00:06:26.904 00:06:26.904 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.904 http://cunit.sourceforge.net/ 00:06:26.904 00:06:26.904 00:06:26.904 Suite: raid0 00:06:26.904 Test: test_write_io ...passed 00:06:26.904 Test: test_read_io ...passed 00:06:26.904 Test: test_unmap_io ...passed 00:06:26.904 Test: test_io_failure ...passed 00:06:26.904 Suite: raid0_dif 00:06:26.904 Test: test_write_io ...passed 00:06:26.904 Test: test_read_io ...passed 00:06:26.904 Test: test_unmap_io ...passed 00:06:26.904 Test: test_io_failure ...passed 00:06:26.904 00:06:26.904 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.904 suites 2 2 n/a 0 0 00:06:26.904 tests 8 8 8 0 0 00:06:26.904 asserts 368291 368291 368291 0 n/a 00:06:26.904 00:06:26.905 Elapsed time = 0.100 seconds 00:06:26.905 21:19:55 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:26.905 00:06:26.905 00:06:26.905 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.905 http://cunit.sourceforge.net/ 00:06:26.905 00:06:26.905 00:06:26.905 Suite: raid1 00:06:26.905 Test: test_raid1_start ...passed 00:06:26.905 Test: test_raid1_read_balancing ...passed 00:06:26.905 Test: test_raid1_write_error ...passed 00:06:26.905 Test: test_raid1_read_error ...passed 00:06:26.905 00:06:26.905 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.905 suites 1 1 n/a 0 0 00:06:26.905 tests 4 4 4 0 0 00:06:26.905 asserts 4374 4374 4374 0 n/a 00:06:26.905 00:06:26.905 Elapsed time = 0.005 seconds 00:06:26.905 21:19:55 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:26.905 00:06:26.905 00:06:26.905 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.905 http://cunit.sourceforge.net/ 00:06:26.905 00:06:26.905 00:06:26.905 Suite: zone 00:06:26.905 Test: test_zone_get_operation ...passed 00:06:26.905 Test: test_bdev_zone_get_info ...passed 00:06:26.905 Test: test_bdev_zone_management ...passed 00:06:26.905 Test: test_bdev_zone_append ...passed 00:06:26.905 Test: test_bdev_zone_append_with_md ...passed 00:06:26.905 Test: test_bdev_zone_appendv ...passed 00:06:26.905 Test: test_bdev_zone_appendv_with_md ...passed 00:06:26.905 Test: test_bdev_io_get_append_location ...passed 00:06:26.905 00:06:26.905 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.905 suites 1 1 n/a 0 0 00:06:26.905 tests 8 8 8 0 0 00:06:26.905 asserts 94 94 94 0 n/a 00:06:26.905 00:06:26.905 Elapsed time = 0.000 seconds 00:06:26.905 21:19:56 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:26.905 00:06:26.905 00:06:26.905 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.905 http://cunit.sourceforge.net/ 00:06:26.905 00:06:26.905 00:06:26.905 Suite: gpt_parse 00:06:26.905 Test: test_parse_mbr_and_primary ...[2024-07-15 21:19:56.032783] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related 
buffer should not be NULL 00:06:26.905 [2024-07-15 21:19:56.033054] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:26.905 [2024-07-15 21:19:56.033092] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:26.905 [2024-07-15 21:19:56.033179] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:26.905 [2024-07-15 21:19:56.033224] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:26.905 [2024-07-15 21:19:56.033308] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:26.905 passed 00:06:26.905 Test: test_parse_secondary ...[2024-07-15 21:19:56.033891] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:26.905 [2024-07-15 21:19:56.033944] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:26.905 [2024-07-15 21:19:56.033973] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:26.905 [2024-07-15 21:19:56.033998] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:26.905 passed 00:06:26.905 Test: test_check_mbr ...[2024-07-15 21:19:56.034565] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:26.905 [2024-07-15 21:19:56.034611] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:26.905 passed 00:06:26.905 Test: test_read_header ...[2024-07-15 21:19:56.034658] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:26.905 [2024-07-15 21:19:56.034741] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:26.905 [2024-07-15 21:19:56.034829] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:26.905 [2024-07-15 21:19:56.034867] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:26.905 [2024-07-15 21:19:56.034890] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:26.905 passed 00:06:26.905 Test: test_read_partitions ...[2024-07-15 21:19:56.034915] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:26.905 [2024-07-15 21:19:56.034964] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:26.905 [2024-07-15 21:19:56.035004] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:26.905 [2024-07-15 21:19:56.035029] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:26.905 [2024-07-15 21:19:56.035046] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:26.905 [2024-07-15 21:19:56.035381] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:26.905 passed 00:06:26.905 00:06:26.905 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.905 suites 1 1 n/a 0 0 00:06:26.905 tests 5 5 5 0 0 00:06:26.905 asserts 33 33 33 0 n/a 00:06:26.905 00:06:26.905 Elapsed time = 0.003 seconds 00:06:26.905 21:19:56 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:26.905 00:06:26.905 00:06:26.905 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.905 http://cunit.sourceforge.net/ 00:06:26.905 00:06:26.905 00:06:26.905 Suite: bdev_part 00:06:26.905 Test: part_test ...[2024-07-15 21:19:56.077911] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 0d076f6d-0b67-599b-8426-f2d3f31ef32f already exists 00:06:26.905 [2024-07-15 21:19:56.078213] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:0d076f6d-0b67-599b-8426-f2d3f31ef32f alias for bdev test1 00:06:26.905 passed 00:06:26.905 Test: part_free_test ...passed 00:06:26.905 Test: part_get_io_channel_test ...passed 00:06:26.905 Test: part_construct_ext ...passed 00:06:26.905 00:06:26.905 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.905 suites 1 1 n/a 0 0 00:06:26.905 tests 4 4 4 0 0 00:06:26.905 asserts 48 48 48 0 n/a 00:06:26.905 00:06:26.905 Elapsed time = 0.051 seconds 00:06:26.905 21:19:56 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:26.905 00:06:26.905 00:06:26.905 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.905 http://cunit.sourceforge.net/ 00:06:26.905 00:06:26.905 00:06:26.905 Suite: scsi_nvme_suite 00:06:26.905 Test: scsi_nvme_translate_test ...passed 00:06:26.905 00:06:26.905 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.905 suites 1 1 n/a 0 0 00:06:26.905 tests 1 1 1 0 0 00:06:26.905 asserts 104 104 104 0 n/a 00:06:26.905 00:06:26.905 Elapsed time = 0.000 seconds 00:06:26.905 21:19:56 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:26.905 00:06:26.905 00:06:26.905 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.905 http://cunit.sourceforge.net/ 00:06:26.905 00:06:26.905 00:06:26.905 Suite: lvol 00:06:26.905 Test: ut_lvs_init ...[2024-07-15 21:19:56.208099] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:26.905 [2024-07-15 21:19:56.209087] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:26.905 passed 00:06:26.905 Test: ut_lvol_init ...passed 00:06:26.905 Test: ut_lvol_snapshot ...passed 00:06:26.905 Test: ut_lvol_clone ...passed 00:06:26.905 Test: ut_lvs_destroy ...passed 00:06:26.905 Test: ut_lvs_unload ...passed 00:06:26.905 Test: ut_lvol_resize ...[2024-07-15 21:19:56.210966] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:26.905 passed 00:06:26.905 Test: ut_lvol_set_read_only ...passed 00:06:26.905 Test: ut_lvol_hotremove ...passed 00:06:26.905 Test: 
ut_vbdev_lvol_get_io_channel ...passed 00:06:26.905 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:26.905 Test: ut_lvol_read_write ...passed 00:06:26.905 Test: ut_vbdev_lvol_submit_request ...passed 00:06:26.905 Test: ut_lvol_examine_config ...passed 00:06:26.905 Test: ut_lvol_examine_disk ...[2024-07-15 21:19:56.211899] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:26.905 passed 00:06:26.905 Test: ut_lvol_rename ...[2024-07-15 21:19:56.213148] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:26.905 [2024-07-15 21:19:56.213380] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:26.905 passed 00:06:26.905 Test: ut_bdev_finish ...passed 00:06:26.905 Test: ut_lvs_rename ...passed 00:06:26.905 Test: ut_lvol_seek ...passed 00:06:26.905 Test: ut_esnap_dev_create ...[2024-07-15 21:19:56.214397] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:26.905 [2024-07-15 21:19:56.214583] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:26.905 [2024-07-15 21:19:56.214694] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:26.905 passed 00:06:26.905 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-15 21:19:56.214925] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:26.905 [2024-07-15 21:19:56.215042] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:26.905 passed 00:06:26.905 Test: ut_lvol_shallow_copy ...[2024-07-15 21:19:56.215611] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:06:26.905 [2024-07-15 21:19:56.215747] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:06:26.905 passed 00:06:26.906 Test: ut_lvol_set_external_parent ...[2024-07-15 21:19:56.215990] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:26.906 passed 00:06:26.906 00:06:26.906 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.906 suites 1 1 n/a 0 0 00:06:26.906 tests 23 23 23 0 0 00:06:26.906 asserts 770 770 770 0 n/a 00:06:26.906 00:06:26.906 Elapsed time = 0.007 seconds 00:06:26.906 21:19:56 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:26.906 00:06:26.906 00:06:26.906 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.906 http://cunit.sourceforge.net/ 00:06:26.906 00:06:26.906 00:06:26.906 Suite: zone_block 00:06:26.906 Test: test_zone_block_create ...passed 00:06:26.906 Test: test_zone_block_create_invalid ...[2024-07-15 21:19:56.278744] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base 
bdev Nvme0n1 already claimed 00:06:26.906 [2024-07-15 21:19:56.279061] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 21:19:56.279230] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:26.906 [2024-07-15 21:19:56.279288] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 21:19:56.279442] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:26.906 [2024-07-15 21:19:56.279476] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-15 21:19:56.279568] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:26.906 [2024-07-15 21:19:56.279612] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:26.906 Test: test_get_zone_info ...[2024-07-15 21:19:56.280054] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.280123] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.280179] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 passed 00:06:26.906 Test: test_supported_io_types ...passed 00:06:26.906 Test: test_reset_zone ...[2024-07-15 21:19:56.281030] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.281090] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 passed 00:06:26.906 Test: test_open_zone ...[2024-07-15 21:19:56.281511] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.282149] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.282222] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 passed 00:06:26.906 Test: test_zone_write ...[2024-07-15 21:19:56.282736] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:26.906 [2024-07-15 21:19:56.282789] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:26.906 [2024-07-15 21:19:56.282843] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:26.906 [2024-07-15 21:19:56.282904] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.287820] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:26.906 [2024-07-15 21:19:56.287921] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.288002] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:26.906 [2024-07-15 21:19:56.288040] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.292420] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:26.906 [2024-07-15 21:19:56.292498] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 passed 00:06:26.906 Test: test_zone_read ...[2024-07-15 21:19:56.292971] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:26.906 [2024-07-15 21:19:56.293004] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.293062] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:26.906 [2024-07-15 21:19:56.293088] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.293488] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:26.906 [2024-07-15 21:19:56.293517] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 passed 00:06:26.906 Test: test_close_zone ...[2024-07-15 21:19:56.293850] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.293932] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.294114] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.294157] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:26.906 passed 00:06:26.906 Test: test_finish_zone ...[2024-07-15 21:19:56.294707] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.294763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 passed 00:06:26.906 Test: test_append_zone ...[2024-07-15 21:19:56.295124] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:26.906 [2024-07-15 21:19:56.295167] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.295216] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:26.906 [2024-07-15 21:19:56.295238] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 [2024-07-15 21:19:56.303839] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:26.906 [2024-07-15 21:19:56.303920] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:26.906 passed 00:06:26.906 00:06:26.906 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.906 suites 1 1 n/a 0 0 00:06:26.906 tests 11 11 11 0 0 00:06:26.906 asserts 3437 3437 3437 0 n/a 00:06:26.906 00:06:26.906 Elapsed time = 0.026 seconds 00:06:26.906 21:19:56 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:26.906 00:06:26.906 00:06:26.906 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.906 http://cunit.sourceforge.net/ 00:06:26.906 00:06:26.906 00:06:26.906 Suite: bdev 00:06:26.906 Test: basic ...[2024-07-15 21:19:56.418173] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55bd353897c1): Operation not permitted (rc=-1) 00:06:26.906 [2024-07-15 21:19:56.418495] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55bd35389780): Operation not permitted (rc=-1) 00:06:26.906 [2024-07-15 21:19:56.418538] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55bd353897c1): Operation not permitted (rc=-1) 00:06:26.906 passed 00:06:26.906 Test: unregister_and_close ...passed 00:06:26.906 Test: unregister_and_close_different_threads ...passed 00:06:26.906 Test: basic_qos ...passed 00:06:26.906 Test: put_channel_during_reset ...passed 00:06:26.906 Test: aborted_reset ...passed 00:06:26.907 Test: aborted_reset_no_outstanding_io ...passed 00:06:26.907 Test: io_during_reset ...passed 00:06:26.907 Test: reset_completions ...passed 00:06:26.907 Test: io_during_qos_queue ...passed 00:06:26.907 Test: io_during_qos_reset ...passed 00:06:26.907 Test: enomem ...passed 00:06:26.907 Test: enomem_multi_bdev ...passed 00:06:26.907 Test: enomem_multi_bdev_unregister ...passed 00:06:26.907 Test: enomem_multi_io_target ...passed 00:06:26.907 Test: qos_dynamic_enable ...passed 00:06:26.907 Test: 
bdev_histograms_mt ...passed 00:06:26.907 Test: bdev_set_io_timeout_mt ...[2024-07-15 21:19:57.266869] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:26.907 passed 00:06:26.907 Test: lock_lba_range_then_submit_io ...[2024-07-15 21:19:57.289909] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x55bd35389740 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:26.907 passed 00:06:26.907 Test: unregister_during_reset ...passed 00:06:26.907 Test: event_notify_and_close ...passed 00:06:26.907 Test: unregister_and_qos_poller ...passed 00:06:26.907 Suite: bdev_wrong_thread 00:06:26.907 Test: spdk_bdev_register_wt ...[2024-07-15 21:19:57.450920] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000158b80 (0x619000158b80) 00:06:26.907 passed 00:06:26.907 Test: spdk_bdev_examine_wt ...[2024-07-15 21:19:57.451302] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000158b80 (0x619000158b80) 00:06:26.907 passed 00:06:26.907 00:06:26.907 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.907 suites 2 2 n/a 0 0 00:06:26.907 tests 24 24 24 0 0 00:06:26.907 asserts 621 621 621 0 n/a 00:06:26.907 00:06:26.907 Elapsed time = 1.054 seconds 00:06:26.907 00:06:26.907 real 0m3.584s 00:06:26.907 user 0m1.582s 00:06:26.907 sys 0m2.001s 00:06:26.907 21:19:57 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.907 21:19:57 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:26.907 ************************************ 00:06:26.907 END TEST unittest_bdev 00:06:26.907 ************************************ 00:06:26.907 21:19:57 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:26.907 21:19:57 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:26.907 21:19:57 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:26.907 21:19:57 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:26.907 21:19:57 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:26.907 21:19:57 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:26.907 21:19:57 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.907 21:19:57 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.907 21:19:57 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:26.907 ************************************ 00:06:26.907 START TEST unittest_bdev_raid5f 00:06:26.907 ************************************ 00:06:26.907 21:19:57 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:26.907 00:06:26.907 00:06:26.907 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.907 http://cunit.sourceforge.net/ 00:06:26.907 00:06:26.907 00:06:26.907 Suite: raid5f 00:06:26.907 Test: test_raid5f_start ...passed 00:06:26.907 Test: test_raid5f_submit_read_request ...passed 00:06:26.907 Test: test_raid5f_stripe_request_map_iovecs 
...passed 00:06:30.205 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:56.751 Test: test_raid5f_chunk_write_error ...passed 00:07:04.876 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:09.068 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:47.787 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:47.787 00:07:47.787 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.787 suites 1 1 n/a 0 0 00:07:47.787 tests 8 8 8 0 0 00:07:47.787 asserts 518158 518158 518158 0 n/a 00:07:47.787 00:07:47.787 Elapsed time = 78.599 seconds 00:07:47.787 00:07:47.787 real 1m18.723s 00:07:47.787 user 1m15.168s 00:07:47.787 sys 0m3.520s 00:07:47.787 21:21:16 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.787 21:21:16 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:07:47.787 ************************************ 00:07:47.787 END TEST unittest_bdev_raid5f 00:07:47.787 ************************************ 00:07:47.787 21:21:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:47.787 21:21:16 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:07:47.787 21:21:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.787 21:21:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.787 21:21:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:47.787 ************************************ 00:07:47.787 START TEST unittest_blob_blobfs 00:07:47.787 ************************************ 00:07:47.787 21:21:16 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:07:47.787 21:21:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:47.787 21:21:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:47.787 00:07:47.787 00:07:47.787 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.787 http://cunit.sourceforge.net/ 00:07:47.787 00:07:47.787 00:07:47.787 Suite: blob_nocopy_noextent 00:07:47.787 Test: blob_init ...[2024-07-15 21:21:16.384039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:47.787 passed 00:07:47.787 Test: blob_thin_provision ...passed 00:07:47.787 Test: blob_read_only ...passed 00:07:47.788 Test: bs_load ...[2024-07-15 21:21:16.461977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:47.788 passed 00:07:47.788 Test: bs_load_custom_cluster_size ...passed 00:07:47.788 Test: bs_load_after_failed_grow ...passed 00:07:47.788 Test: bs_cluster_sz ...[2024-07-15 21:21:16.490877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:47.788 [2024-07-15 21:21:16.491197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:47.788 [2024-07-15 21:21:16.491343] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:47.788 passed 00:07:47.788 Test: bs_resize_md ...passed 00:07:47.788 Test: bs_destroy ...passed 00:07:47.788 Test: bs_type ...passed 00:07:47.788 Test: bs_super_block ...passed 00:07:47.788 Test: bs_test_recover_cluster_count ...passed 00:07:47.788 Test: bs_grow_live ...passed 00:07:47.788 Test: bs_grow_live_no_space ...passed 00:07:47.788 Test: bs_test_grow ...passed 00:07:47.788 Test: blob_serialize_test ...passed 00:07:47.788 Test: super_block_crc ...passed 00:07:47.788 Test: blob_thin_prov_write_count_io ...passed 00:07:47.788 Test: blob_thin_prov_unmap_cluster ...passed 00:07:47.788 Test: bs_load_iter_test ...passed 00:07:47.788 Test: blob_relations ...[2024-07-15 21:21:16.686626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.788 [2024-07-15 21:21:16.686793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:16.687668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.788 [2024-07-15 21:21:16.687768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 passed 00:07:47.788 Test: blob_relations2 ...[2024-07-15 21:21:16.702008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.788 [2024-07-15 21:21:16.702164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:16.702210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.788 [2024-07-15 21:21:16.702255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:16.703650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.788 [2024-07-15 21:21:16.703741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:16.704186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.788 [2024-07-15 21:21:16.704273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 passed 00:07:47.788 Test: blob_relations3 ...passed 00:07:47.788 Test: blobstore_clean_power_failure ...passed 00:07:47.788 Test: blob_delete_snapshot_power_failure ...[2024-07-15 21:21:16.858190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:47.788 [2024-07-15 21:21:16.870344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:47.788 [2024-07-15 21:21:16.870499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:47.788 [2024-07-15 21:21:16.870550] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:16.882450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:47.788 [2024-07-15 21:21:16.882633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:47.788 [2024-07-15 21:21:16.882684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:47.788 [2024-07-15 21:21:16.882761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:16.894711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:47.788 [2024-07-15 21:21:16.894916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:16.906894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:47.788 [2024-07-15 21:21:16.907082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:16.919427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:47.788 [2024-07-15 21:21:16.919609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 passed 00:07:47.788 Test: blob_create_snapshot_power_failure ...[2024-07-15 21:21:16.955846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:47.788 [2024-07-15 21:21:16.979294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:47.788 [2024-07-15 21:21:16.991605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:47.788 passed 00:07:47.788 Test: blob_io_unit ...passed 00:07:47.788 Test: blob_io_unit_compatibility ...passed 00:07:47.788 Test: blob_ext_md_pages ...passed 00:07:47.788 Test: blob_esnap_io_4096_4096 ...passed 00:07:47.788 Test: blob_esnap_io_512_512 ...passed 00:07:47.788 Test: blob_esnap_io_4096_512 ...passed 00:07:47.788 Test: blob_esnap_io_512_4096 ...passed 00:07:47.788 Test: blob_esnap_clone_resize ...passed 00:07:47.788 Suite: blob_bs_nocopy_noextent 00:07:47.788 Test: blob_open ...passed 00:07:47.788 Test: blob_create ...[2024-07-15 21:21:17.266672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:47.788 passed 00:07:47.788 Test: blob_create_loop ...passed 00:07:47.788 Test: blob_create_fail ...[2024-07-15 21:21:17.360069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:47.788 passed 00:07:47.788 Test: blob_create_internal ...passed 00:07:47.788 Test: blob_create_zero_extent ...passed 00:07:47.788 Test: blob_snapshot ...passed 00:07:47.788 Test: blob_clone ...passed 00:07:47.788 Test: blob_inflate 
...[2024-07-15 21:21:17.538215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:47.788 passed 00:07:47.788 Test: blob_delete ...passed 00:07:47.788 Test: blob_resize_test ...[2024-07-15 21:21:17.603675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:47.788 passed 00:07:47.788 Test: blob_resize_thin_test ...passed 00:07:47.788 Test: channel_ops ...passed 00:07:47.788 Test: blob_super ...passed 00:07:47.788 Test: blob_rw_verify_iov ...passed 00:07:47.788 Test: blob_unmap ...passed 00:07:47.788 Test: blob_iter ...passed 00:07:47.788 Test: blob_parse_md ...passed 00:07:47.788 Test: bs_load_pending_removal ...passed 00:07:47.788 Test: bs_unload ...[2024-07-15 21:21:17.898274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:47.788 passed 00:07:47.788 Test: bs_usable_clusters ...passed 00:07:47.788 Test: blob_crc ...[2024-07-15 21:21:17.960572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:47.788 [2024-07-15 21:21:17.960753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:47.788 passed 00:07:47.788 Test: blob_flags ...passed 00:07:47.788 Test: bs_version ...passed 00:07:47.788 Test: blob_set_xattrs_test ...[2024-07-15 21:21:18.055270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:47.788 [2024-07-15 21:21:18.055420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:47.788 passed 00:07:47.788 Test: blob_thin_prov_alloc ...passed 00:07:47.788 Test: blob_insert_cluster_msg_test ...passed 00:07:47.788 Test: blob_thin_prov_rw ...passed 00:07:47.788 Test: blob_thin_prov_rle ...passed 00:07:47.788 Test: blob_thin_prov_rw_iov ...passed 00:07:47.788 Test: blob_snapshot_rw ...passed 00:07:47.788 Test: blob_snapshot_rw_iov ...passed 00:07:47.788 Test: blob_inflate_rw ...passed 00:07:47.788 Test: blob_snapshot_freeze_io ...passed 00:07:47.788 Test: blob_operation_split_rw ...passed 00:07:47.788 Test: blob_operation_split_rw_iov ...passed 00:07:47.788 Test: blob_simultaneous_operations ...[2024-07-15 21:21:18.913141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.788 [2024-07-15 21:21:18.913311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:18.914248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.788 [2024-07-15 21:21:18.914355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:18.923604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.788 [2024-07-15 21:21:18.923706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 [2024-07-15 21:21:18.923817] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.788 [2024-07-15 21:21:18.923886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.788 passed 00:07:47.788 Test: blob_persist_test ...passed 00:07:47.788 Test: blob_decouple_snapshot ...passed 00:07:47.788 Test: blob_seek_io_unit ...passed 00:07:47.788 Test: blob_nested_freezes ...passed 00:07:47.788 Test: blob_clone_resize ...passed 00:07:47.788 Test: blob_shallow_copy ...[2024-07-15 21:21:19.179627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:47.788 [2024-07-15 21:21:19.179958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:47.788 [2024-07-15 21:21:19.180204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:47.788 passed 00:07:47.788 Suite: blob_blob_nocopy_noextent 00:07:47.788 Test: blob_write ...passed 00:07:47.788 Test: blob_read ...passed 00:07:47.788 Test: blob_rw_verify ...passed 00:07:47.788 Test: blob_rw_verify_iov_nomem ...passed 00:07:47.788 Test: blob_rw_iov_read_only ...passed 00:07:47.788 Test: blob_xattr ...passed 00:07:47.788 Test: blob_dirty_shutdown ...passed 00:07:47.788 Test: blob_is_degraded ...passed 00:07:47.788 Suite: blob_esnap_bs_nocopy_noextent 00:07:47.789 Test: blob_esnap_create ...passed 00:07:47.789 Test: blob_esnap_thread_add_remove ...passed 00:07:47.789 Test: blob_esnap_clone_snapshot ...passed 00:07:47.789 Test: blob_esnap_clone_inflate ...passed 00:07:47.789 Test: blob_esnap_clone_decouple ...passed 00:07:47.789 Test: blob_esnap_clone_reload ...passed 00:07:47.789 Test: blob_esnap_hotplug ...passed 00:07:47.789 Test: blob_set_parent ...[2024-07-15 21:21:19.696567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:47.789 [2024-07-15 21:21:19.696757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:47.789 [2024-07-15 21:21:19.696938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:47.789 [2024-07-15 21:21:19.697007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:47.789 [2024-07-15 21:21:19.697502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:47.789 passed 00:07:47.789 Test: blob_set_external_parent ...[2024-07-15 21:21:19.729753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:47.789 [2024-07-15 21:21:19.729935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:47.789 [2024-07-15 21:21:19.730005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:07:47.789 [2024-07-15 21:21:19.730439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:47.789 passed 00:07:47.789 Suite: blob_nocopy_extent 00:07:47.789 Test: blob_init ...[2024-07-15 21:21:19.741571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:47.789 passed 00:07:47.789 Test: blob_thin_provision ...passed 00:07:47.789 Test: blob_read_only ...passed 00:07:47.789 Test: bs_load ...[2024-07-15 21:21:19.785874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:47.789 passed 00:07:47.789 Test: bs_load_custom_cluster_size ...passed 00:07:47.789 Test: bs_load_after_failed_grow ...passed 00:07:47.789 Test: bs_cluster_sz ...[2024-07-15 21:21:19.810351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:47.789 [2024-07-15 21:21:19.810599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:47.789 [2024-07-15 21:21:19.810675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:47.789 passed 00:07:47.789 Test: bs_resize_md ...passed 00:07:47.789 Test: bs_destroy ...passed 00:07:47.789 Test: bs_type ...passed 00:07:47.789 Test: bs_super_block ...passed 00:07:47.789 Test: bs_test_recover_cluster_count ...passed 00:07:47.789 Test: bs_grow_live ...passed 00:07:47.789 Test: bs_grow_live_no_space ...passed 00:07:47.789 Test: bs_test_grow ...passed 00:07:47.789 Test: blob_serialize_test ...passed 00:07:47.789 Test: super_block_crc ...passed 00:07:47.789 Test: blob_thin_prov_write_count_io ...passed 00:07:47.789 Test: blob_thin_prov_unmap_cluster ...passed 00:07:47.789 Test: bs_load_iter_test ...passed 00:07:47.789 Test: blob_relations ...[2024-07-15 21:21:19.979960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.789 [2024-07-15 21:21:19.980132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 21:21:19.980978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.789 [2024-07-15 21:21:19.981058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 passed 00:07:47.789 Test: blob_relations2 ...[2024-07-15 21:21:19.994193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.789 [2024-07-15 21:21:19.994331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 21:21:19.994372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.789 [2024-07-15 21:21:19.994409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 
21:21:19.995633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.789 [2024-07-15 21:21:19.995743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 21:21:19.996136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:47.789 [2024-07-15 21:21:19.996223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 passed 00:07:47.789 Test: blob_relations3 ...passed 00:07:47.789 Test: blobstore_clean_power_failure ...passed 00:07:47.789 Test: blob_delete_snapshot_power_failure ...[2024-07-15 21:21:20.144553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:47.789 [2024-07-15 21:21:20.156290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:47.789 [2024-07-15 21:21:20.168127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:47.789 [2024-07-15 21:21:20.168305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:47.789 [2024-07-15 21:21:20.168353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 21:21:20.180027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:47.789 [2024-07-15 21:21:20.180211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:47.789 [2024-07-15 21:21:20.180251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:47.789 [2024-07-15 21:21:20.180296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 21:21:20.191965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:47.789 [2024-07-15 21:21:20.192117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:47.789 [2024-07-15 21:21:20.192185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:47.789 [2024-07-15 21:21:20.192235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 21:21:20.203949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:47.789 [2024-07-15 21:21:20.204085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 21:21:20.215997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:47.789 [2024-07-15 21:21:20.216192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 [2024-07-15 21:21:20.228340] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:47.789 [2024-07-15 21:21:20.228493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.789 passed 00:07:47.789 Test: blob_create_snapshot_power_failure ...[2024-07-15 21:21:20.263939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:47.789 [2024-07-15 21:21:20.275380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:47.789 [2024-07-15 21:21:20.297971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:47.789 [2024-07-15 21:21:20.309780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:47.789 passed 00:07:47.789 Test: blob_io_unit ...passed 00:07:47.789 Test: blob_io_unit_compatibility ...passed 00:07:47.789 Test: blob_ext_md_pages ...passed 00:07:47.789 Test: blob_esnap_io_4096_4096 ...passed 00:07:47.789 Test: blob_esnap_io_512_512 ...passed 00:07:47.789 Test: blob_esnap_io_4096_512 ...passed 00:07:47.789 Test: blob_esnap_io_512_4096 ...passed 00:07:47.789 Test: blob_esnap_clone_resize ...passed 00:07:47.789 Suite: blob_bs_nocopy_extent 00:07:47.789 Test: blob_open ...passed 00:07:47.789 Test: blob_create ...[2024-07-15 21:21:20.569673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:47.789 passed 00:07:47.789 Test: blob_create_loop ...passed 00:07:47.789 Test: blob_create_fail ...[2024-07-15 21:21:20.665737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:47.789 passed 00:07:47.789 Test: blob_create_internal ...passed 00:07:47.789 Test: blob_create_zero_extent ...passed 00:07:47.789 Test: blob_snapshot ...passed 00:07:47.789 Test: blob_clone ...passed 00:07:47.789 Test: blob_inflate ...[2024-07-15 21:21:20.837462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:07:47.789 passed 00:07:47.789 Test: blob_delete ...passed 00:07:47.789 Test: blob_resize_test ...[2024-07-15 21:21:20.901852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:47.789 passed 00:07:47.789 Test: blob_resize_thin_test ...passed 00:07:47.789 Test: channel_ops ...passed 00:07:47.789 Test: blob_super ...passed 00:07:47.789 Test: blob_rw_verify_iov ...passed 00:07:47.789 Test: blob_unmap ...passed 00:07:47.789 Test: blob_iter ...passed 00:07:47.789 Test: blob_parse_md ...passed 00:07:48.048 Test: bs_load_pending_removal ...passed 00:07:48.048 Test: bs_unload ...[2024-07-15 21:21:21.191762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:48.048 passed 00:07:48.048 Test: bs_usable_clusters ...passed 00:07:48.048 Test: blob_crc ...[2024-07-15 21:21:21.255900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:48.048 [2024-07-15 21:21:21.256074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:48.048 passed 00:07:48.048 Test: blob_flags ...passed 00:07:48.048 Test: bs_version ...passed 00:07:48.048 Test: blob_set_xattrs_test ...[2024-07-15 21:21:21.353416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:48.048 [2024-07-15 21:21:21.353597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:48.048 passed 00:07:48.307 Test: blob_thin_prov_alloc ...passed 00:07:48.307 Test: blob_insert_cluster_msg_test ...passed 00:07:48.307 Test: blob_thin_prov_rw ...passed 00:07:48.307 Test: blob_thin_prov_rle ...passed 00:07:48.307 Test: blob_thin_prov_rw_iov ...passed 00:07:48.307 Test: blob_snapshot_rw ...passed 00:07:48.566 Test: blob_snapshot_rw_iov ...passed 00:07:48.566 Test: blob_inflate_rw ...passed 00:07:48.825 Test: blob_snapshot_freeze_io ...passed 00:07:48.825 Test: blob_operation_split_rw ...passed 00:07:49.084 Test: blob_operation_split_rw_iov ...passed 00:07:49.084 Test: blob_simultaneous_operations ...[2024-07-15 21:21:22.242980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:49.084 [2024-07-15 21:21:22.243070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.084 [2024-07-15 21:21:22.243918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:49.084 [2024-07-15 21:21:22.243971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.084 [2024-07-15 21:21:22.253086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:49.084 [2024-07-15 21:21:22.253137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.084 [2024-07-15 21:21:22.253218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:49.084 [2024-07-15 21:21:22.253231] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.084 passed 00:07:49.084 Test: blob_persist_test ...passed 00:07:49.084 Test: blob_decouple_snapshot ...passed 00:07:49.084 Test: blob_seek_io_unit ...passed 00:07:49.084 Test: blob_nested_freezes ...passed 00:07:49.343 Test: blob_clone_resize ...passed 00:07:49.343 Test: blob_shallow_copy ...[2024-07-15 21:21:22.504785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:49.343 [2024-07-15 21:21:22.505056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:49.343 [2024-07-15 21:21:22.505236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:49.343 passed 00:07:49.343 Suite: blob_blob_nocopy_extent 00:07:49.343 Test: blob_write ...passed 00:07:49.343 Test: blob_read ...passed 00:07:49.343 Test: blob_rw_verify ...passed 00:07:49.343 Test: blob_rw_verify_iov_nomem ...passed 00:07:49.343 Test: blob_rw_iov_read_only ...passed 00:07:49.601 Test: blob_xattr ...passed 00:07:49.601 Test: blob_dirty_shutdown ...passed 00:07:49.601 Test: blob_is_degraded ...passed 00:07:49.601 Suite: blob_esnap_bs_nocopy_extent 00:07:49.601 Test: blob_esnap_create ...passed 00:07:49.601 Test: blob_esnap_thread_add_remove ...passed 00:07:49.601 Test: blob_esnap_clone_snapshot ...passed 00:07:49.601 Test: blob_esnap_clone_inflate ...passed 00:07:49.601 Test: blob_esnap_clone_decouple ...passed 00:07:49.859 Test: blob_esnap_clone_reload ...passed 00:07:49.859 Test: blob_esnap_hotplug ...passed 00:07:49.859 Test: blob_set_parent ...[2024-07-15 21:21:23.019406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:49.859 [2024-07-15 21:21:23.019493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:49.859 [2024-07-15 21:21:23.019597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:49.859 [2024-07-15 21:21:23.019621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:49.859 [2024-07-15 21:21:23.020042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:49.859 passed 00:07:49.859 Test: blob_set_external_parent ...[2024-07-15 21:21:23.051753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:49.859 [2024-07-15 21:21:23.051857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:49.859 [2024-07-15 21:21:23.051877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:49.859 [2024-07-15 21:21:23.052194] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:49.859 passed 00:07:49.859 Suite: blob_copy_noextent 00:07:49.859 Test: blob_init ...[2024-07-15 21:21:23.063677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:49.859 passed 00:07:49.859 Test: blob_thin_provision ...passed 00:07:49.859 Test: blob_read_only ...passed 00:07:49.859 Test: bs_load ...[2024-07-15 21:21:23.109102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:49.859 passed 00:07:49.859 Test: bs_load_custom_cluster_size ...passed 00:07:49.859 Test: bs_load_after_failed_grow ...passed 00:07:49.859 Test: bs_cluster_sz ...[2024-07-15 21:21:23.132131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:49.859 [2024-07-15 21:21:23.132331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:49.859 [2024-07-15 21:21:23.132362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:49.859 passed 00:07:49.859 Test: bs_resize_md ...passed 00:07:49.859 Test: bs_destroy ...passed 00:07:49.859 Test: bs_type ...passed 00:07:49.859 Test: bs_super_block ...passed 00:07:49.859 Test: bs_test_recover_cluster_count ...passed 00:07:49.859 Test: bs_grow_live ...passed 00:07:49.859 Test: bs_grow_live_no_space ...passed 00:07:49.859 Test: bs_test_grow ...passed 00:07:50.116 Test: blob_serialize_test ...passed 00:07:50.116 Test: super_block_crc ...passed 00:07:50.116 Test: blob_thin_prov_write_count_io ...passed 00:07:50.116 Test: blob_thin_prov_unmap_cluster ...passed 00:07:50.116 Test: bs_load_iter_test ...passed 00:07:50.116 Test: blob_relations ...[2024-07-15 21:21:23.339632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.116 [2024-07-15 21:21:23.339763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.116 [2024-07-15 21:21:23.340382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.116 [2024-07-15 21:21:23.340437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.116 passed 00:07:50.116 Test: blob_relations2 ...[2024-07-15 21:21:23.355841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.116 [2024-07-15 21:21:23.355968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.116 [2024-07-15 21:21:23.356010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.116 [2024-07-15 21:21:23.356028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.116 [2024-07-15 21:21:23.357157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:07:50.116 [2024-07-15 21:21:23.357227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.116 [2024-07-15 21:21:23.357617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.116 [2024-07-15 21:21:23.357676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.116 passed 00:07:50.116 Test: blob_relations3 ...passed 00:07:50.374 Test: blobstore_clean_power_failure ...passed 00:07:50.374 Test: blob_delete_snapshot_power_failure ...[2024-07-15 21:21:23.513045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.374 [2024-07-15 21:21:23.524470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.374 [2024-07-15 21:21:23.524552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.374 [2024-07-15 21:21:23.524571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.374 [2024-07-15 21:21:23.535795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.374 [2024-07-15 21:21:23.535870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:50.374 [2024-07-15 21:21:23.535887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.374 [2024-07-15 21:21:23.535914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.374 [2024-07-15 21:21:23.547478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:50.374 [2024-07-15 21:21:23.547589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.374 [2024-07-15 21:21:23.559186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:50.374 [2024-07-15 21:21:23.559315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.374 [2024-07-15 21:21:23.570737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:50.374 [2024-07-15 21:21:23.570823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.374 passed 00:07:50.374 Test: blob_create_snapshot_power_failure ...[2024-07-15 21:21:23.604710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.374 [2024-07-15 21:21:23.627573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.374 [2024-07-15 21:21:23.639263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:50.374 passed 
00:07:50.374 Test: blob_io_unit ...passed 00:07:50.374 Test: blob_io_unit_compatibility ...passed 00:07:50.374 Test: blob_ext_md_pages ...passed 00:07:50.632 Test: blob_esnap_io_4096_4096 ...passed 00:07:50.632 Test: blob_esnap_io_512_512 ...passed 00:07:50.632 Test: blob_esnap_io_4096_512 ...passed 00:07:50.632 Test: blob_esnap_io_512_4096 ...passed 00:07:50.632 Test: blob_esnap_clone_resize ...passed 00:07:50.633 Suite: blob_bs_copy_noextent 00:07:50.633 Test: blob_open ...passed 00:07:50.633 Test: blob_create ...[2024-07-15 21:21:23.899728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:50.633 passed 00:07:50.633 Test: blob_create_loop ...passed 00:07:50.633 Test: blob_create_fail ...[2024-07-15 21:21:23.985523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.633 passed 00:07:50.890 Test: blob_create_internal ...passed 00:07:50.890 Test: blob_create_zero_extent ...passed 00:07:50.890 Test: blob_snapshot ...passed 00:07:50.890 Test: blob_clone ...passed 00:07:50.890 Test: blob_inflate ...[2024-07-15 21:21:24.147558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:50.890 passed 00:07:50.890 Test: blob_delete ...passed 00:07:50.890 Test: blob_resize_test ...[2024-07-15 21:21:24.210229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:50.890 passed 00:07:51.161 Test: blob_resize_thin_test ...passed 00:07:51.161 Test: channel_ops ...passed 00:07:51.161 Test: blob_super ...passed 00:07:51.161 Test: blob_rw_verify_iov ...passed 00:07:51.161 Test: blob_unmap ...passed 00:07:51.161 Test: blob_iter ...passed 00:07:51.161 Test: blob_parse_md ...passed 00:07:51.161 Test: bs_load_pending_removal ...passed 00:07:51.161 Test: bs_unload ...[2024-07-15 21:21:24.494823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:51.161 passed 00:07:51.419 Test: bs_usable_clusters ...passed 00:07:51.419 Test: blob_crc ...[2024-07-15 21:21:24.557738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:51.419 [2024-07-15 21:21:24.557830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:51.419 passed 00:07:51.419 Test: blob_flags ...passed 00:07:51.419 Test: bs_version ...passed 00:07:51.419 Test: blob_set_xattrs_test ...[2024-07-15 21:21:24.652055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:51.419 [2024-07-15 21:21:24.652181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:51.419 passed 00:07:51.677 Test: blob_thin_prov_alloc ...passed 00:07:51.677 Test: blob_insert_cluster_msg_test ...passed 00:07:51.677 Test: blob_thin_prov_rw ...passed 00:07:51.677 Test: blob_thin_prov_rle ...passed 00:07:51.677 Test: blob_thin_prov_rw_iov ...passed 00:07:51.677 Test: blob_snapshot_rw ...passed 00:07:51.677 Test: blob_snapshot_rw_iov ...passed 00:07:51.935 Test: 
blob_inflate_rw ...passed 00:07:51.935 Test: blob_snapshot_freeze_io ...passed 00:07:52.193 Test: blob_operation_split_rw ...passed 00:07:52.193 Test: blob_operation_split_rw_iov ...passed 00:07:52.193 Test: blob_simultaneous_operations ...[2024-07-15 21:21:25.483031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:52.193 [2024-07-15 21:21:25.483114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.193 [2024-07-15 21:21:25.483523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:52.193 [2024-07-15 21:21:25.483589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.193 [2024-07-15 21:21:25.486158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:52.193 [2024-07-15 21:21:25.486223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.193 [2024-07-15 21:21:25.486308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:52.193 [2024-07-15 21:21:25.486321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.193 passed 00:07:52.193 Test: blob_persist_test ...passed 00:07:52.451 Test: blob_decouple_snapshot ...passed 00:07:52.451 Test: blob_seek_io_unit ...passed 00:07:52.451 Test: blob_nested_freezes ...passed 00:07:52.451 Test: blob_clone_resize ...passed 00:07:52.451 Test: blob_shallow_copy ...[2024-07-15 21:21:25.711913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:52.451 [2024-07-15 21:21:25.712172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:52.451 [2024-07-15 21:21:25.712322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:52.451 passed 00:07:52.451 Suite: blob_blob_copy_noextent 00:07:52.451 Test: blob_write ...passed 00:07:52.451 Test: blob_read ...passed 00:07:52.708 Test: blob_rw_verify ...passed 00:07:52.708 Test: blob_rw_verify_iov_nomem ...passed 00:07:52.708 Test: blob_rw_iov_read_only ...passed 00:07:52.708 Test: blob_xattr ...passed 00:07:52.708 Test: blob_dirty_shutdown ...passed 00:07:52.708 Test: blob_is_degraded ...passed 00:07:52.708 Suite: blob_esnap_bs_copy_noextent 00:07:52.708 Test: blob_esnap_create ...passed 00:07:52.708 Test: blob_esnap_thread_add_remove ...passed 00:07:52.708 Test: blob_esnap_clone_snapshot ...passed 00:07:52.967 Test: blob_esnap_clone_inflate ...passed 00:07:52.967 Test: blob_esnap_clone_decouple ...passed 00:07:52.967 Test: blob_esnap_clone_reload ...passed 00:07:52.967 Test: blob_esnap_hotplug ...passed 00:07:52.967 Test: blob_set_parent ...[2024-07-15 21:21:26.208927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:52.967 [2024-07-15 21:21:26.209020] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:52.967 [2024-07-15 21:21:26.209113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:52.967 [2024-07-15 21:21:26.209145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:52.967 [2024-07-15 21:21:26.209469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:52.967 passed 00:07:52.967 Test: blob_set_external_parent ...[2024-07-15 21:21:26.240080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:52.967 [2024-07-15 21:21:26.240145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:52.967 [2024-07-15 21:21:26.240163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:52.967 [2024-07-15 21:21:26.240466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:52.967 passed 00:07:52.967 Suite: blob_copy_extent 00:07:52.967 Test: blob_init ...[2024-07-15 21:21:26.251217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:52.967 passed 00:07:52.967 Test: blob_thin_provision ...passed 00:07:52.967 Test: blob_read_only ...passed 00:07:52.967 Test: bs_load ...[2024-07-15 21:21:26.293970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:52.967 passed 00:07:52.967 Test: bs_load_custom_cluster_size ...passed 00:07:52.967 Test: bs_load_after_failed_grow ...passed 00:07:52.967 Test: bs_cluster_sz ...[2024-07-15 21:21:26.316573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:52.967 [2024-07-15 21:21:26.316733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:52.967 [2024-07-15 21:21:26.316763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:52.967 passed 00:07:53.226 Test: bs_resize_md ...passed 00:07:53.226 Test: bs_destroy ...passed 00:07:53.226 Test: bs_type ...passed 00:07:53.226 Test: bs_super_block ...passed 00:07:53.226 Test: bs_test_recover_cluster_count ...passed 00:07:53.226 Test: bs_grow_live ...passed 00:07:53.226 Test: bs_grow_live_no_space ...passed 00:07:53.226 Test: bs_test_grow ...passed 00:07:53.226 Test: blob_serialize_test ...passed 00:07:53.226 Test: super_block_crc ...passed 00:07:53.226 Test: blob_thin_prov_write_count_io ...passed 00:07:53.226 Test: blob_thin_prov_unmap_cluster ...passed 00:07:53.226 Test: bs_load_iter_test ...passed 00:07:53.226 Test: blob_relations ...[2024-07-15 21:21:26.475596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.226 [2024-07-15 21:21:26.475713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.226 [2024-07-15 21:21:26.476221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.226 [2024-07-15 21:21:26.476252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.226 passed 00:07:53.226 Test: blob_relations2 ...[2024-07-15 21:21:26.488370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.226 [2024-07-15 21:21:26.488440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.226 [2024-07-15 21:21:26.488465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.226 [2024-07-15 21:21:26.488477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.226 [2024-07-15 21:21:26.489260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.226 [2024-07-15 21:21:26.489307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.226 [2024-07-15 21:21:26.489552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.226 [2024-07-15 21:21:26.489589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.226 passed 00:07:53.226 Test: blob_relations3 ...passed 00:07:53.486 Test: blobstore_clean_power_failure ...passed 00:07:53.486 Test: blob_delete_snapshot_power_failure ...[2024-07-15 21:21:26.631295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:53.486 [2024-07-15 21:21:26.642380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.486 [2024-07-15 21:21:26.653440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.486 [2024-07-15 21:21:26.653503] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.486 [2024-07-15 21:21:26.653522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.486 [2024-07-15 21:21:26.664720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:53.486 [2024-07-15 21:21:26.664787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:53.486 [2024-07-15 21:21:26.664803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.486 [2024-07-15 21:21:26.664821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.487 [2024-07-15 21:21:26.675951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.487 [2024-07-15 21:21:26.677875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:53.487 [2024-07-15 21:21:26.677918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.487 [2024-07-15 21:21:26.677943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.487 [2024-07-15 21:21:26.689260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:53.487 [2024-07-15 21:21:26.689355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.487 [2024-07-15 21:21:26.700698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:53.487 [2024-07-15 21:21:26.700797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.487 [2024-07-15 21:21:26.712188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:53.487 [2024-07-15 21:21:26.712267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.487 passed 00:07:53.487 Test: blob_create_snapshot_power_failure ...[2024-07-15 21:21:26.745324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.487 [2024-07-15 21:21:26.756114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.487 [2024-07-15 21:21:26.777366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:53.487 [2024-07-15 21:21:26.788299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:53.487 passed 00:07:53.487 Test: blob_io_unit ...passed 00:07:53.487 Test: blob_io_unit_compatibility ...passed 00:07:53.746 Test: blob_ext_md_pages ...passed 00:07:53.746 Test: blob_esnap_io_4096_4096 ...passed 00:07:53.746 Test: blob_esnap_io_512_512 ...passed 00:07:53.746 Test: blob_esnap_io_4096_512 ...passed 00:07:53.746 Test: 
blob_esnap_io_512_4096 ...passed 00:07:53.746 Test: blob_esnap_clone_resize ...passed 00:07:53.746 Suite: blob_bs_copy_extent 00:07:53.746 Test: blob_open ...passed 00:07:53.746 Test: blob_create ...[2024-07-15 21:21:27.033965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:53.746 passed 00:07:53.746 Test: blob_create_loop ...passed 00:07:54.005 Test: blob_create_fail ...[2024-07-15 21:21:27.121490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:54.005 passed 00:07:54.005 Test: blob_create_internal ...passed 00:07:54.005 Test: blob_create_zero_extent ...passed 00:07:54.005 Test: blob_snapshot ...passed 00:07:54.005 Test: blob_clone ...passed 00:07:54.005 Test: blob_inflate ...[2024-07-15 21:21:27.280837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:54.005 passed 00:07:54.005 Test: blob_delete ...passed 00:07:54.005 Test: blob_resize_test ...[2024-07-15 21:21:27.341518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:54.005 passed 00:07:54.264 Test: blob_resize_thin_test ...passed 00:07:54.264 Test: channel_ops ...passed 00:07:54.264 Test: blob_super ...passed 00:07:54.264 Test: blob_rw_verify_iov ...passed 00:07:54.264 Test: blob_unmap ...passed 00:07:54.264 Test: blob_iter ...passed 00:07:54.264 Test: blob_parse_md ...passed 00:07:54.264 Test: bs_load_pending_removal ...passed 00:07:54.264 Test: bs_unload ...[2024-07-15 21:21:27.617603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:54.264 passed 00:07:54.522 Test: bs_usable_clusters ...passed 00:07:54.522 Test: blob_crc ...[2024-07-15 21:21:27.678162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:54.522 [2024-07-15 21:21:27.678278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:54.522 passed 00:07:54.522 Test: blob_flags ...passed 00:07:54.522 Test: bs_version ...passed 00:07:54.522 Test: blob_set_xattrs_test ...[2024-07-15 21:21:27.769435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:54.522 [2024-07-15 21:21:27.769521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:54.522 passed 00:07:54.780 Test: blob_thin_prov_alloc ...passed 00:07:54.780 Test: blob_insert_cluster_msg_test ...passed 00:07:54.780 Test: blob_thin_prov_rw ...passed 00:07:54.780 Test: blob_thin_prov_rle ...passed 00:07:54.780 Test: blob_thin_prov_rw_iov ...passed 00:07:54.780 Test: blob_snapshot_rw ...passed 00:07:54.780 Test: blob_snapshot_rw_iov ...passed 00:07:55.039 Test: blob_inflate_rw ...passed 00:07:55.039 Test: blob_snapshot_freeze_io ...passed 00:07:55.297 Test: blob_operation_split_rw ...passed 00:07:55.297 Test: blob_operation_split_rw_iov ...passed 00:07:55.297 Test: blob_simultaneous_operations ...[2024-07-15 21:21:28.564186] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.297 [2024-07-15 21:21:28.564273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.297 [2024-07-15 21:21:28.564720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.297 [2024-07-15 21:21:28.564769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.297 [2024-07-15 21:21:28.567222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.297 [2024-07-15 21:21:28.567281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.297 [2024-07-15 21:21:28.567356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.298 [2024-07-15 21:21:28.567368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.298 passed 00:07:55.298 Test: blob_persist_test ...passed 00:07:55.298 Test: blob_decouple_snapshot ...passed 00:07:55.556 Test: blob_seek_io_unit ...passed 00:07:55.556 Test: blob_nested_freezes ...passed 00:07:55.556 Test: blob_clone_resize ...passed 00:07:55.556 Test: blob_shallow_copy ...[2024-07-15 21:21:28.784002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:55.556 [2024-07-15 21:21:28.784272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:55.556 [2024-07-15 21:21:28.784435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:55.556 passed 00:07:55.556 Suite: blob_blob_copy_extent 00:07:55.556 Test: blob_write ...passed 00:07:55.556 Test: blob_read ...passed 00:07:55.556 Test: blob_rw_verify ...passed 00:07:55.814 Test: blob_rw_verify_iov_nomem ...passed 00:07:55.814 Test: blob_rw_iov_read_only ...passed 00:07:55.814 Test: blob_xattr ...passed 00:07:55.814 Test: blob_dirty_shutdown ...passed 00:07:55.814 Test: blob_is_degraded ...passed 00:07:55.814 Suite: blob_esnap_bs_copy_extent 00:07:55.814 Test: blob_esnap_create ...passed 00:07:55.814 Test: blob_esnap_thread_add_remove ...passed 00:07:55.814 Test: blob_esnap_clone_snapshot ...passed 00:07:56.088 Test: blob_esnap_clone_inflate ...passed 00:07:56.088 Test: blob_esnap_clone_decouple ...passed 00:07:56.088 Test: blob_esnap_clone_reload ...passed 00:07:56.088 Test: blob_esnap_hotplug ...passed 00:07:56.088 Test: blob_set_parent ...[2024-07-15 21:21:29.329020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:56.088 [2024-07-15 21:21:29.329105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:56.088 [2024-07-15 21:21:29.329218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:56.088 
[2024-07-15 21:21:29.329263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:56.088 [2024-07-15 21:21:29.329734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:56.088 passed 00:07:56.088 Test: blob_set_external_parent ...[2024-07-15 21:21:29.362888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:56.088 [2024-07-15 21:21:29.363003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:56.088 [2024-07-15 21:21:29.363031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:56.088 [2024-07-15 21:21:29.363458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:56.088 passed 00:07:56.088 00:07:56.088 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.088 suites 16 16 n/a 0 0 00:07:56.088 tests 376 376 376 0 0 00:07:56.088 asserts 143973 143973 143973 0 n/a 00:07:56.088 00:07:56.088 Elapsed time = 12.970 seconds 00:07:56.088 21:21:29 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:56.346 00:07:56.346 00:07:56.346 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.346 http://cunit.sourceforge.net/ 00:07:56.346 00:07:56.346 00:07:56.346 Suite: blob_bdev 00:07:56.346 Test: create_bs_dev ...passed 00:07:56.346 Test: create_bs_dev_ro ...[2024-07-15 21:21:29.470005] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:56.346 passed 00:07:56.346 Test: create_bs_dev_rw ...passed 00:07:56.346 Test: claim_bs_dev ...[2024-07-15 21:21:29.470572] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:56.346 passed 00:07:56.346 Test: claim_bs_dev_ro ...passed 00:07:56.346 Test: deferred_destroy_refs ...passed 00:07:56.346 Test: deferred_destroy_channels ...passed 00:07:56.346 Test: deferred_destroy_threads ...passed 00:07:56.346 00:07:56.346 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.346 suites 1 1 n/a 0 0 00:07:56.346 tests 8 8 8 0 0 00:07:56.346 asserts 119 119 119 0 n/a 00:07:56.346 00:07:56.346 Elapsed time = 0.001 seconds 00:07:56.346 21:21:29 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:56.346 00:07:56.346 00:07:56.346 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.346 http://cunit.sourceforge.net/ 00:07:56.346 00:07:56.346 00:07:56.346 Suite: tree 00:07:56.346 Test: blobfs_tree_op_test ...passed 00:07:56.346 00:07:56.346 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.346 suites 1 1 n/a 0 0 00:07:56.346 tests 1 1 1 0 0 00:07:56.346 asserts 27 27 27 0 n/a 00:07:56.346 00:07:56.346 Elapsed time = 0.000 seconds 00:07:56.346 21:21:29 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:56.346 00:07:56.346 00:07:56.346 
CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.346 http://cunit.sourceforge.net/ 00:07:56.346 00:07:56.346 00:07:56.346 Suite: blobfs_async_ut 00:07:56.346 Test: fs_init ...passed 00:07:56.346 Test: fs_open ...passed 00:07:56.346 Test: fs_create ...passed 00:07:56.346 Test: fs_truncate ...passed 00:07:56.346 Test: fs_rename ...[2024-07-15 21:21:29.638647] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:56.346 passed 00:07:56.346 Test: fs_rw_async ...passed 00:07:56.346 Test: fs_writev_readv_async ...passed 00:07:56.346 Test: tree_find_buffer_ut ...passed 00:07:56.346 Test: channel_ops ...passed 00:07:56.346 Test: channel_ops_sync ...passed 00:07:56.346 00:07:56.346 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.346 suites 1 1 n/a 0 0 00:07:56.346 tests 10 10 10 0 0 00:07:56.346 asserts 292 292 292 0 n/a 00:07:56.346 00:07:56.346 Elapsed time = 0.159 seconds 00:07:56.605 21:21:29 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:56.605 00:07:56.605 00:07:56.605 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.605 http://cunit.sourceforge.net/ 00:07:56.605 00:07:56.605 00:07:56.605 Suite: blobfs_sync_ut 00:07:56.605 Test: cache_read_after_write ...[2024-07-15 21:21:29.815318] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:56.605 passed 00:07:56.605 Test: file_length ...passed 00:07:56.605 Test: append_write_to_extend_blob ...passed 00:07:56.605 Test: partial_buffer ...passed 00:07:56.605 Test: cache_write_null_buffer ...passed 00:07:56.605 Test: fs_create_sync ...passed 00:07:56.605 Test: fs_rename_sync ...passed 00:07:56.605 Test: cache_append_no_cache ...passed 00:07:56.605 Test: fs_delete_file_without_close ...passed 00:07:56.605 00:07:56.605 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.605 suites 1 1 n/a 0 0 00:07:56.605 tests 9 9 9 0 0 00:07:56.605 asserts 345 345 345 0 n/a 00:07:56.605 00:07:56.605 Elapsed time = 0.328 seconds 00:07:56.605 21:21:29 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:56.866 00:07:56.866 00:07:56.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.866 http://cunit.sourceforge.net/ 00:07:56.866 00:07:56.866 00:07:56.866 Suite: blobfs_bdev_ut 00:07:56.866 Test: spdk_blobfs_bdev_detect_test ...[2024-07-15 21:21:29.982018] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:56.866 passed 00:07:56.866 Test: spdk_blobfs_bdev_create_test ...[2024-07-15 21:21:29.982560] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:56.866 passed 00:07:56.866 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:56.866 00:07:56.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.866 suites 1 1 n/a 0 0 00:07:56.866 tests 3 3 3 0 0 00:07:56.866 asserts 9 9 9 0 n/a 00:07:56.866 00:07:56.866 Elapsed time = 0.001 seconds 00:07:56.866 ************************************ 00:07:56.866 END TEST unittest_blob_blobfs 00:07:56.866 ************************************ 00:07:56.866 00:07:56.866 real 0m13.661s 00:07:56.866 user 0m13.186s 
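The blob and blobfs suites above are plain CUnit binaries, so a failure in this stage can usually be reproduced without the Jenkins harness. A minimal sketch, assuming the same built checkout at /home/vagrant/spdk_repo/spdk and reusing only binary paths that already appear in the log; each binary runs its own suite and prints the same CUnit run summary seen here:

  # re-run the blob/blobfs unit-test binaries from this stage by hand
  /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
  /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
  /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
  /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut
  /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut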
00:07:56.866 sys 0m0.632s 00:07:56.866 21:21:30 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.866 21:21:30 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:07:56.866 21:21:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:56.866 21:21:30 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:07:56.866 21:21:30 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.866 21:21:30 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.866 21:21:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:56.866 ************************************ 00:07:56.866 START TEST unittest_event 00:07:56.866 ************************************ 00:07:56.866 21:21:30 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:07:56.866 21:21:30 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:56.866 00:07:56.866 00:07:56.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.866 http://cunit.sourceforge.net/ 00:07:56.866 00:07:56.866 00:07:56.866 Suite: app_suite 00:07:56.866 Test: test_spdk_app_parse_args ...app_ut: invalid option -- 'z' 00:07:56.866 app_ut [options] 00:07:56.866 00:07:56.866 CPU options: 00:07:56.866 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:56.866 (like [0,1,10]) 00:07:56.866 --lcores lcore to CPU mapping list. The list is in the format: 00:07:56.866 [<,lcores[@CPUs]>...] 00:07:56.866 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:56.866 Within the group, '-' is used for range separator, 00:07:56.866 ',' is used for single number separator. 00:07:56.866 '( )' can be omitted for single element group, 00:07:56.866 '@' can be omitted if cpus and lcores have the same value 00:07:56.866 --disable-cpumask-locks Disable CPU core lock files. 00:07:56.866 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:56.866 pollers in the app support interrupt mode) 00:07:56.866 -p, --main-core main (primary) core for DPDK 00:07:56.866 00:07:56.866 Configuration options: 00:07:56.866 -c, --config, --json JSON config file 00:07:56.866 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:56.866 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:56.866 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:56.866 --rpcs-allowed comma-separated list of permitted RPCS 00:07:56.866 --json-ignore-init-errors don't exit on invalid config entry 00:07:56.866 00:07:56.866 Memory options: 00:07:56.866 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:56.866 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:56.866 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:56.866 -R, --huge-unlink unlink huge files after initialization 00:07:56.866 -n, --mem-channels number of memory channels used for DPDK 00:07:56.866 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:56.866 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:56.866 --no-huge run without using hugepages 00:07:56.866 -i, --shm-id shared memory ID (optional) 00:07:56.866 -g, --single-file-segments force creating just one hugetlbfs file 00:07:56.866 00:07:56.866 PCI options: 00:07:56.866 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:56.866 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:56.866 -u, --no-pci disable PCI access 00:07:56.866 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:56.866 00:07:56.866 Log options: 00:07:56.866 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:56.866 --silence-noticelog disable notice level logging to stderr 00:07:56.866 00:07:56.866 Trace options: 00:07:56.866 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:56.866 setting 0 to disable trace (default 32768) 00:07:56.866 Tracepoints vary in size and can use more than one trace entry. 00:07:56.866 -e, --tpoint-group [:] 00:07:56.866 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:56.866 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:56.866 a tracepoint group. First tpoint inside a group can be enabled by 00:07:56.866 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:56.866 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:56.866 in /include/spdk_internal/trace_defs.h 00:07:56.866 00:07:56.866 Other options: 00:07:56.866 -h, --help show this usage 00:07:56.866 -v, --version print SPDK version 00:07:56.866 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:56.866 --env-context Opaque context for use of the env implementation 00:07:56.866 app_ut: unrecognized option '--test-long-opt' 00:07:56.866 app_ut [options] 00:07:56.866 00:07:56.867 CPU options: 00:07:56.867 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:56.867 (like [0,1,10]) 00:07:56.867 --lcores lcore to CPU mapping list. The list is in the format: 00:07:56.867 [<,lcores[@CPUs]>...] 00:07:56.867 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:56.867 Within the group, '-' is used for range separator, 00:07:56.867 ',' is used for single number separator. 00:07:56.867 '( )' can be omitted for single element group, 00:07:56.867 '@' can be omitted if cpus and lcores have the same value 00:07:56.867 --disable-cpumask-locks Disable CPU core lock files. 
00:07:56.867 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:56.867 pollers in the app support interrupt mode) 00:07:56.867 -p, --main-core main (primary) core for DPDK 00:07:56.867 00:07:56.867 Configuration options: 00:07:56.867 -c, --config, --json JSON config file 00:07:56.867 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:56.867 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:56.867 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:56.867 --rpcs-allowed comma-separated list of permitted RPCS 00:07:56.867 --json-ignore-init-errors don't exit on invalid config entry 00:07:56.867 00:07:56.867 Memory options: 00:07:56.867 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:56.867 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:56.867 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:56.867 -R, --huge-unlink unlink huge files after initialization 00:07:56.867 -n, --mem-channels number of memory channels used for DPDK 00:07:56.867 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:56.867 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:56.867 --no-huge run without using hugepages 00:07:56.867 -i, --shm-id shared memory ID (optional) 00:07:56.867 -g, --single-file-segments force creating just one hugetlbfs file 00:07:56.867 00:07:56.867 PCI options: 00:07:56.867 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:56.867 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:56.867 -u, --no-pci disable PCI access 00:07:56.867 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:56.867 00:07:56.867 Log options: 00:07:56.867 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:56.867 --silence-noticelog disable notice level logging to stderr 00:07:56.867 00:07:56.867 Trace options: 00:07:56.867 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:56.867 setting 0 to disable trace (default 32768) 00:07:56.867 Tracepoints vary in size and can use more than one trace entry. 00:07:56.867 -e, --tpoint-group [:] 00:07:56.867 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:56.867 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:56.867 a tracepoint group. First tpoint inside a group can be enabled by 00:07:56.867 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:56.867 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:56.867 in /include/spdk_internal/trace_defs.h 00:07:56.867 00:07:56.867 Other options: 00:07:56.867 -h, --help show this usage 00:07:56.867 -v, --version print SPDK version 00:07:56.867 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:56.867 --env-context Opaque context for use of the env implementation 00:07:56.867 [2024-07-15 21:21:30.096378] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
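Each "app_ut [options] ... CPU options: ..." block in this stretch is one full usage dump, printed because test_spdk_app_parse_args feeds the parser deliberately bad command lines (the invalid short option 'z' and the unrecognized '--test-long-opt' are visible above), so the repeated help text is expected negative-path output, not a hang. A minimal way to reproduce just this suite, assuming the built tree from this run and the app_ut path already shown in the log:

  # the usage dumps are emitted by negative-path cases inside this one binary
  /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut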
00:07:56.867 [2024-07-15 21:21:30.096774] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:56.867 app_ut [options] 00:07:56.867 00:07:56.867 CPU options: 00:07:56.867 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:56.867 (like [0,1,10]) 00:07:56.867 --lcores lcore to CPU mapping list. The list is in the format: 00:07:56.867 [<,lcores[@CPUs]>...] 00:07:56.867 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:56.867 Within the group, '-' is used for range separator, 00:07:56.867 ',' is used for single number separator. 00:07:56.867 '( )' can be omitted for single element group, 00:07:56.867 '@' can be omitted if cpus and lcores have the same value 00:07:56.867 --disable-cpumask-locks Disable CPU core lock files. 00:07:56.867 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:56.867 pollers in the app support interrupt mode) 00:07:56.867 -p, --main-core main (primary) core for DPDK 00:07:56.867 00:07:56.867 Configuration options: 00:07:56.867 -c, --config, --json JSON config file 00:07:56.867 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:56.867 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:56.867 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:56.867 --rpcs-allowed comma-separated list of permitted RPCS 00:07:56.867 --json-ignore-init-errors don't exit on invalid config entry 00:07:56.867 00:07:56.867 Memory options: 00:07:56.867 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:56.867 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:56.867 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:56.867 -R, --huge-unlink unlink huge files after initialization 00:07:56.867 -n, --mem-channels number of memory channels used for DPDK 00:07:56.867 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:56.867 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:56.867 --no-huge run without using hugepages 00:07:56.867 -i, --shm-id shared memory ID (optional) 00:07:56.867 -g, --single-file-segments force creating just one hugetlbfs file 00:07:56.867 00:07:56.867 PCI options: 00:07:56.867 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:56.867 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:56.867 -u, --no-pci disable PCI access 00:07:56.867 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:56.867 00:07:56.867 Log options: 00:07:56.867 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:56.867 --silence-noticelog disable notice level logging to stderr 00:07:56.867 00:07:56.867 Trace options: 00:07:56.867 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:56.867 setting 0 to disable trace (default 32768) 00:07:56.867 Tracepoints vary in size and can use more than one trace entry. 00:07:56.867 -e, --tpoint-group [:] 00:07:56.867 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:56.867 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:56.867 a tracepoint group. First tpoint inside a group can be enabled by 00:07:56.867 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:07:56.867 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:56.867 in /include/spdk_internal/trace_defs.h 00:07:56.867 00:07:56.867 Other options: 00:07:56.867 -h, --help show this usage 00:07:56.867 -v, --version print SPDK version 00:07:56.867 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:56.867 --env-context Opaque context for use of the env implementation 00:07:56.867 [2024-07-15 21:21:30.099309] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:56.867 passed 00:07:56.867 00:07:56.867 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.867 suites 1 1 n/a 0 0 00:07:56.867 tests 1 1 1 0 0 00:07:56.867 asserts 8 8 8 0 n/a 00:07:56.867 00:07:56.867 Elapsed time = 0.002 seconds 00:07:56.867 21:21:30 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:56.867 00:07:56.867 00:07:56.867 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.867 http://cunit.sourceforge.net/ 00:07:56.867 00:07:56.867 00:07:56.867 Suite: app_suite 00:07:56.867 Test: test_create_reactor ...passed 00:07:56.867 Test: test_init_reactors ...passed 00:07:56.867 Test: test_event_call ...passed 00:07:56.867 Test: test_schedule_thread ...passed 00:07:56.867 Test: test_reschedule_thread ...passed 00:07:56.867 Test: test_bind_thread ...passed 00:07:56.867 Test: test_for_each_reactor ...passed 00:07:56.867 Test: test_reactor_stats ...passed 00:07:56.867 Test: test_scheduler ...passed 00:07:56.867 Test: test_governor ...passed 00:07:56.867 00:07:56.867 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.867 suites 1 1 n/a 0 0 00:07:56.867 tests 10 10 10 0 0 00:07:56.867 asserts 344 344 344 0 n/a 00:07:56.867 00:07:56.867 Elapsed time = 0.014 seconds 00:07:56.867 00:07:56.867 real 0m0.115s 00:07:56.867 user 0m0.071s 00:07:56.867 sys 0m0.039s 00:07:56.867 21:21:30 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.867 21:21:30 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:07:56.867 ************************************ 00:07:56.867 END TEST unittest_event 00:07:56.867 ************************************ 00:07:56.867 21:21:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:57.127 21:21:30 unittest -- unit/unittest.sh@235 -- # uname -s 00:07:57.127 21:21:30 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:07:57.127 21:21:30 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:07:57.127 21:21:30 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.127 21:21:30 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.127 21:21:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:57.127 ************************************ 00:07:57.127 START TEST unittest_ftl 00:07:57.127 ************************************ 00:07:57.127 21:21:30 unittest.unittest_ftl -- common/autotest_common.sh@1123 -- # unittest_ftl 00:07:57.127 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:57.127 00:07:57.127 00:07:57.127 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.127 http://cunit.sourceforge.net/ 00:07:57.127 00:07:57.127 00:07:57.127 Suite: ftl_band_suite 00:07:57.127 Test: test_band_block_offset_from_addr_base ...passed 00:07:57.127 Test: 
test_band_block_offset_from_addr_offset ...passed 00:07:57.127 Test: test_band_addr_from_block_offset ...passed 00:07:57.127 Test: test_band_set_addr ...passed 00:07:57.127 Test: test_invalidate_addr ...passed 00:07:57.127 Test: test_next_xfer_addr ...passed 00:07:57.127 00:07:57.127 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.127 suites 1 1 n/a 0 0 00:07:57.127 tests 6 6 6 0 0 00:07:57.127 asserts 30356 30356 30356 0 n/a 00:07:57.127 00:07:57.127 Elapsed time = 0.122 seconds 00:07:57.127 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:57.127 00:07:57.127 00:07:57.127 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.127 http://cunit.sourceforge.net/ 00:07:57.127 00:07:57.127 00:07:57.127 Suite: ftl_bitmap 00:07:57.127 Test: test_ftl_bitmap_create ...[2024-07-15 21:21:30.487786] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:57.127 [2024-07-15 21:21:30.488255] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:57.127 passed 00:07:57.127 Test: test_ftl_bitmap_get ...passed 00:07:57.127 Test: test_ftl_bitmap_set ...passed 00:07:57.127 Test: test_ftl_bitmap_clear ...passed 00:07:57.127 Test: test_ftl_bitmap_find_first_set ...passed 00:07:57.127 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:57.127 Test: test_ftl_bitmap_count_set ...passed 00:07:57.127 00:07:57.127 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.127 suites 1 1 n/a 0 0 00:07:57.127 tests 7 7 7 0 0 00:07:57.127 asserts 137 137 137 0 n/a 00:07:57.127 00:07:57.127 Elapsed time = 0.002 seconds 00:07:57.387 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:57.387 00:07:57.387 00:07:57.387 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.387 http://cunit.sourceforge.net/ 00:07:57.387 00:07:57.387 00:07:57.387 Suite: ftl_io_suite 00:07:57.387 Test: test_completion ...passed 00:07:57.387 Test: test_multiple_ios ...passed 00:07:57.387 00:07:57.387 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.387 suites 1 1 n/a 0 0 00:07:57.387 tests 2 2 2 0 0 00:07:57.387 asserts 47 47 47 0 n/a 00:07:57.387 00:07:57.387 Elapsed time = 0.003 seconds 00:07:57.387 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:57.387 00:07:57.387 00:07:57.387 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.387 http://cunit.sourceforge.net/ 00:07:57.387 00:07:57.387 00:07:57.387 Suite: ftl_mngt 00:07:57.387 Test: test_next_step ...passed 00:07:57.387 Test: test_continue_step ...passed 00:07:57.387 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:57.387 Test: test_fail_step ...passed 00:07:57.387 Test: test_mngt_call_and_call_rollback ...passed 00:07:57.387 Test: test_nested_process_failure ...passed 00:07:57.387 Test: test_call_init_success ...passed 00:07:57.387 Test: test_call_init_failure ...passed 00:07:57.387 00:07:57.387 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.387 suites 1 1 n/a 0 0 00:07:57.387 tests 8 8 8 0 0 00:07:57.387 asserts 196 196 196 0 n/a 00:07:57.387 00:07:57.387 Elapsed time = 0.002 seconds 00:07:57.387 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:57.387 00:07:57.387 00:07:57.387 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.387 http://cunit.sourceforge.net/ 00:07:57.387 00:07:57.387 00:07:57.387 Suite: ftl_mempool 00:07:57.387 Test: test_ftl_mempool_create ...passed 00:07:57.387 Test: test_ftl_mempool_get_put ...passed 00:07:57.387 00:07:57.387 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.387 suites 1 1 n/a 0 0 00:07:57.387 tests 2 2 2 0 0 00:07:57.387 asserts 36 36 36 0 n/a 00:07:57.387 00:07:57.387 Elapsed time = 0.000 seconds 00:07:57.387 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:57.387 00:07:57.387 00:07:57.387 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.387 http://cunit.sourceforge.net/ 00:07:57.387 00:07:57.387 00:07:57.387 Suite: ftl_addr64_suite 00:07:57.387 Test: test_addr_cached ...passed 00:07:57.387 00:07:57.387 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.387 suites 1 1 n/a 0 0 00:07:57.387 tests 1 1 1 0 0 00:07:57.387 asserts 1536 1536 1536 0 n/a 00:07:57.387 00:07:57.387 Elapsed time = 0.000 seconds 00:07:57.387 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:57.387 00:07:57.387 00:07:57.387 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.387 http://cunit.sourceforge.net/ 00:07:57.387 00:07:57.387 00:07:57.387 Suite: ftl_sb 00:07:57.387 Test: test_sb_crc_v2 ...passed 00:07:57.387 Test: test_sb_crc_v3 ...passed 00:07:57.387 Test: test_sb_v3_md_layout ...[2024-07-15 21:21:30.686094] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:57.387 [2024-07-15 21:21:30.686583] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:57.387 [2024-07-15 21:21:30.686692] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:57.387 [2024-07-15 21:21:30.686785] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:57.388 [2024-07-15 21:21:30.686870] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:57.388 [2024-07-15 21:21:30.687023] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:57.388 [2024-07-15 21:21:30.687107] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:57.388 [2024-07-15 21:21:30.687217] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:57.388 [2024-07-15 21:21:30.687372] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:57.388 [2024-07-15 21:21:30.687463] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 
00:07:57.388 [2024-07-15 21:21:30.687587] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:57.388 passed 00:07:57.388 Test: test_sb_v5_md_layout ...passed 00:07:57.388 00:07:57.388 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.388 suites 1 1 n/a 0 0 00:07:57.388 tests 4 4 4 0 0 00:07:57.388 asserts 160 160 160 0 n/a 00:07:57.388 00:07:57.388 Elapsed time = 0.003 seconds 00:07:57.388 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:57.388 00:07:57.388 00:07:57.388 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.388 http://cunit.sourceforge.net/ 00:07:57.388 00:07:57.388 00:07:57.388 Suite: ftl_layout_upgrade 00:07:57.388 Test: test_l2p_upgrade ...passed 00:07:57.388 00:07:57.388 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.388 suites 1 1 n/a 0 0 00:07:57.388 tests 1 1 1 0 0 00:07:57.388 asserts 152 152 152 0 n/a 00:07:57.388 00:07:57.388 Elapsed time = 0.001 seconds 00:07:57.388 21:21:30 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:07:57.648 00:07:57.648 00:07:57.648 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.648 http://cunit.sourceforge.net/ 00:07:57.648 00:07:57.648 00:07:57.648 Suite: ftl_p2l_suite 00:07:57.648 Test: test_p2l_num_pages ...passed 00:07:57.907 Test: test_ckpt_issue ...passed 00:07:58.166 Test: test_persist_band_p2l ...passed 00:07:58.734 Test: test_clean_restore_p2l ...passed 00:07:59.304 Test: test_dirty_restore_p2l ...passed 00:07:59.304 00:07:59.304 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.304 suites 1 1 n/a 0 0 00:07:59.304 tests 5 5 5 0 0 00:07:59.304 asserts 10020 10020 10020 0 n/a 00:07:59.304 00:07:59.304 Elapsed time = 1.885 seconds 00:07:59.565 ************************************ 00:07:59.565 END TEST unittest_ftl 00:07:59.565 ************************************ 00:07:59.565 00:07:59.565 real 0m2.435s 00:07:59.565 user 0m0.791s 00:07:59.565 sys 0m1.634s 00:07:59.565 21:21:32 unittest.unittest_ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.565 21:21:32 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:07:59.565 21:21:32 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:59.565 21:21:32 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:59.565 21:21:32 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.565 21:21:32 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.565 21:21:32 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:59.565 ************************************ 00:07:59.565 START TEST unittest_accel 00:07:59.565 ************************************ 00:07:59.565 21:21:32 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:59.565 00:07:59.565 00:07:59.565 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.565 http://cunit.sourceforge.net/ 00:07:59.565 00:07:59.565 00:07:59.565 Suite: accel_sequence 00:07:59.565 Test: test_sequence_fill_copy ...passed 00:07:59.565 Test: test_sequence_abort ...passed 00:07:59.565 Test: test_sequence_append_error ...passed 00:07:59.565 Test: test_sequence_completion_error 
...[2024-07-15 21:21:32.781396] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f4d27d257c0 00:07:59.565 [2024-07-15 21:21:32.781790] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f4d27d257c0 00:07:59.565 [2024-07-15 21:21:32.781928] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f4d27d257c0 00:07:59.565 [2024-07-15 21:21:32.782037] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f4d27d257c0 00:07:59.565 passed 00:07:59.565 Test: test_sequence_decompress ...passed 00:07:59.565 Test: test_sequence_reverse ...passed 00:07:59.565 Test: test_sequence_copy_elision ...passed 00:07:59.565 Test: test_sequence_accel_buffers ...passed 00:07:59.565 Test: test_sequence_memory_domain ...[2024-07-15 21:21:32.790760] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1761:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:59.565 [2024-07-15 21:21:32.790939] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1800:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:59.565 passed 00:07:59.565 Test: test_sequence_module_memory_domain ...passed 00:07:59.565 Test: test_sequence_crypto ...passed 00:07:59.565 Test: test_sequence_driver ...[2024-07-15 21:21:32.795612] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1908:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f4d26e697c0 using driver: ut 00:07:59.565 [2024-07-15 21:21:32.795714] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1972:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f4d26e697c0 through driver: ut 00:07:59.565 passed 00:07:59.565 Test: test_sequence_same_iovs ...passed 00:07:59.565 Test: test_sequence_crc32 ...passed 00:07:59.565 Suite: accel 00:07:59.565 Test: test_spdk_accel_task_complete ...passed 00:07:59.565 Test: test_get_task ...passed 00:07:59.565 Test: test_spdk_accel_submit_copy ...passed 00:07:59.565 Test: test_spdk_accel_submit_dualcast ...[2024-07-15 21:21:32.799335] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:59.565 [2024-07-15 21:21:32.799405] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:59.565 passed 00:07:59.565 Test: test_spdk_accel_submit_compare ...passed 00:07:59.565 Test: test_spdk_accel_submit_fill ...passed 00:07:59.565 Test: test_spdk_accel_submit_crc32c ...passed 00:07:59.565 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:59.565 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:59.565 Test: test_spdk_accel_submit_xor ...passed 00:07:59.565 Test: test_spdk_accel_module_find_by_name ...passed 00:07:59.565 Test: test_spdk_accel_module_register ...passed 00:07:59.565 00:07:59.565 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.565 suites 2 2 n/a 0 0 00:07:59.565 tests 26 26 26 0 0 00:07:59.565 asserts 830 830 830 0 n/a 00:07:59.565 00:07:59.565 Elapsed time = 0.028 seconds 00:07:59.565 00:07:59.565 real 0m0.085s 00:07:59.565 user 0m0.032s 00:07:59.565 sys 0m0.052s 00:07:59.565 21:21:32 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:07:59.565 21:21:32 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.565 ************************************ 00:07:59.565 END TEST unittest_accel 00:07:59.565 ************************************ 00:07:59.565 21:21:32 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:59.565 21:21:32 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:59.565 21:21:32 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.565 21:21:32 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.565 21:21:32 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:59.565 ************************************ 00:07:59.565 START TEST unittest_ioat 00:07:59.565 ************************************ 00:07:59.565 21:21:32 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:59.565 00:07:59.565 00:07:59.565 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.565 http://cunit.sourceforge.net/ 00:07:59.565 00:07:59.565 00:07:59.565 Suite: ioat 00:07:59.565 Test: ioat_state_check ...passed 00:07:59.565 00:07:59.565 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.565 suites 1 1 n/a 0 0 00:07:59.565 tests 1 1 1 0 0 00:07:59.565 asserts 32 32 32 0 n/a 00:07:59.565 00:07:59.565 Elapsed time = 0.000 seconds 00:07:59.839 00:07:59.839 real 0m0.043s 00:07:59.839 user 0m0.020s 00:07:59.839 sys 0m0.024s 00:07:59.839 21:21:32 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.839 21:21:32 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:07:59.839 ************************************ 00:07:59.839 END TEST unittest_ioat 00:07:59.839 ************************************ 00:07:59.839 21:21:32 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:59.839 21:21:32 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:59.839 21:21:32 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:59.839 21:21:32 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.839 21:21:32 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.839 21:21:32 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:59.839 ************************************ 00:07:59.839 START TEST unittest_idxd_user 00:07:59.839 ************************************ 00:07:59.839 21:21:32 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:59.839 00:07:59.839 00:07:59.839 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.839 http://cunit.sourceforge.net/ 00:07:59.839 00:07:59.839 00:07:59.839 Suite: idxd_user 00:07:59.839 Test: test_idxd_wait_cmd ...[2024-07-15 21:21:33.021072] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:59.839 [2024-07-15 21:21:33.021568] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:59.839 passed 00:07:59.839 Test: test_idxd_reset_dev ...[2024-07-15 21:21:33.021859] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 
00:07:59.839 [2024-07-15 21:21:33.021961] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:59.839 passed 00:07:59.839 Test: test_idxd_group_config ...passed 00:07:59.839 Test: test_idxd_wq_config ...passed 00:07:59.839 00:07:59.839 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.839 suites 1 1 n/a 0 0 00:07:59.839 tests 4 4 4 0 0 00:07:59.839 asserts 20 20 20 0 n/a 00:07:59.839 00:07:59.839 Elapsed time = 0.001 seconds 00:07:59.839 00:07:59.839 real 0m0.047s 00:07:59.839 user 0m0.027s 00:07:59.839 sys 0m0.019s 00:07:59.839 21:21:33 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.839 21:21:33 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:07:59.839 ************************************ 00:07:59.839 END TEST unittest_idxd_user 00:07:59.839 ************************************ 00:07:59.840 21:21:33 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:59.840 21:21:33 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:07:59.840 21:21:33 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.840 21:21:33 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.840 21:21:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:59.840 ************************************ 00:07:59.840 START TEST unittest_iscsi 00:07:59.840 ************************************ 00:07:59.840 21:21:33 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:07:59.840 21:21:33 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:59.840 00:07:59.840 00:07:59.840 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.840 http://cunit.sourceforge.net/ 00:07:59.840 00:07:59.840 00:07:59.840 Suite: conn_suite 00:07:59.840 Test: read_task_split_in_order_case ...passed 00:07:59.840 Test: read_task_split_reverse_order_case ...passed 00:07:59.840 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:59.840 Test: process_non_read_task_completion_test ...passed 00:07:59.840 Test: free_tasks_on_connection ...passed 00:07:59.840 Test: free_tasks_with_queued_datain ...passed 00:07:59.840 Test: abort_queued_datain_task_test ...passed 00:07:59.840 Test: abort_queued_datain_tasks_test ...passed 00:07:59.840 00:07:59.840 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.840 suites 1 1 n/a 0 0 00:07:59.840 tests 8 8 8 0 0 00:07:59.840 asserts 230 230 230 0 n/a 00:07:59.840 00:07:59.840 Elapsed time = 0.001 seconds 00:07:59.840 21:21:33 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:59.840 00:07:59.840 00:07:59.840 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.840 http://cunit.sourceforge.net/ 00:07:59.840 00:07:59.840 00:07:59.840 Suite: iscsi_suite 00:07:59.840 Test: param_negotiation_test ...passed 00:07:59.840 Test: list_negotiation_test ...passed 00:07:59.840 Test: parse_valid_test ...passed 00:07:59.840 Test: parse_invalid_test ...[2024-07-15 21:21:33.188440] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:59.840 [2024-07-15 21:21:33.188940] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:59.840 [2024-07-15 21:21:33.189061] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 
207:iscsi_parse_param: *ERROR*: Empty key 00:07:59.840 [2024-07-15 21:21:33.189187] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:59.840 [2024-07-15 21:21:33.189433] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:59.840 [2024-07-15 21:21:33.189525] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:59.840 [2024-07-15 21:21:33.189669] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:59.840 passed 00:07:59.840 00:07:59.840 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.840 suites 1 1 n/a 0 0 00:07:59.840 tests 4 4 4 0 0 00:07:59.840 asserts 161 161 161 0 n/a 00:07:59.840 00:07:59.840 Elapsed time = 0.008 seconds 00:08:00.102 21:21:33 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:00.102 00:08:00.102 00:08:00.102 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.102 http://cunit.sourceforge.net/ 00:08:00.102 00:08:00.102 00:08:00.102 Suite: iscsi_target_node_suite 00:08:00.102 Test: add_lun_test_cases ...[2024-07-15 21:21:33.238316] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:00.102 [2024-07-15 21:21:33.238788] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:00.102 [2024-07-15 21:21:33.238945] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:00.102 [2024-07-15 21:21:33.239038] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:00.102 [2024-07-15 21:21:33.239110] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:00.102 passed 00:08:00.102 Test: allow_any_allowed ...passed 00:08:00.102 Test: allow_ipv6_allowed ...passed 00:08:00.102 Test: allow_ipv6_denied ...passed 00:08:00.102 Test: allow_ipv6_invalid ...passed 00:08:00.102 Test: allow_ipv4_allowed ...passed 00:08:00.102 Test: allow_ipv4_denied ...passed 00:08:00.102 Test: allow_ipv4_invalid ...passed 00:08:00.102 Test: node_access_allowed ...passed 00:08:00.102 Test: node_access_denied_by_empty_netmask ...passed 00:08:00.102 Test: node_access_multi_initiator_groups_cases ...passed 00:08:00.102 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:00.102 Test: chap_param_test_cases ...[2024-07-15 21:21:33.240175] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:00.102 [2024-07-15 21:21:33.240255] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:00.102 [2024-07-15 21:21:33.240327] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:00.102 [2024-07-15 21:21:33.240377] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:00.102 [2024-07-15 21:21:33.240428] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID 
(-1) 00:08:00.102 passed 00:08:00.102 00:08:00.102 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.102 suites 1 1 n/a 0 0 00:08:00.102 tests 13 13 13 0 0 00:08:00.102 asserts 50 50 50 0 n/a 00:08:00.102 00:08:00.102 Elapsed time = 0.001 seconds 00:08:00.102 21:21:33 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:00.102 00:08:00.102 00:08:00.102 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.102 http://cunit.sourceforge.net/ 00:08:00.102 00:08:00.102 00:08:00.102 Suite: iscsi_suite 00:08:00.102 Test: op_login_check_target_test ...[2024-07-15 21:21:33.293500] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:08:00.102 passed 00:08:00.102 Test: op_login_session_normal_test ...[2024-07-15 21:21:33.293975] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:00.102 [2024-07-15 21:21:33.294052] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:00.102 [2024-07-15 21:21:33.294118] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:00.102 [2024-07-15 21:21:33.294217] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:00.102 [2024-07-15 21:21:33.294349] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:00.102 [2024-07-15 21:21:33.294478] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:00.102 [2024-07-15 21:21:33.294564] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:00.102 passed 00:08:00.102 Test: maxburstlength_test ...[2024-07-15 21:21:33.294890] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:00.102 [2024-07-15 21:21:33.294991] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:08:00.102 passed 00:08:00.102 Test: underflow_for_read_transfer_test ...passed 00:08:00.102 Test: underflow_for_zero_read_transfer_test ...passed 00:08:00.102 Test: underflow_for_request_sense_test ...passed 00:08:00.102 Test: underflow_for_check_condition_test ...passed 00:08:00.102 Test: add_transfer_task_test ...passed 00:08:00.102 Test: get_transfer_task_test ...passed 00:08:00.102 Test: del_transfer_task_test ...passed 00:08:00.102 Test: clear_all_transfer_tasks_test ...passed 00:08:00.102 Test: build_iovs_test ...passed 00:08:00.102 Test: build_iovs_with_md_test ...passed 00:08:00.102 Test: pdu_hdr_op_login_test ...[2024-07-15 21:21:33.297362] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:00.102 [2024-07-15 21:21:33.297526] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:00.102 [2024-07-15 21:21:33.297656] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 
00:08:00.102 passed 00:08:00.102 Test: pdu_hdr_op_text_test ...[2024-07-15 21:21:33.297848] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:00.102 [2024-07-15 21:21:33.297980] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:00.102 [2024-07-15 21:21:33.298060] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:00.102 passed 00:08:00.102 Test: pdu_hdr_op_logout_test ...[2024-07-15 21:21:33.298224] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:08:00.102 passed 00:08:00.102 Test: pdu_hdr_op_scsi_test ...[2024-07-15 21:21:33.298467] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:00.102 [2024-07-15 21:21:33.298547] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:00.102 [2024-07-15 21:21:33.298615] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:00.102 [2024-07-15 21:21:33.298757] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:00.102 [2024-07-15 21:21:33.298894] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:00.103 [2024-07-15 21:21:33.299127] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:00.103 passed 00:08:00.103 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-15 21:21:33.299330] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:00.103 [2024-07-15 21:21:33.299445] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:00.103 passed 00:08:00.103 Test: pdu_hdr_op_nopout_test ...[2024-07-15 21:21:33.299782] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:00.103 [2024-07-15 21:21:33.299911] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:00.103 [2024-07-15 21:21:33.299972] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:00.103 [2024-07-15 21:21:33.300029] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:00.103 passed 00:08:00.103 Test: pdu_hdr_op_data_test ...[2024-07-15 21:21:33.300141] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:00.103 [2024-07-15 21:21:33.300244] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:00.103 [2024-07-15 21:21:33.300350] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the 
dataout pdu data length is larger than the value sent by R2T PDU 00:08:00.103 [2024-07-15 21:21:33.300432] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:00.103 [2024-07-15 21:21:33.300535] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:00.103 [2024-07-15 21:21:33.300640] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:00.103 [2024-07-15 21:21:33.300688] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:00.103 passed 00:08:00.103 Test: empty_text_with_cbit_test ...passed 00:08:00.103 Test: pdu_payload_read_test ...[2024-07-15 21:21:33.302458] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:00.103 passed 00:08:00.103 Test: data_out_pdu_sequence_test ...passed 00:08:00.103 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:00.103 00:08:00.103 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.103 suites 1 1 n/a 0 0 00:08:00.103 tests 24 24 24 0 0 00:08:00.103 asserts 150253 150253 150253 0 n/a 00:08:00.103 00:08:00.103 Elapsed time = 0.015 seconds 00:08:00.103 21:21:33 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:00.103 00:08:00.103 00:08:00.103 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.103 http://cunit.sourceforge.net/ 00:08:00.103 00:08:00.103 00:08:00.103 Suite: init_grp_suite 00:08:00.103 Test: create_initiator_group_success_case ...passed 00:08:00.103 Test: find_initiator_group_success_case ...passed 00:08:00.103 Test: register_initiator_group_twice_case ...passed 00:08:00.103 Test: add_initiator_name_success_case ...passed 00:08:00.103 Test: add_initiator_name_fail_case ...[2024-07-15 21:21:33.357624] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:00.103 passed 00:08:00.103 Test: delete_all_initiator_names_success_case ...passed 00:08:00.103 Test: add_netmask_success_case ...passed 00:08:00.103 Test: add_netmask_fail_case ...[2024-07-15 21:21:33.358467] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:00.103 passed 00:08:00.103 Test: delete_all_netmasks_success_case ...passed 00:08:00.103 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:00.103 Test: netmask_overwrite_all_to_any_case ...passed 00:08:00.103 Test: add_delete_initiator_names_case ...passed 00:08:00.103 Test: add_duplicated_initiator_names_case ...passed 00:08:00.103 Test: delete_nonexisting_initiator_names_case ...passed 00:08:00.103 Test: add_delete_netmasks_case ...passed 00:08:00.103 Test: add_duplicated_netmasks_case ...passed 00:08:00.103 Test: delete_nonexisting_netmasks_case ...passed 00:08:00.103 00:08:00.103 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.103 suites 1 1 n/a 0 0 00:08:00.103 tests 17 17 17 0 0 00:08:00.103 asserts 108 108 108 0 n/a 00:08:00.103 00:08:00.103 Elapsed time = 0.002 seconds 00:08:00.103 21:21:33 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:00.103 00:08:00.103 00:08:00.103 CUnit - A unit testing 
framework for C - Version 2.1-3 00:08:00.103 http://cunit.sourceforge.net/ 00:08:00.103 00:08:00.103 00:08:00.103 Suite: portal_grp_suite 00:08:00.103 Test: portal_create_ipv4_normal_case ...passed 00:08:00.103 Test: portal_create_ipv6_normal_case ...passed 00:08:00.103 Test: portal_create_ipv4_wildcard_case ...passed 00:08:00.103 Test: portal_create_ipv6_wildcard_case ...passed 00:08:00.103 Test: portal_create_twice_case ...[2024-07-15 21:21:33.407509] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:00.103 passed 00:08:00.103 Test: portal_grp_register_unregister_case ...passed 00:08:00.103 Test: portal_grp_register_twice_case ...passed 00:08:00.103 Test: portal_grp_add_delete_case ...passed 00:08:00.103 Test: portal_grp_add_delete_twice_case ...passed 00:08:00.103 00:08:00.103 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.103 suites 1 1 n/a 0 0 00:08:00.103 tests 9 9 9 0 0 00:08:00.103 asserts 44 44 44 0 n/a 00:08:00.103 00:08:00.103 Elapsed time = 0.005 seconds 00:08:00.103 ************************************ 00:08:00.103 END TEST unittest_iscsi 00:08:00.103 ************************************ 00:08:00.103 00:08:00.103 real 0m0.330s 00:08:00.103 user 0m0.177s 00:08:00.103 sys 0m0.147s 00:08:00.103 21:21:33 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.103 21:21:33 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:08:00.363 21:21:33 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:00.363 21:21:33 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:08:00.363 21:21:33 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.363 21:21:33 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.363 21:21:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.363 ************************************ 00:08:00.363 START TEST unittest_json 00:08:00.363 ************************************ 00:08:00.363 21:21:33 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:08:00.363 21:21:33 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:00.363 00:08:00.363 00:08:00.363 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.363 http://cunit.sourceforge.net/ 00:08:00.363 00:08:00.363 00:08:00.363 Suite: json 00:08:00.363 Test: test_parse_literal ...passed 00:08:00.363 Test: test_parse_string_simple ...passed 00:08:00.363 Test: test_parse_string_control_chars ...passed 00:08:00.363 Test: test_parse_string_utf8 ...passed 00:08:00.363 Test: test_parse_string_escapes_twochar ...passed 00:08:00.363 Test: test_parse_string_escapes_unicode ...passed 00:08:00.363 Test: test_parse_number ...passed 00:08:00.363 Test: test_parse_array ...passed 00:08:00.363 Test: test_parse_object ...passed 00:08:00.363 Test: test_parse_nesting ...passed 00:08:00.363 Test: test_parse_comment ...passed 00:08:00.363 00:08:00.363 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.363 suites 1 1 n/a 0 0 00:08:00.363 tests 11 11 11 0 0 00:08:00.363 asserts 1516 1516 1516 0 n/a 00:08:00.363 00:08:00.363 Elapsed time = 0.002 seconds 00:08:00.363 21:21:33 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:00.363 00:08:00.363 00:08:00.363 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:00.363 http://cunit.sourceforge.net/ 00:08:00.363 00:08:00.363 00:08:00.363 Suite: json 00:08:00.363 Test: test_strequal ...passed 00:08:00.363 Test: test_num_to_uint16 ...passed 00:08:00.363 Test: test_num_to_int32 ...passed 00:08:00.363 Test: test_num_to_uint64 ...passed 00:08:00.363 Test: test_decode_object ...passed 00:08:00.363 Test: test_decode_array ...passed 00:08:00.363 Test: test_decode_bool ...passed 00:08:00.363 Test: test_decode_uint16 ...passed 00:08:00.363 Test: test_decode_int32 ...passed 00:08:00.363 Test: test_decode_uint32 ...passed 00:08:00.363 Test: test_decode_uint64 ...passed 00:08:00.363 Test: test_decode_string ...passed 00:08:00.363 Test: test_decode_uuid ...passed 00:08:00.363 Test: test_find ...passed 00:08:00.363 Test: test_find_array ...passed 00:08:00.363 Test: test_iterating ...passed 00:08:00.363 Test: test_free_object ...passed 00:08:00.363 00:08:00.363 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.363 suites 1 1 n/a 0 0 00:08:00.363 tests 17 17 17 0 0 00:08:00.363 asserts 236 236 236 0 n/a 00:08:00.363 00:08:00.363 Elapsed time = 0.001 seconds 00:08:00.363 21:21:33 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:00.363 00:08:00.363 00:08:00.363 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.363 http://cunit.sourceforge.net/ 00:08:00.363 00:08:00.363 00:08:00.363 Suite: json 00:08:00.363 Test: test_write_literal ...passed 00:08:00.363 Test: test_write_string_simple ...passed 00:08:00.363 Test: test_write_string_escapes ...passed 00:08:00.363 Test: test_write_string_utf16le ...passed 00:08:00.363 Test: test_write_number_int32 ...passed 00:08:00.363 Test: test_write_number_uint32 ...passed 00:08:00.363 Test: test_write_number_uint128 ...passed 00:08:00.363 Test: test_write_string_number_uint128 ...passed 00:08:00.363 Test: test_write_number_int64 ...passed 00:08:00.363 Test: test_write_number_uint64 ...passed 00:08:00.363 Test: test_write_number_double ...passed 00:08:00.363 Test: test_write_uuid ...passed 00:08:00.363 Test: test_write_array ...passed 00:08:00.363 Test: test_write_object ...passed 00:08:00.363 Test: test_write_nesting ...passed 00:08:00.363 Test: test_write_val ...passed 00:08:00.363 00:08:00.363 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.363 suites 1 1 n/a 0 0 00:08:00.363 tests 16 16 16 0 0 00:08:00.363 asserts 918 918 918 0 n/a 00:08:00.363 00:08:00.363 Elapsed time = 0.006 seconds 00:08:00.364 21:21:33 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:00.364 00:08:00.364 00:08:00.364 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.364 http://cunit.sourceforge.net/ 00:08:00.364 00:08:00.364 00:08:00.364 Suite: jsonrpc 00:08:00.364 Test: test_parse_request ...passed 00:08:00.364 Test: test_parse_request_streaming ...passed 00:08:00.364 00:08:00.364 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.364 suites 1 1 n/a 0 0 00:08:00.364 tests 2 2 2 0 0 00:08:00.364 asserts 289 289 289 0 n/a 00:08:00.364 00:08:00.364 Elapsed time = 0.004 seconds 00:08:00.364 00:08:00.364 real 0m0.195s 00:08:00.364 user 0m0.088s 00:08:00.364 sys 0m0.105s 00:08:00.364 21:21:33 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.364 21:21:33 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:08:00.364 ************************************ 00:08:00.364 END TEST 
unittest_json 00:08:00.364 ************************************ 00:08:00.622 21:21:33 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:00.622 21:21:33 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:08:00.622 21:21:33 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.622 21:21:33 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.622 21:21:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.622 ************************************ 00:08:00.622 START TEST unittest_rpc 00:08:00.622 ************************************ 00:08:00.622 21:21:33 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:08:00.622 21:21:33 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:00.622 00:08:00.622 00:08:00.622 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.622 http://cunit.sourceforge.net/ 00:08:00.622 00:08:00.622 00:08:00.622 Suite: rpc 00:08:00.622 Test: test_jsonrpc_handler ...passed 00:08:00.622 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:00.622 Test: test_rpc_get_methods ...[2024-07-15 21:21:33.773134] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:00.622 passed 00:08:00.622 Test: test_rpc_spdk_get_version ...passed 00:08:00.622 Test: test_spdk_rpc_listen_close ...passed 00:08:00.622 Test: test_rpc_run_multiple_servers ...passed 00:08:00.622 00:08:00.622 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.622 suites 1 1 n/a 0 0 00:08:00.622 tests 6 6 6 0 0 00:08:00.622 asserts 23 23 23 0 n/a 00:08:00.622 00:08:00.622 Elapsed time = 0.001 seconds 00:08:00.622 00:08:00.622 real 0m0.048s 00:08:00.622 user 0m0.012s 00:08:00.622 sys 0m0.037s 00:08:00.622 21:21:33 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.622 21:21:33 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.623 ************************************ 00:08:00.623 END TEST unittest_rpc 00:08:00.623 ************************************ 00:08:00.623 21:21:33 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:00.623 21:21:33 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:00.623 21:21:33 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.623 21:21:33 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.623 21:21:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.623 ************************************ 00:08:00.623 START TEST unittest_notify 00:08:00.623 ************************************ 00:08:00.623 21:21:33 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:00.623 00:08:00.623 00:08:00.623 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.623 http://cunit.sourceforge.net/ 00:08:00.623 00:08:00.623 00:08:00.623 Suite: app_suite 00:08:00.623 Test: notify ...passed 00:08:00.623 00:08:00.623 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.623 suites 1 1 n/a 0 0 00:08:00.623 tests 1 1 1 0 0 00:08:00.623 asserts 13 13 13 0 n/a 00:08:00.623 00:08:00.623 Elapsed time = 0.000 seconds 00:08:00.623 00:08:00.623 real 0m0.046s 00:08:00.623 user 0m0.029s 00:08:00.623 sys 0m0.017s 00:08:00.623 21:21:33 unittest.unittest_notify -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.623 21:21:33 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:08:00.623 ************************************ 00:08:00.623 END TEST unittest_notify 00:08:00.623 ************************************ 00:08:00.623 21:21:33 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:00.623 21:21:33 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:08:00.623 21:21:33 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.623 21:21:33 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.623 21:21:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.623 ************************************ 00:08:00.623 START TEST unittest_nvme 00:08:00.623 ************************************ 00:08:00.623 21:21:33 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:08:00.623 21:21:33 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:00.881 00:08:00.881 00:08:00.881 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.881 http://cunit.sourceforge.net/ 00:08:00.881 00:08:00.881 00:08:00.881 Suite: nvme 00:08:00.881 Test: test_opc_data_transfer ...passed 00:08:00.881 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:00.881 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:00.881 Test: test_trid_parse_and_compare ...[2024-07-15 21:21:33.994827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:00.881 [2024-07-15 21:21:33.995903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:00.881 [2024-07-15 21:21:33.996253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:00.881 [2024-07-15 21:21:33.996485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:00.881 [2024-07-15 21:21:33.996711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:08:00.881 [2024-07-15 21:21:33.997012] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:00.881 passed 00:08:00.881 Test: test_trid_trtype_str ...passed 00:08:00.881 Test: test_trid_adrfam_str ...passed 00:08:00.881 Test: test_nvme_ctrlr_probe ...[2024-07-15 21:21:33.997804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:00.881 passed 00:08:00.881 Test: test_spdk_nvme_probe ...[2024-07-15 21:21:33.998250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:00.881 [2024-07-15 21:21:33.998431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:00.881 [2024-07-15 21:21:33.998731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:00.881 [2024-07-15 21:21:33.998963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:00.881 passed 00:08:00.881 Test: test_spdk_nvme_connect ...[2024-07-15 21:21:33.999341] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: 
*ERROR*: No transport ID specified 00:08:00.881 [2024-07-15 21:21:34.000144] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:00.881 passed 00:08:00.881 Test: test_nvme_ctrlr_probe_internal ...[2024-07-15 21:21:34.000641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:00.881 [2024-07-15 21:21:34.000790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:00.881 passed 00:08:00.881 Test: test_nvme_init_controllers ...[2024-07-15 21:21:34.001034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:00.881 passed 00:08:00.881 Test: test_nvme_driver_init ...[2024-07-15 21:21:34.001317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:00.881 [2024-07-15 21:21:34.001488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:00.881 [2024-07-15 21:21:34.109662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:00.881 [2024-07-15 21:21:34.110399] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:00.881 passed 00:08:00.881 Test: test_spdk_nvme_detach ...passed 00:08:00.881 Test: test_nvme_completion_poll_cb ...passed 00:08:00.881 Test: test_nvme_user_copy_cmd_complete ...passed 00:08:00.881 Test: test_nvme_allocate_request_null ...passed 00:08:00.881 Test: test_nvme_allocate_request ...passed 00:08:00.882 Test: test_nvme_free_request ...passed 00:08:00.882 Test: test_nvme_allocate_request_user_copy ...passed 00:08:00.882 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:00.882 Test: test_nvme_request_check_timeout ...passed 00:08:00.882 Test: test_nvme_wait_for_completion ...passed 00:08:00.882 Test: test_spdk_nvme_parse_func ...passed 00:08:00.882 Test: test_spdk_nvme_detach_async ...passed 00:08:00.882 Test: test_nvme_parse_addr ...[2024-07-15 21:21:34.113323] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:00.882 passed 00:08:00.882 00:08:00.882 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.882 suites 1 1 n/a 0 0 00:08:00.882 tests 25 25 25 0 0 00:08:00.882 asserts 326 326 326 0 n/a 00:08:00.882 00:08:00.882 Elapsed time = 0.007 seconds 00:08:00.882 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:00.882 00:08:00.882 00:08:00.882 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.882 http://cunit.sourceforge.net/ 00:08:00.882 00:08:00.882 00:08:00.882 Suite: nvme_ctrlr 00:08:00.882 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-15 21:21:34.164901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 passed 00:08:00.882 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-15 21:21:34.166900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 passed 00:08:00.882 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-15 
21:21:34.168246] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 passed 00:08:00.882 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-15 21:21:34.169580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 passed 00:08:00.882 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-15 21:21:34.170921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 [2024-07-15 21:21:34.172089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 21:21:34.173317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 21:21:34.174519] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:00.882 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-15 21:21:34.177023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 [2024-07-15 21:21:34.179289] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 21:21:34.180497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:00.882 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-15 21:21:34.182965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 [2024-07-15 21:21:34.184165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 21:21:34.186551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:00.882 Test: test_nvme_ctrlr_init_delay ...[2024-07-15 21:21:34.189224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 passed 00:08:00.882 Test: test_alloc_io_qpair_rr_1 ...[2024-07-15 21:21:34.190745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 [2024-07-15 21:21:34.191047] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:00.882 [2024-07-15 21:21:34.191334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:00.882 [2024-07-15 21:21:34.191492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:00.882 [2024-07-15 21:21:34.191592] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:00.882 passed 00:08:00.882 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:00.882 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:00.882 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-15 21:21:34.192073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 passed 00:08:00.882 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-15 21:21:34.192476] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 [2024-07-15 21:21:34.192741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:00.882 passed 00:08:00.882 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-15 21:21:34.193405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:00.882 [2024-07-15 21:21:34.193744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:00.882 [2024-07-15 21:21:34.194004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:08:00.882 [2024-07-15 21:21:34.194187] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:00.882 passed 00:08:00.882 Test: test_nvme_ctrlr_fail ...[2024-07-15 21:21:34.194486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
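The recurring "[] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value" lines in this nvme_ctrlr suite come from option validation during controller construction: an unset (zero) admin queue size is clamped up to a spec-imposed floor and warned about rather than treated as a fatal error. A simplified sketch of that clamp-and-warn pattern follows; the struct, function name, and minimum constant are placeholders, not SPDK's actual definitions (the NVMe spec's real floor is 2 entries, since a one-entry queue cannot distinguish full from empty).

    #include <stdint.h>
    #include <stdio.h>

    #define MIN_ADMIN_QUEUE_ENTRIES 2u   /* illustrative floor */

    struct ctrlr_opts {
            uint16_t admin_queue_size;
    };

    /* Clamp a too-small admin queue size instead of failing controller construction. */
    static void clamp_admin_queue_size(struct ctrlr_opts *opts)
    {
            if (opts->admin_queue_size < MIN_ADMIN_QUEUE_ENTRIES) {
                    fprintf(stderr, "admin_queue_size %u is less than minimum, using %u\n",
                            opts->admin_queue_size, MIN_ADMIN_QUEUE_ENTRIES);
                    opts->admin_queue_size = MIN_ADMIN_QUEUE_ENTRIES;
            }
    }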
00:08:00.882 passed 00:08:00.882 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:00.882 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:00.882 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-15 21:21:34.195123] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:00.882 passed 00:08:00.882 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:00.882 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-15 21:21:34.197016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:01.142 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:01.142 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:01.142 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-15 21:21:34.426176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-15 21:21:34.432969] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-15 21:21:34.434182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 [2024-07-15 21:21:34.434255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3002:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:01.142 passed 00:08:01.142 Test: test_alloc_io_qpair_fail ...[2024-07-15 21:21:34.435419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 [2024-07-15 21:21:34.435478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_add_remove_process ...passed 00:08:01.142 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:08:01.142 Test: test_nvme_ctrlr_set_state ...[2024-07-15 21:21:34.435712] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1546:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
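The "Specified timeout would cause integer overflow. Defaulting to no timeout." message just above comes from converting a caller-supplied millisecond timeout into an absolute tick deadline: if the multiplication or the addition would wrap a 64-bit counter, the code falls back to "no timeout" instead of producing a deadline in the past. A hedged sketch of that guard is below; the types, names, and the tick source are assumptions for illustration, not SPDK's exact code.

    #include <stdint.h>

    #define NO_TIMEOUT UINT64_MAX

    /* Compute now + timeout_ms * ticks_per_ms, falling back to "no timeout" on overflow. */
    static uint64_t deadline_ticks(uint64_t now, uint64_t timeout_ms, uint64_t ticks_per_ms)
    {
            uint64_t delta;

            if (timeout_ms != 0 && ticks_per_ms > UINT64_MAX / timeout_ms) {
                    return NO_TIMEOUT;      /* timeout_ms * ticks_per_ms would overflow */
            }
            delta = timeout_ms * ticks_per_ms;

            if (delta > UINT64_MAX - now) {
                    return NO_TIMEOUT;      /* now + delta would overflow */
            }
            return now + delta;
    }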
00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-15 21:21:34.435800] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-15 21:21:34.453515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-15 21:21:34.483281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_reset ...[2024-07-15 21:21:34.484759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_aer_callback ...[2024-07-15 21:21:34.485102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-15 21:21:34.486477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:01.142 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:01.142 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-15 21:21:34.488161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:01.142 Test: test_nvme_ctrlr_ana_resize ...[2024-07-15 21:21:34.489506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.142 passed 00:08:01.142 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:01.142 Test: test_nvme_transport_ctrlr_ready ...[2024-07-15 21:21:34.490995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:01.142 passed 00:08:01.143 Test: test_nvme_ctrlr_disable ...[2024-07-15 21:21:34.491060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4204:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:08:01.143 [2024-07-15 21:21:34.491167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:01.143 passed 00:08:01.143 00:08:01.143 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.143 suites 1 1 n/a 0 0 00:08:01.143 tests 44 44 44 0 0 00:08:01.143 asserts 10434 10434 10434 0 n/a 00:08:01.143 00:08:01.143 Elapsed time = 0.282 seconds 00:08:01.407 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 
00:08:01.407 00:08:01.407 00:08:01.407 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.407 http://cunit.sourceforge.net/ 00:08:01.407 00:08:01.407 00:08:01.407 Suite: nvme_ctrlr_cmd 00:08:01.407 Test: test_get_log_pages ...passed 00:08:01.407 Test: test_set_feature_cmd ...passed 00:08:01.407 Test: test_set_feature_ns_cmd ...passed 00:08:01.407 Test: test_get_feature_cmd ...passed 00:08:01.407 Test: test_get_feature_ns_cmd ...passed 00:08:01.407 Test: test_abort_cmd ...passed 00:08:01.407 Test: test_set_host_id_cmds ...[2024-07-15 21:21:34.552470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:01.407 passed 00:08:01.407 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:01.407 Test: test_io_raw_cmd ...passed 00:08:01.407 Test: test_io_raw_cmd_with_md ...passed 00:08:01.407 Test: test_namespace_attach ...passed 00:08:01.407 Test: test_namespace_detach ...passed 00:08:01.407 Test: test_namespace_create ...passed 00:08:01.407 Test: test_namespace_delete ...passed 00:08:01.407 Test: test_doorbell_buffer_config ...passed 00:08:01.407 Test: test_format_nvme ...passed 00:08:01.407 Test: test_fw_commit ...passed 00:08:01.407 Test: test_fw_image_download ...passed 00:08:01.407 Test: test_sanitize ...passed 00:08:01.407 Test: test_directive ...passed 00:08:01.407 Test: test_nvme_request_add_abort ...passed 00:08:01.407 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:01.407 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:01.407 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:01.407 00:08:01.407 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.407 suites 1 1 n/a 0 0 00:08:01.407 tests 24 24 24 0 0 00:08:01.407 asserts 198 198 198 0 n/a 00:08:01.407 00:08:01.407 Elapsed time = 0.001 seconds 00:08:01.407 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:01.407 00:08:01.407 00:08:01.407 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.407 http://cunit.sourceforge.net/ 00:08:01.407 00:08:01.407 00:08:01.407 Suite: nvme_ctrlr_cmd 00:08:01.407 Test: test_geometry_cmd ...passed 00:08:01.407 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:01.407 00:08:01.407 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.407 suites 1 1 n/a 0 0 00:08:01.407 tests 2 2 2 0 0 00:08:01.407 asserts 7 7 7 0 n/a 00:08:01.407 00:08:01.407 Elapsed time = 0.000 seconds 00:08:01.407 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:01.407 00:08:01.407 00:08:01.407 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.407 http://cunit.sourceforge.net/ 00:08:01.407 00:08:01.407 00:08:01.407 Suite: nvme 00:08:01.407 Test: test_nvme_ns_construct ...passed 00:08:01.407 Test: test_nvme_ns_uuid ...passed 00:08:01.407 Test: test_nvme_ns_csi ...passed 00:08:01.407 Test: test_nvme_ns_data ...passed 00:08:01.407 Test: test_nvme_ns_set_identify_data ...passed 00:08:01.407 Test: test_spdk_nvme_ns_get_values ...passed 00:08:01.407 Test: test_spdk_nvme_ns_is_active ...passed 00:08:01.407 Test: spdk_nvme_ns_supports ...passed 00:08:01.407 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:01.407 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:01.407 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:01.407 Test: 
test_nvme_ns_find_id_desc ...passed 00:08:01.407 00:08:01.407 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.407 suites 1 1 n/a 0 0 00:08:01.407 tests 12 12 12 0 0 00:08:01.407 asserts 95 95 95 0 n/a 00:08:01.407 00:08:01.407 Elapsed time = 0.001 seconds 00:08:01.407 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:01.407 00:08:01.407 00:08:01.407 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.407 http://cunit.sourceforge.net/ 00:08:01.407 00:08:01.407 00:08:01.407 Suite: nvme_ns_cmd 00:08:01.407 Test: split_test ...passed 00:08:01.407 Test: split_test2 ...passed 00:08:01.407 Test: split_test3 ...passed 00:08:01.407 Test: split_test4 ...passed 00:08:01.407 Test: test_nvme_ns_cmd_flush ...passed 00:08:01.407 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:01.407 Test: test_nvme_ns_cmd_copy ...passed 00:08:01.407 Test: test_io_flags ...[2024-07-15 21:21:34.679102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:01.407 passed 00:08:01.407 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:01.407 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:01.407 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:01.407 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:01.407 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:01.407 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:01.407 Test: test_cmd_child_request ...passed 00:08:01.407 Test: test_nvme_ns_cmd_readv ...passed 00:08:01.407 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:01.407 Test: test_nvme_ns_cmd_writev ...[2024-07-15 21:21:34.680833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:01.407 passed 00:08:01.407 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:01.407 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:01.407 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:01.407 Test: test_nvme_ns_cmd_comparev ...passed 00:08:01.407 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:01.407 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:01.407 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:01.407 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:01.407 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:01.407 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-15 21:21:34.683205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:01.407 passed 00:08:01.407 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-15 21:21:34.683384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:01.407 passed 00:08:01.407 Test: test_nvme_ns_cmd_verify ...passed 00:08:01.407 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:01.407 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:01.407 00:08:01.407 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.408 suites 1 1 n/a 0 0 00:08:01.408 tests 32 32 32 0 0 00:08:01.408 asserts 550 550 550 0 n/a 00:08:01.408 00:08:01.408 Elapsed time = 0.005 seconds 00:08:01.408 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:01.408 00:08:01.408 00:08:01.408 CUnit - A unit 
testing framework for C - Version 2.1-3 00:08:01.408 http://cunit.sourceforge.net/ 00:08:01.408 00:08:01.408 00:08:01.408 Suite: nvme_ns_cmd 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:01.408 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:01.408 00:08:01.408 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.408 suites 1 1 n/a 0 0 00:08:01.408 tests 12 12 12 0 0 00:08:01.408 asserts 123 123 123 0 n/a 00:08:01.408 00:08:01.408 Elapsed time = 0.001 seconds 00:08:01.408 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:01.666 00:08:01.666 00:08:01.666 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.666 http://cunit.sourceforge.net/ 00:08:01.667 00:08:01.667 00:08:01.667 Suite: nvme_qpair 00:08:01.667 Test: test3 ...passed 00:08:01.667 Test: test_ctrlr_failed ...passed 00:08:01.667 Test: struct_packing ...passed 00:08:01.667 Test: test_nvme_qpair_process_completions ...[2024-07-15 21:21:34.781278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:01.667 [2024-07-15 21:21:34.781733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:01.667 [2024-07-15 21:21:34.781873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:01.667 [2024-07-15 21:21:34.782020] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:01.667 passed 00:08:01.667 Test: test_nvme_completion_is_retry ...passed 00:08:01.667 Test: test_get_status_string ...passed 00:08:01.667 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:01.667 Test: test_nvme_qpair_submit_request ...passed 00:08:01.667 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:01.667 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:01.667 Test: test_nvme_qpair_init_deinit ...[2024-07-15 21:21:34.783006] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:01.667 passed 00:08:01.667 Test: test_nvme_get_sgl_print_info ...passed 00:08:01.667 00:08:01.667 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.667 suites 1 1 n/a 0 0 00:08:01.667 tests 12 12 12 0 0 00:08:01.667 asserts 154 154 154 0 n/a 00:08:01.667 00:08:01.667 Elapsed time = 0.002 seconds 00:08:01.667 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@96 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:01.667 00:08:01.667 00:08:01.667 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.667 http://cunit.sourceforge.net/ 00:08:01.667 00:08:01.667 00:08:01.667 Suite: nvme_pcie 00:08:01.667 Test: test_prp_list_append ...[2024-07-15 21:21:34.823859] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:01.667 [2024-07-15 21:21:34.824229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:01.667 [2024-07-15 21:21:34.824323] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:01.667 [2024-07-15 21:21:34.824647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:01.667 [2024-07-15 21:21:34.824775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:01.667 passed 00:08:01.667 Test: test_nvme_pcie_hotplug_monitor ...passed 00:08:01.667 Test: test_shadow_doorbell_update ...passed 00:08:01.667 Test: test_build_contig_hw_sgl_request ...passed 00:08:01.667 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:01.667 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:01.667 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:01.667 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-15 21:21:34.825356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:01.667 passed 00:08:01.667 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:01.667 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:01.667 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-15 21:21:34.825603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
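The nvme_pcie prp_list_append failures above ("virt_addr ... not dword aligned", "PRP 2 not page aligned", "out of PRP entries") exercise the alignment rules for NVMe PRP entries: every entry must be dword aligned, and only the first entry may carry an offset within a memory page; later entries have to be page aligned. A simplified validation sketch of those two checks follows; the page size constant and names are illustrative rather than SPDK's.

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* assumed memory page size for the example */

    /* PRP1 may start at any dword-aligned offset inside a page;
     * PRP entries after the first must point at page-aligned addresses. */
    static bool prp_entry_is_valid(uint64_t addr, bool is_first_entry)
    {
            if (addr & 0x3) {
                    return false;           /* not dword aligned */
            }
            if (!is_first_entry && (addr & (PAGE_SIZE - 1))) {
                    return false;           /* PRP 2..N must be page aligned */
            }
            return true;
    }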
00:08:01.667 passed 00:08:01.667 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-15 21:21:34.825745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:01.667 passed 00:08:01.667 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-15 21:21:34.825847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:01.667 passed 00:08:01.667 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-15 21:21:34.825948] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:01.667 passed 00:08:01.667 00:08:01.667 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.667 suites 1 1 n/a 0 0 00:08:01.667 tests 14 14 14 0 0 00:08:01.667 asserts 235 235 235 0 n/a 00:08:01.667 00:08:01.667 Elapsed time = 0.001 seconds 00:08:01.667 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:01.667 00:08:01.667 00:08:01.667 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.667 http://cunit.sourceforge.net/ 00:08:01.667 00:08:01.667 00:08:01.667 Suite: nvme_ns_cmd 00:08:01.667 Test: nvme_poll_group_create_test ...passed 00:08:01.667 Test: nvme_poll_group_add_remove_test ...passed 00:08:01.667 Test: nvme_poll_group_process_completions ...passed 00:08:01.667 Test: nvme_poll_group_destroy_test ...passed 00:08:01.667 Test: nvme_poll_group_get_free_stats ...passed 00:08:01.667 00:08:01.667 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.667 suites 1 1 n/a 0 0 00:08:01.667 tests 5 5 5 0 0 00:08:01.667 asserts 75 75 75 0 n/a 00:08:01.667 00:08:01.667 Elapsed time = 0.000 seconds 00:08:01.667 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:01.667 00:08:01.667 00:08:01.667 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.667 http://cunit.sourceforge.net/ 00:08:01.667 00:08:01.667 00:08:01.667 Suite: nvme_quirks 00:08:01.667 Test: test_nvme_quirks_striping ...passed 00:08:01.667 00:08:01.667 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.667 suites 1 1 n/a 0 0 00:08:01.667 tests 1 1 1 0 0 00:08:01.667 asserts 5 5 5 0 n/a 00:08:01.667 00:08:01.667 Elapsed time = 0.000 seconds 00:08:01.667 21:21:34 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:01.667 00:08:01.667 00:08:01.667 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.667 http://cunit.sourceforge.net/ 00:08:01.667 00:08:01.667 00:08:01.667 Suite: nvme_tcp 00:08:01.667 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:01.667 Test: test_nvme_tcp_build_iovs ...passed 00:08:01.667 Test: test_nvme_tcp_build_sgl_request ...[2024-07-15 21:21:34.963864] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffcefaf3b80, and the iovcnt=16, remaining_size=28672 00:08:01.667 passed 00:08:01.667 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:01.667 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:01.667 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:01.667 Test: test_nvme_tcp_req_get ...passed 00:08:01.667 Test: test_nvme_tcp_req_init ...passed 00:08:01.667 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:01.667 
Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:01.667 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-15 21:21:34.965100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf58c0 is same with the state(6) to be set 00:08:01.667 passed 00:08:01.667 Test: test_nvme_tcp_alloc_reqs ...passed 00:08:01.667 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-15 21:21:34.965577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4a70 is same with the state(5) to be set 00:08:01.667 passed 00:08:01.667 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-15 21:21:34.965740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffcefaf5600 00:08:01.668 [2024-07-15 21:21:34.965840] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:01.668 [2024-07-15 21:21:34.965951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4f30 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.966037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:01.668 [2024-07-15 21:21:34.966149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4f30 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.966222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:01.668 [2024-07-15 21:21:34.966287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4f30 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.966362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4f30 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.966430] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4f30 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.966516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4f30 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.966578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4f30 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.966659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4f30 is same with the state(5) to be set 00:08:01.668 passed 00:08:01.668 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-15 21:21:34.966927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:01.668 [2024-07-15 21:21:34.967007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:01.668 [2024-07-15 21:21:34.967278] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:01.668 passed 00:08:01.668 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:01.668 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-15 21:21:34.967556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcefaf5140): PDU Sequence Error 00:08:01.668 passed 00:08:01.668 Test: test_nvme_tcp_icresp_handle ...[2024-07-15 21:21:34.967709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:01.668 [2024-07-15 21:21:34.967776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:01.668 [2024-07-15 21:21:34.967842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4a80 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.967908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:01.668 [2024-07-15 21:21:34.967971] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4a80 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.968050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf4a80 is same with the state(0) to be set 00:08:01.668 passed 00:08:01.668 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-15 21:21:34.968193] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcefaf5600): PDU Sequence Error 00:08:01.668 passed 00:08:01.668 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-15 21:21:34.968357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffcefaf3d40 00:08:01.668 passed 00:08:01.668 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:01.668 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-15 21:21:34.968672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffcefaf33c0, errno=0, rc=0 00:08:01.668 [2024-07-15 21:21:34.968772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf33c0 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.968859] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcefaf33c0 is same with the state(5) to be set 00:08:01.668 [2024-07-15 21:21:34.968935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcefaf33c0 (0): Success 00:08:01.668 [2024-07-15 21:21:34.969002] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcefaf33c0 (0): Success 00:08:01.668 passed 00:08:01.927 Test: test_nvme_tcp_ctrlr_create_io_qpair ...passed[2024-07-15 21:21:35.050300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
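The repeated "The recv state of tqpair=... is same with the state(N) to be set" errors in this nvme_tcp suite come from a guard in the qpair receive state machine: asking for a transition into the state the qpair is already in is logged instead of being applied silently. A minimal sketch of that kind of guard is shown below; the enum values and struct are placeholders, not the real nvme_tcp definitions.

    #include <stdio.h>

    enum recv_state {
            RECV_STATE_AWAIT_PDU_READY,
            RECV_STATE_AWAIT_PDU_CH,
            RECV_STATE_AWAIT_PDU_PSH,
            RECV_STATE_AWAIT_PDU_PAYLOAD,
            RECV_STATE_ERROR,
    };

    struct tcp_qpair {
            enum recv_state recv_state;
    };

    /* Report no-op transitions; otherwise move the qpair to the requested state. */
    static void set_recv_state(struct tcp_qpair *tqpair, enum recv_state state)
    {
            if (tqpair->recv_state == state) {
                    fprintf(stderr, "recv state of tqpair=%p is already %d\n",
                            (void *)tqpair, (int)state);
                    return;
            }
            tqpair->recv_state = state;
    }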
00:08:01.927 [2024-07-15 21:21:35.050457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:01.927 00:08:01.927 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:01.927 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-15 21:21:35.050732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:01.927 [2024-07-15 21:21:35.050776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:01.927 passed 00:08:01.927 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-15 21:21:35.050981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:01.927 [2024-07-15 21:21:35.051027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:01.927 [2024-07-15 21:21:35.051121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:01.927 [2024-07-15 21:21:35.051185] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:01.927 [2024-07-15 21:21:35.051280] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:08:01.927 [2024-07-15 21:21:35.051345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:01.927 passed 00:08:01.927 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-15 21:21:35.051491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024 00:08:01.927 [2024-07-15 21:21:35.051539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:01.927 passed 00:08:01.927 00:08:01.927 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.927 suites 1 1 n/a 0 0 00:08:01.927 tests 27 27 27 0 0 00:08:01.927 asserts 624 624 624 0 n/a 00:08:01.927 00:08:01.927 Elapsed time = 0.086 seconds 00:08:01.927 21:21:35 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:01.927 00:08:01.927 00:08:01.927 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.927 http://cunit.sourceforge.net/ 00:08:01.927 00:08:01.927 00:08:01.927 Suite: nvme_transport 00:08:01.927 Test: test_nvme_get_transport ...passed 00:08:01.928 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:01.928 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:01.928 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:01.928 Test: test_ctrlr_get_memory_domains ...passed 00:08:01.928 00:08:01.928 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.928 suites 1 1 n/a 0 0 00:08:01.928 tests 5 5 5 0 0 00:08:01.928 asserts 28 28 28 0 n/a 00:08:01.928 00:08:01.928 Elapsed time = 0.000 seconds 00:08:01.928 21:21:35 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:01.928 00:08:01.928 
00:08:01.928 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.928 http://cunit.sourceforge.net/ 00:08:01.928 00:08:01.928 00:08:01.928 Suite: nvme_io_msg 00:08:01.928 Test: test_nvme_io_msg_send ...passed 00:08:01.928 Test: test_nvme_io_msg_process ...passed 00:08:01.928 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:01.928 00:08:01.928 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.928 suites 1 1 n/a 0 0 00:08:01.928 tests 3 3 3 0 0 00:08:01.928 asserts 56 56 56 0 n/a 00:08:01.928 00:08:01.928 Elapsed time = 0.000 seconds 00:08:01.928 21:21:35 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:01.928 00:08:01.928 00:08:01.928 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.928 http://cunit.sourceforge.net/ 00:08:01.928 00:08:01.928 00:08:01.928 Suite: nvme_pcie_common 00:08:01.928 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-15 21:21:35.189950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:01.928 passed 00:08:01.928 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:01.928 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:01.928 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-15 21:21:35.191194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:01.928 [2024-07-15 21:21:35.191410] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:01.928 [2024-07-15 21:21:35.191507] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:01.928 passed 00:08:01.928 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:08:01.928 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-15 21:21:35.192231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:01.928 [2024-07-15 21:21:35.192345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:01.928 passed 00:08:01.928 00:08:01.928 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.928 suites 1 1 n/a 0 0 00:08:01.928 tests 6 6 6 0 0 00:08:01.928 asserts 148 148 148 0 n/a 00:08:01.928 00:08:01.928 Elapsed time = 0.002 seconds 00:08:01.928 21:21:35 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:01.928 00:08:01.928 00:08:01.928 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.928 http://cunit.sourceforge.net/ 00:08:01.928 00:08:01.928 00:08:01.928 Suite: nvme_fabric 00:08:01.928 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:01.928 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:01.928 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:01.928 Test: test_nvme_fabric_discover_probe ...passed 00:08:01.928 Test: test_nvme_fabric_qpair_connect ...[2024-07-15 21:21:35.238501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:01.928 passed 
00:08:01.928 00:08:01.928 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.928 suites 1 1 n/a 0 0 00:08:01.928 tests 5 5 5 0 0 00:08:01.928 asserts 60 60 60 0 n/a 00:08:01.928 00:08:01.928 Elapsed time = 0.001 seconds 00:08:01.928 21:21:35 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:01.928 00:08:01.928 00:08:01.928 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.928 http://cunit.sourceforge.net/ 00:08:01.928 00:08:01.928 00:08:01.928 Suite: nvme_opal 00:08:01.928 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:01.928 Test: test_opal_add_short_atom_header ...[2024-07-15 21:21:35.284659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:01.928 passed 00:08:01.928 00:08:01.928 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.928 suites 1 1 n/a 0 0 00:08:01.928 tests 2 2 2 0 0 00:08:01.928 asserts 22 22 22 0 n/a 00:08:01.928 00:08:01.928 Elapsed time = 0.001 seconds 00:08:02.188 00:08:02.188 real 0m1.338s 00:08:02.188 user 0m0.653s 00:08:02.188 sys 0m0.521s 00:08:02.188 21:21:35 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.188 21:21:35 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.188 ************************************ 00:08:02.188 END TEST unittest_nvme 00:08:02.188 ************************************ 00:08:02.188 21:21:35 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:02.188 21:21:35 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:02.188 21:21:35 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.188 21:21:35 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.188 21:21:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:02.188 ************************************ 00:08:02.188 START TEST unittest_log 00:08:02.188 ************************************ 00:08:02.188 21:21:35 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:02.188 00:08:02.188 00:08:02.188 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.188 http://cunit.sourceforge.net/ 00:08:02.188 00:08:02.188 00:08:02.188 Suite: log 00:08:02.188 Test: log_test ...[2024-07-15 21:21:35.392120] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:08:02.188 [2024-07-15 21:21:35.392512] log_ut.c: 57:log_test: *DEBUG*: log test 00:08:02.188 log dump test: 00:08:02.188 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:02.188 spdk dump test: 00:08:02.188 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:02.188 spdk dump test: 00:08:02.188 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:02.188 00000010 65 20 63 68 61 72 73 e chars 00:08:02.188 passed 00:08:03.132 Test: deprecation ...passed 00:08:03.132 00:08:03.132 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.132 suites 1 1 n/a 0 0 00:08:03.132 tests 2 2 2 0 0 00:08:03.132 asserts 73 73 73 0 n/a 00:08:03.132 00:08:03.132 Elapsed time = 0.001 seconds 00:08:03.132 ************************************ 00:08:03.132 END TEST unittest_log 00:08:03.132 ************************************ 00:08:03.132 00:08:03.132 real 0m1.051s 00:08:03.132 user 0m0.020s 00:08:03.132 sys 0m0.031s 00:08:03.132 21:21:36 unittest.unittest_log -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.132 21:21:36 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:08:03.132 21:21:36 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:03.132 21:21:36 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:03.132 21:21:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.132 21:21:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.132 21:21:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:03.132 ************************************ 00:08:03.132 START TEST unittest_lvol 00:08:03.132 ************************************ 00:08:03.132 21:21:36 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:03.392 00:08:03.392 00:08:03.392 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.392 http://cunit.sourceforge.net/ 00:08:03.392 00:08:03.392 00:08:03.392 Suite: lvol 00:08:03.392 Test: lvs_init_unload_success ...[2024-07-15 21:21:36.513078] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:03.392 passed 00:08:03.392 Test: lvs_init_destroy_success ...[2024-07-15 21:21:36.513916] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:03.392 passed 00:08:03.392 Test: lvs_init_opts_success ...passed 00:08:03.392 Test: lvs_unload_lvs_is_null_fail ...[2024-07-15 21:21:36.514455] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:03.392 passed 00:08:03.392 Test: lvs_names ...[2024-07-15 21:21:36.514692] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:03.392 [2024-07-15 21:21:36.514787] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:08:03.392 [2024-07-15 21:21:36.514983] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:03.392 passed 00:08:03.392 Test: lvol_create_destroy_success ...passed 00:08:03.392 Test: lvol_create_fail ...[2024-07-15 21:21:36.515698] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:03.393 [2024-07-15 21:21:36.515868] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:03.393 passed 00:08:03.393 Test: lvol_destroy_fail ...[2024-07-15 21:21:36.516278] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:03.393 passed 00:08:03.393 Test: lvol_close ...[2024-07-15 21:21:36.516593] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:03.393 [2024-07-15 21:21:36.516668] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:03.393 passed 00:08:03.393 Test: lvol_resize ...passed 00:08:03.393 Test: lvol_set_read_only ...passed 00:08:03.393 Test: test_lvs_load ...[2024-07-15 21:21:36.517694] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:03.393 [2024-07-15 21:21:36.517772] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:03.393 passed 00:08:03.393 Test: lvols_load ...[2024-07-15 21:21:36.518117] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:03.393 [2024-07-15 21:21:36.518293] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:03.393 passed 00:08:03.393 Test: lvol_open ...passed 00:08:03.393 Test: lvol_snapshot ...passed 00:08:03.393 Test: lvol_snapshot_fail ...[2024-07-15 21:21:36.519306] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:03.393 passed 00:08:03.393 Test: lvol_clone ...passed 00:08:03.393 Test: lvol_clone_fail ...[2024-07-15 21:21:36.520043] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:03.393 passed 00:08:03.393 Test: lvol_iter_clones ...passed 00:08:03.393 Test: lvol_refcnt ...[2024-07-15 21:21:36.520719] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 70248964-1cfe-4c1d-8c14-6e73195ee659 because it is still open 00:08:03.393 passed 00:08:03.393 Test: lvol_names ...[2024-07-15 21:21:36.520987] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:03.393 [2024-07-15 21:21:36.521139] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:03.393 [2024-07-15 21:21:36.521435] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:03.393 passed 00:08:03.393 Test: lvol_create_thin_provisioned ...passed 00:08:03.393 Test: lvol_rename ...[2024-07-15 21:21:36.522103] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:03.393 [2024-07-15 21:21:36.522236] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:03.393 passed 00:08:03.393 Test: lvs_rename ...[2024-07-15 21:21:36.522596] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:03.393 passed 00:08:03.393 Test: lvol_inflate ...[2024-07-15 21:21:36.522944] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:03.393 passed 00:08:03.393 Test: lvol_decouple_parent ...[2024-07-15 21:21:36.523285] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:03.393 passed 00:08:03.393 Test: lvol_get_xattr ...passed 00:08:03.393 Test: lvol_esnap_reload ...passed 00:08:03.393 Test: lvol_esnap_create_bad_args ...[2024-07-15 21:21:36.523995] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:03.393 [2024-07-15 21:21:36.524069] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:03.393 [2024-07-15 21:21:36.524146] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:03.393 [2024-07-15 21:21:36.524315] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:03.393 [2024-07-15 21:21:36.524469] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:03.393 passed 00:08:03.393 Test: lvol_esnap_create_delete ...passed 00:08:03.393 Test: lvol_esnap_load_esnaps ...[2024-07-15 21:21:36.524847] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:03.393 passed 00:08:03.393 Test: lvol_esnap_missing ...[2024-07-15 21:21:36.525066] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:03.393 [2024-07-15 21:21:36.525134] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:03.393 passed 00:08:03.393 Test: lvol_esnap_hotplug ... 
00:08:03.393 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:03.393 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:03.393 [2024-07-15 21:21:36.525856] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 9b8848b3-9c22-4f40-ad2f-d58278f359f2: failed to create esnap bs_dev: error -12 00:08:03.393 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:03.393 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:03.393 [2024-07-15 21:21:36.526130] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol f81d49c6-f2fa-4cfd-bbd6-13feb2aee477: failed to create esnap bs_dev: error -12 00:08:03.393 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:03.393 [2024-07-15 21:21:36.526309] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1b0b0bf2-5028-42b8-acc4-e9d786f7924c: failed to create esnap bs_dev: error -12 00:08:03.393 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:03.393 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:03.393 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:03.393 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:03.393 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:03.393 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:03.393 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:03.393 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:03.393 passed 00:08:03.393 Test: lvol_get_by ...passed 00:08:03.393 Test: lvol_shallow_copy ...[2024-07-15 21:21:36.527634] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:08:03.393 [2024-07-15 21:21:36.527700] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 15f62fa4-531a-4f4b-9ce0-a348c1e498bd shallow copy, ext_dev must not be NULL 00:08:03.393 passed 00:08:03.393 Test: lvol_set_parent ...[2024-07-15 21:21:36.527970] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:08:03.393 [2024-07-15 21:21:36.528031] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:08:03.393 passed 00:08:03.393 Test: lvol_set_external_parent ...[2024-07-15 21:21:36.528281] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:08:03.393 [2024-07-15 21:21:36.528332] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:08:03.393 [2024-07-15 21:21:36.528415] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:08:03.393 passed 00:08:03.393 00:08:03.393 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.393 suites 1 1 n/a 0 0 00:08:03.393 tests 37 37 37 0 0 00:08:03.393 asserts 1505 1505 1505 0 n/a 00:08:03.393 00:08:03.393 Elapsed time = 0.013 seconds 00:08:03.393 ************************************ 00:08:03.393 END TEST unittest_lvol 00:08:03.393 
************************************ 00:08:03.393 00:08:03.393 real 0m0.068s 00:08:03.393 user 0m0.048s 00:08:03.393 sys 0m0.016s 00:08:03.393 21:21:36 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.393 21:21:36 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 21:21:36 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:03.394 21:21:36 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:03.394 21:21:36 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:03.394 21:21:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.394 21:21:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.394 21:21:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 ************************************ 00:08:03.394 START TEST unittest_nvme_rdma 00:08:03.394 ************************************ 00:08:03.394 21:21:36 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:03.394 00:08:03.394 00:08:03.394 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.394 http://cunit.sourceforge.net/ 00:08:03.394 00:08:03.394 00:08:03.394 Suite: nvme_rdma 00:08:03.394 Test: test_nvme_rdma_build_sgl_request ...[2024-07-15 21:21:36.644609] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:03.394 [2024-07-15 21:21:36.645094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:03.394 [2024-07-15 21:21:36.645266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:03.394 passed 00:08:03.394 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:08:03.394 Test: test_nvme_rdma_build_contig_request ...[2024-07-15 21:21:36.645595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:03.394 passed 00:08:03.394 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:03.394 Test: test_nvme_rdma_create_reqs ...[2024-07-15 21:21:36.645917] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:03.394 passed 00:08:03.394 Test: test_nvme_rdma_create_rsps ...[2024-07-15 21:21:36.646388] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:03.394 passed 00:08:03.394 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-15 21:21:36.646613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:03.394 [2024-07-15 21:21:36.646702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:03.394 passed 00:08:03.394 Test: test_nvme_rdma_poller_create ...passed 00:08:03.394 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-15 21:21:36.646972] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:03.394 passed 00:08:03.394 Test: test_nvme_rdma_ctrlr_construct ...passed 00:08:03.394 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:03.394 Test: test_nvme_rdma_req_init ...passed 00:08:03.394 Test: test_nvme_rdma_validate_cm_event ...[2024-07-15 21:21:36.647463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:03.394 [2024-07-15 21:21:36.647533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:03.394 passed 00:08:03.394 Test: test_nvme_rdma_qpair_init ...passed 00:08:03.394 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:03.394 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:03.394 Test: test_rdma_get_memory_translation ...[2024-07-15 21:21:36.647883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:03.394 [2024-07-15 21:21:36.647953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:03.394 passed 00:08:03.394 Test: test_get_rdma_qpair_from_wc ...passed 00:08:03.394 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:03.394 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-15 21:21:36.648223] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:03.394 [2024-07-15 21:21:36.648286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:03.394 passed 00:08:03.394 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-15 21:21:36.648501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:03.394 [2024-07-15 21:21:36.648574] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:03.394 [2024-07-15 21:21:36.648663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffefc118980 on poll group 0x60c000000040 00:08:03.394 [2024-07-15 21:21:36.648725] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:03.394 [2024-07-15 21:21:36.648809] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:03.394 [2024-07-15 21:21:36.648872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffefc118980 on poll group 0x60c000000040 00:08:03.394 [2024-07-15 21:21:36.648971] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:03.394 passed 00:08:03.394 00:08:03.394 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.394 suites 1 1 n/a 0 0 00:08:03.394 tests 21 21 21 0 0 00:08:03.394 asserts 397 397 397 0 n/a 00:08:03.394 00:08:03.394 Elapsed time = 0.003 seconds 00:08:03.394 00:08:03.394 real 0m0.054s 00:08:03.394 user 0m0.032s 00:08:03.394 sys 0m0.020s 00:08:03.394 21:21:36 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.394 21:21:36 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 ************************************ 00:08:03.394 END TEST unittest_nvme_rdma 00:08:03.394 ************************************ 00:08:03.394 21:21:36 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:03.394 21:21:36 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:03.394 21:21:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.394 21:21:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.394 21:21:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 ************************************ 00:08:03.394 START TEST unittest_nvmf_transport 00:08:03.394 ************************************ 00:08:03.394 21:21:36 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:03.654 00:08:03.654 00:08:03.654 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.654 http://cunit.sourceforge.net/ 00:08:03.654 00:08:03.654 00:08:03.654 Suite: nvmf 00:08:03.654 Test: test_spdk_nvmf_transport_create ...[2024-07-15 21:21:36.769959] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:03.654 [2024-07-15 21:21:36.770330] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:03.654 [2024-07-15 21:21:36.770420] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:03.654 [2024-07-15 21:21:36.770593] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:03.654 passed 00:08:03.654 Test: test_nvmf_transport_poll_group_create ...passed 00:08:03.654 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-15 21:21:36.770972] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:03.654 [2024-07-15 21:21:36.771086] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:03.654 [2024-07-15 21:21:36.771138] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:03.654 passed 00:08:03.654 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:08:03.654 00:08:03.654 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.654 suites 1 1 n/a 0 0 00:08:03.654 tests 4 4 4 0 0 00:08:03.654 asserts 49 49 49 0 n/a 00:08:03.654 00:08:03.654 Elapsed time = 0.001 seconds 00:08:03.654 00:08:03.654 real 0m0.057s 00:08:03.654 user 0m0.026s 00:08:03.654 sys 0m0.030s 00:08:03.654 21:21:36 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.654 21:21:36 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 ************************************ 00:08:03.654 END TEST unittest_nvmf_transport 00:08:03.654 ************************************ 00:08:03.654 21:21:36 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:03.654 21:21:36 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:03.654 21:21:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.654 21:21:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.654 21:21:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 ************************************ 00:08:03.654 START TEST unittest_rdma 00:08:03.654 ************************************ 00:08:03.654 21:21:36 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:03.654 00:08:03.654 00:08:03.654 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.654 http://cunit.sourceforge.net/ 00:08:03.654 00:08:03.654 00:08:03.654 Suite: rdma_common 00:08:03.654 Test: test_spdk_rdma_pd ...[2024-07-15 21:21:36.891167] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:03.654 [2024-07-15 21:21:36.891679] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:03.654 passed 00:08:03.654 00:08:03.654 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.654 suites 1 1 n/a 0 0 00:08:03.654 tests 1 1 1 0 0 00:08:03.654 asserts 31 31 31 0 n/a 00:08:03.654 00:08:03.654 Elapsed time = 0.001 seconds 00:08:03.654 00:08:03.654 real 0m0.048s 00:08:03.654 user 0m0.020s 00:08:03.654 sys 0m0.029s 00:08:03.654 21:21:36 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.654 21:21:36 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 ************************************ 00:08:03.654 END TEST unittest_rdma 00:08:03.654 ************************************ 00:08:03.654 21:21:36 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:03.654 21:21:36 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:03.654 21:21:36 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:03.654 21:21:36 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:08:03.654 21:21:36 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.654 21:21:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:03.655 ************************************ 00:08:03.655 START TEST unittest_nvme_cuse 00:08:03.655 ************************************ 00:08:03.655 21:21:36 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:03.655 00:08:03.655 00:08:03.655 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.655 http://cunit.sourceforge.net/ 00:08:03.655 00:08:03.655 00:08:03.655 Suite: nvme_cuse 00:08:03.655 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:03.655 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:03.655 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:03.655 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:03.655 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:03.655 Test: test_cuse_nvme_submit_io ...[2024-07-15 21:21:37.009826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:03.655 passed 00:08:03.655 Test: test_cuse_nvme_reset ...[2024-07-15 21:21:37.010349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:03.655 passed 00:08:04.225 Test: test_nvme_cuse_stop ...passed 00:08:04.225 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:04.225 00:08:04.225 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.225 suites 1 1 n/a 0 0 00:08:04.225 tests 9 9 9 0 0 00:08:04.225 asserts 118 118 118 0 n/a 00:08:04.225 00:08:04.225 Elapsed time = 0.505 seconds 00:08:04.225 ************************************ 00:08:04.225 END TEST unittest_nvme_cuse 00:08:04.225 ************************************ 00:08:04.225 00:08:04.225 real 0m0.551s 00:08:04.225 user 0m0.117s 00:08:04.225 sys 0m0.434s 00:08:04.225 21:21:37 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.225 21:21:37 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:08:04.225 21:21:37 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:04.225 21:21:37 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:08:04.225 21:21:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.225 21:21:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.225 21:21:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:04.487 ************************************ 00:08:04.487 START TEST unittest_nvmf 00:08:04.487 ************************************ 00:08:04.487 21:21:37 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:08:04.487 21:21:37 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:04.487 00:08:04.487 00:08:04.487 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.487 http://cunit.sourceforge.net/ 00:08:04.487 00:08:04.487 00:08:04.487 Suite: nvmf 00:08:04.487 Test: test_get_log_page ...[2024-07-15 21:21:37.622321] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:04.487 passed 00:08:04.487 Test: test_process_fabrics_cmd ...[2024-07-15 21:21:37.622769] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on 
qid 0 before CONNECT 00:08:04.487 passed 00:08:04.487 Test: test_connect ...[2024-07-15 21:21:37.623470] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:04.487 [2024-07-15 21:21:37.623606] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:04.487 [2024-07-15 21:21:37.623670] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:04.487 [2024-07-15 21:21:37.623735] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:08:04.487 [2024-07-15 21:21:37.623840] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:04.487 [2024-07-15 21:21:37.623920] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:04.487 [2024-07-15 21:21:37.623976] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:04.487 [2024-07-15 21:21:37.624039] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:04.487 [2024-07-15 21:21:37.624171] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:04.487 [2024-07-15 21:21:37.624289] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:04.487 [2024-07-15 21:21:37.624638] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:04.487 [2024-07-15 21:21:37.624771] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:04.487 [2024-07-15 21:21:37.624875] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:04.487 [2024-07-15 21:21:37.624977] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:04.487 [2024-07-15 21:21:37.625104] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:08:04.487 [2024-07-15 21:21:37.625321] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:08:04.487 [2024-07-15 21:21:37.625412] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:08:04.487 passed 00:08:04.487 Test: test_get_ns_id_desc_list ...passed 00:08:04.487 Test: test_identify_ns ...[2024-07-15 21:21:37.625820] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:04.487 [2024-07-15 21:21:37.626166] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:04.487 [2024-07-15 21:21:37.626310] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 
00:08:04.487 passed 00:08:04.487 Test: test_identify_ns_iocs_specific ...[2024-07-15 21:21:37.626528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:04.487 [2024-07-15 21:21:37.626895] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:04.487 passed 00:08:04.487 Test: test_reservation_write_exclusive ...passed 00:08:04.487 Test: test_reservation_exclusive_access ...passed 00:08:04.487 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:04.487 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:04.487 Test: test_reservation_notification_log_page ...passed 00:08:04.487 Test: test_get_dif_ctx ...passed 00:08:04.487 Test: test_set_get_features ...[2024-07-15 21:21:37.627772] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:04.487 [2024-07-15 21:21:37.627861] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:04.487 [2024-07-15 21:21:37.627920] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:04.487 [2024-07-15 21:21:37.627973] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:04.487 passed 00:08:04.487 Test: test_identify_ctrlr ...passed 00:08:04.487 Test: test_identify_ctrlr_iocs_specific ...passed 00:08:04.487 Test: test_custom_admin_cmd ...passed 00:08:04.487 Test: test_fused_compare_and_write ...[2024-07-15 21:21:37.628700] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:04.487 [2024-07-15 21:21:37.628775] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:04.487 [2024-07-15 21:21:37.628842] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:04.487 passed 00:08:04.487 Test: test_multi_async_event_reqs ...passed 00:08:04.487 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:04.487 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:04.487 Test: test_multi_async_events ...passed 00:08:04.487 Test: test_rae ...passed 00:08:04.487 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:04.487 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:04.487 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-15 21:21:37.629583] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:08:04.487 [2024-07-15 21:21:37.629645] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:08:04.487 passed 00:08:04.487 Test: test_zcopy_read ...passed 00:08:04.487 Test: test_zcopy_write ...passed 00:08:04.487 Test: test_nvmf_property_set ...passed 00:08:04.487 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-15 21:21:37.629940] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:04.487 [2024-07-15 21:21:37.629987] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:04.487 passed 00:08:04.487 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-15 21:21:37.630073] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:04.487 [2024-07-15 21:21:37.630114] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:04.487 [2024-07-15 21:21:37.630175] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:04.487 passed 00:08:04.487 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:08:04.487 Test: test_nvmf_check_qpair_active ...[2024-07-15 21:21:37.630330] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:08:04.487 [2024-07-15 21:21:37.630372] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4744:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:08:04.487 [2024-07-15 21:21:37.630410] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:08:04.487 [2024-07-15 21:21:37.630449] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:08:04.487 [2024-07-15 21:21:37.630486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:08:04.487 passed 00:08:04.487 00:08:04.487 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.487 suites 1 1 n/a 0 0 00:08:04.487 tests 32 32 32 0 0 00:08:04.487 asserts 977 977 977 0 n/a 00:08:04.487 00:08:04.487 Elapsed time = 0.006 seconds 00:08:04.487 21:21:37 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:04.487 00:08:04.487 00:08:04.487 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.487 http://cunit.sourceforge.net/ 00:08:04.487 00:08:04.487 00:08:04.487 Suite: nvmf 00:08:04.487 Test: test_get_rw_params ...passed 00:08:04.487 Test: test_get_rw_ext_params ...passed 00:08:04.487 Test: test_lba_in_range ...passed 00:08:04.487 Test: test_get_dif_ctx ...passed 00:08:04.487 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:04.488 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-15 21:21:37.679983] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:04.488 [2024-07-15 21:21:37.680365] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:04.488 [2024-07-15 21:21:37.680501] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:04.488 passed 00:08:04.488 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-15 21:21:37.680683] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:04.488 [2024-07-15 21:21:37.680786] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 
972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:04.488 passed 00:08:04.488 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-15 21:21:37.680986] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:04.488 [2024-07-15 21:21:37.681056] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:04.488 [2024-07-15 21:21:37.681161] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:04.488 [2024-07-15 21:21:37.681228] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:04.488 passed 00:08:04.488 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:04.488 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:04.488 00:08:04.488 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.488 suites 1 1 n/a 0 0 00:08:04.488 tests 10 10 10 0 0 00:08:04.488 asserts 159 159 159 0 n/a 00:08:04.488 00:08:04.488 Elapsed time = 0.001 seconds 00:08:04.488 21:21:37 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:04.488 00:08:04.488 00:08:04.488 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.488 http://cunit.sourceforge.net/ 00:08:04.488 00:08:04.488 00:08:04.488 Suite: nvmf 00:08:04.488 Test: test_discovery_log ...passed 00:08:04.488 Test: test_discovery_log_with_filters ...passed 00:08:04.488 00:08:04.488 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.488 suites 1 1 n/a 0 0 00:08:04.488 tests 2 2 2 0 0 00:08:04.488 asserts 238 238 238 0 n/a 00:08:04.488 00:08:04.488 Elapsed time = 0.003 seconds 00:08:04.488 21:21:37 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:04.488 00:08:04.488 00:08:04.488 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.488 http://cunit.sourceforge.net/ 00:08:04.488 00:08:04.488 00:08:04.488 Suite: nvmf 00:08:04.488 Test: nvmf_test_create_subsystem ...[2024-07-15 21:21:37.791950] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:04.488 [2024-07-15 21:21:37.792251] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:08:04.488 [2024-07-15 21:21:37.792444] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:04.488 [2024-07-15 21:21:37.792564] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:08:04.488 [2024-07-15 21:21:37.792633] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 
00:08:04.488 [2024-07-15 21:21:37.792683] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:08:04.488 [2024-07-15 21:21:37.792774] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:04.488 [2024-07-15 21:21:37.792857] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:08:04.488 [2024-07-15 21:21:37.792911] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:04.488 [2024-07-15 21:21:37.792972] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:08:04.488 [2024-07-15 21:21:37.793025] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:04.488 [2024-07-15 21:21:37.793086] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:08:04.488 [2024-07-15 21:21:37.793224] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:04.488 [2024-07-15 21:21:37.793356] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:08:04.488 [2024-07-15 21:21:37.793493] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:04.488 [2024-07-15 21:21:37.793561] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:08:04.488 [2024-07-15 21:21:37.793673] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:04.488 [2024-07-15 21:21:37.793740] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:08:04.488 [2024-07-15 21:21:37.793801] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:04.488 [2024-07-15 21:21:37.793869] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:04.488 [2024-07-15 21:21:37.793932] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:04.488 [2024-07-15 21:21:37.793982] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:04.488 passed 00:08:04.488 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-15 21:21:37.794255] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:04.488 [2024-07-15 21:21:37.794322] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:04.488 passed 00:08:04.488 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-15 21:21:37.794654] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2161:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:08:04.488 passed 00:08:04.488 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:04.488 Test: test_spdk_nvmf_ns_visible ...[2024-07-15 21:21:37.795018] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:08:04.488 passed 00:08:04.488 Test: test_reservation_register ...[2024-07-15 21:21:37.795545] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:04.488 [2024-07-15 21:21:37.795682] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3164:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:04.488 passed 00:08:04.488 Test: test_reservation_register_with_ptpl ...passed 00:08:04.488 Test: test_reservation_acquire_preempt_1 ...[2024-07-15 21:21:37.796853] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:04.488 passed 00:08:04.488 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:04.488 Test: test_reservation_release ...[2024-07-15 21:21:37.798740] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:04.488 passed 00:08:04.488 Test: test_reservation_unregister_notification ...[2024-07-15 21:21:37.799057] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:04.488 passed 00:08:04.488 Test: test_reservation_release_notification ...[2024-07-15 21:21:37.799347] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:04.488 passed 00:08:04.488 Test: test_reservation_release_notification_write_exclusive ...[2024-07-15 21:21:37.799681] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:04.488 passed 00:08:04.488 Test: test_reservation_clear_notification ...[2024-07-15 21:21:37.799982] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:04.488 passed 00:08:04.488 Test: test_reservation_preempt_notification ...[2024-07-15 21:21:37.800300] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:04.488 passed 00:08:04.488 Test: test_spdk_nvmf_ns_event ...passed 00:08:04.488 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:04.488 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:04.488 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-15 21:21:37.801322] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:04.488 [2024-07-15 21:21:37.801439] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:08:04.488 passed 00:08:04.488 Test: test_nvmf_ns_reservation_report ...[2024-07-15 21:21:37.801604] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3469:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:04.488 passed 00:08:04.488 Test: test_nvmf_nqn_is_valid ...[2024-07-15 
21:21:37.801711] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:04.488 [2024-07-15 21:21:37.801769] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:1d09868e-ebfe-4fd8-a88c-e4532de602e": uuid is not the correct length 00:08:04.488 [2024-07-15 21:21:37.801811] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:04.488 passed 00:08:04.488 Test: test_nvmf_ns_reservation_restore ...[2024-07-15 21:21:37.801936] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2663:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:04.488 passed 00:08:04.488 Test: test_nvmf_subsystem_state_change ...passed 00:08:04.488 Test: test_nvmf_reservation_custom_ops ...passed 00:08:04.488 00:08:04.488 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.488 suites 1 1 n/a 0 0 00:08:04.489 tests 24 24 24 0 0 00:08:04.489 asserts 499 499 499 0 n/a 00:08:04.489 00:08:04.489 Elapsed time = 0.009 seconds 00:08:04.489 21:21:37 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:04.749 00:08:04.749 00:08:04.749 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.749 http://cunit.sourceforge.net/ 00:08:04.749 00:08:04.749 00:08:04.749 Suite: nvmf 00:08:04.749 Test: test_nvmf_tcp_create ...[2024-07-15 21:21:37.879741] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:04.749 passed 00:08:04.749 Test: test_nvmf_tcp_destroy ...passed 00:08:04.749 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:04.749 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:04.749 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:04.749 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:04.749 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:04.749 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-15 21:21:37.943810] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.943883] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa34db0 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.943949] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa34db0 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.944015] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 passed[2024-07-15 21:21:37.944041] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa34db0 is same with the state(5) to be set 00:08:04.749 00:08:04.749 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:04.749 Test: test_nvmf_tcp_icreq_handle ...[2024-07-15 21:21:37.944183] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2136:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:04.749 [2024-07-15 21:21:37.944260] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:08:04.749 [2024-07-15 21:21:37.944318] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa34db0 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.944353] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2136:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:04.749 [2024-07-15 21:21:37.944386] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa34db0 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.944418] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.944452] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa34db0 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.944488] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.944535] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa34db0 is same with the state(5) to be set 00:08:04.749 passed 00:08:04.749 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:04.749 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-15 21:21:37.944658] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2531:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:04.749 [2024-07-15 21:21:37.944699] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.944734] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa34db0 is same with the state(5) to be set 00:08:04.749 passed 00:08:04.749 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-15 21:21:37.944804] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2263:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffefaa35b10 00:08:04.749 [2024-07-15 21:21:37.944867] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.944917] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.944963] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2320:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffefaa35270 00:08:04.749 [2024-07-15 21:21:37.944997] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945030] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.945062] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:04.749 [2024-07-15 21:21:37.945095] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945137] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.945178] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2312:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:04.749 [2024-07-15 21:21:37.945212] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945247] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.945291] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945328] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.945380] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945412] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.945451] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945485] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.945523] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945554] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.945597] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945630] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 [2024-07-15 21:21:37.945672] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1100:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:04.749 [2024-07-15 21:21:37.945704] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffefaa35270 is same with the state(5) to be set 00:08:04.749 passed 00:08:04.749 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:08:04.749 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:08:04.749 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-15 21:21:37.957847] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:04.749 [2024-07-15 21:21:37.957903] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:08:04.749 [2024-07-15 21:21:37.958101] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:04.749 [2024-07-15 21:21:37.958136] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:04.749 passed 00:08:04.750 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-15 21:21:37.958271] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:04.750 [2024-07-15 21:21:37.958305] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:04.750 passed 00:08:04.750 00:08:04.750 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.750 suites 1 1 n/a 0 0 00:08:04.750 tests 17 17 17 0 0 00:08:04.750 asserts 222 222 222 0 n/a 00:08:04.750 00:08:04.750 Elapsed time = 0.096 seconds 00:08:04.750 21:21:38 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:04.750 00:08:04.750 00:08:04.750 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.750 http://cunit.sourceforge.net/ 00:08:04.750 00:08:04.750 00:08:04.750 Suite: nvmf 00:08:04.750 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:04.750 00:08:04.750 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.750 suites 1 1 n/a 0 0 00:08:04.750 tests 1 1 1 0 0 00:08:04.750 asserts 17 17 17 0 n/a 00:08:04.750 00:08:04.750 Elapsed time = 0.019 seconds 00:08:04.750 00:08:04.750 real 0m0.514s 00:08:04.750 user 0m0.240s 00:08:04.750 sys 0m0.270s 00:08:04.750 21:21:38 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.750 21:21:38 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:08:04.750 ************************************ 00:08:04.750 END TEST unittest_nvmf 00:08:04.750 ************************************ 00:08:05.010 21:21:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:05.010 21:21:38 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:05.010 21:21:38 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:05.010 21:21:38 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:05.010 21:21:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.010 21:21:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.010 21:21:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.010 ************************************ 00:08:05.010 START TEST unittest_nvmf_rdma 00:08:05.010 ************************************ 00:08:05.011 21:21:38 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:05.011 00:08:05.011 00:08:05.011 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.011 http://cunit.sourceforge.net/ 00:08:05.011 00:08:05.011 00:08:05.011 Suite: nvmf 00:08:05.011 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-15 21:21:38.208352] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 
00:08:05.011 [2024-07-15 21:21:38.208767] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:05.011 [2024-07-15 21:21:38.208852] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:05.011 passed 00:08:05.011 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:05.011 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:05.011 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:05.011 Test: test_nvmf_rdma_opts_init ...passed 00:08:05.011 Test: test_nvmf_rdma_request_free_data ...passed 00:08:05.011 Test: test_nvmf_rdma_resources_create ...passed 00:08:05.011 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:05.011 Test: test_nvmf_rdma_resize_cq ...[2024-07-15 21:21:38.211884] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:05.011 Using CQ of insufficient size may lead to CQ overrun 00:08:05.011 [2024-07-15 21:21:38.212013] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:05.011 [2024-07-15 21:21:38.212103] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:05.011 passed 00:08:05.011 00:08:05.011 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.011 suites 1 1 n/a 0 0 00:08:05.011 tests 9 9 9 0 0 00:08:05.011 asserts 579 579 579 0 n/a 00:08:05.011 00:08:05.011 Elapsed time = 0.003 seconds 00:08:05.011 00:08:05.011 real 0m0.059s 00:08:05.011 user 0m0.030s 00:08:05.011 sys 0m0.029s 00:08:05.011 21:21:38 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.011 21:21:38 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:05.011 ************************************ 00:08:05.011 END TEST unittest_nvmf_rdma 00:08:05.011 ************************************ 00:08:05.011 21:21:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:05.011 21:21:38 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:05.011 21:21:38 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:08:05.011 21:21:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.011 21:21:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.011 21:21:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.011 ************************************ 00:08:05.011 START TEST unittest_scsi 00:08:05.011 ************************************ 00:08:05.011 21:21:38 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:08:05.011 21:21:38 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:05.011 00:08:05.011 00:08:05.011 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.011 http://cunit.sourceforge.net/ 00:08:05.011 00:08:05.011 00:08:05.011 Suite: dev_suite 00:08:05.011 Test: dev_destruct_null_dev ...passed 00:08:05.011 Test: dev_destruct_zero_luns ...passed 00:08:05.011 Test: dev_destruct_null_lun ...passed 00:08:05.011 Test: dev_destruct_success ...passed 00:08:05.011 Test: 
dev_construct_num_luns_zero ...[2024-07-15 21:21:38.329484] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:05.011 passed 00:08:05.011 Test: dev_construct_no_lun_zero ...[2024-07-15 21:21:38.330012] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:05.011 passed 00:08:05.011 Test: dev_construct_null_lun ...[2024-07-15 21:21:38.330177] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:05.011 passed 00:08:05.011 Test: dev_construct_name_too_long ...[2024-07-15 21:21:38.330334] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:05.011 passed 00:08:05.011 Test: dev_construct_success ...passed 00:08:05.011 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:05.011 Test: dev_queue_mgmt_task_success ...passed 00:08:05.011 Test: dev_queue_task_success ...passed 00:08:05.011 Test: dev_stop_success ...passed 00:08:05.011 Test: dev_add_port_max_ports ...[2024-07-15 21:21:38.331155] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:05.011 passed 00:08:05.011 Test: dev_add_port_construct_failure1 ...[2024-07-15 21:21:38.331394] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:05.011 passed 00:08:05.011 Test: dev_add_port_construct_failure2 ...[2024-07-15 21:21:38.331601] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:05.011 passed 00:08:05.011 Test: dev_add_port_success1 ...passed 00:08:05.011 Test: dev_add_port_success2 ...passed 00:08:05.011 Test: dev_add_port_success3 ...passed 00:08:05.011 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:05.011 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:05.011 Test: dev_find_port_by_id_success ...passed 00:08:05.011 Test: dev_add_lun_bdev_not_found ...passed 00:08:05.011 Test: dev_add_lun_no_free_lun_id ...[2024-07-15 21:21:38.332571] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:05.011 passed 00:08:05.011 Test: dev_add_lun_success1 ...passed 00:08:05.011 Test: dev_add_lun_success2 ...passed 00:08:05.011 Test: dev_check_pending_tasks ...passed 00:08:05.011 Test: dev_iterate_luns ...passed 00:08:05.011 Test: dev_find_free_lun ...passed 00:08:05.011 00:08:05.011 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.011 suites 1 1 n/a 0 0 00:08:05.011 tests 29 29 29 0 0 00:08:05.011 asserts 97 97 97 0 n/a 00:08:05.011 00:08:05.011 Elapsed time = 0.003 seconds 00:08:05.011 21:21:38 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:05.271 00:08:05.271 00:08:05.271 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.271 http://cunit.sourceforge.net/ 00:08:05.271 00:08:05.271 00:08:05.271 Suite: lun_suite 00:08:05.271 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-15 21:21:38.386662] 
/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:05.271 passed 00:08:05.271 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-15 21:21:38.387258] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:05.271 passed 00:08:05.271 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:05.271 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:05.271 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-15 21:21:38.387747] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:05.271 passed 00:08:05.271 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:05.271 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:05.271 Test: lun_append_task_null_lun_not_supported ...passed 00:08:05.271 Test: lun_execute_scsi_task_pending ...passed 00:08:05.271 Test: lun_execute_scsi_task_complete ...passed 00:08:05.271 Test: lun_execute_scsi_task_resize ...passed 00:08:05.271 Test: lun_destruct_success ...passed 00:08:05.271 Test: lun_construct_null_ctx ...[2024-07-15 21:21:38.388658] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:05.271 passed 00:08:05.271 Test: lun_construct_success ...passed 00:08:05.271 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:08:05.271 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:05.271 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:05.271 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:05.271 00:08:05.271 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.271 suites 1 1 n/a 0 0 00:08:05.271 tests 18 18 18 0 0 00:08:05.271 asserts 153 153 153 0 n/a 00:08:05.271 00:08:05.271 Elapsed time = 0.002 seconds 00:08:05.271 21:21:38 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:05.271 00:08:05.271 00:08:05.271 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.271 http://cunit.sourceforge.net/ 00:08:05.271 00:08:05.271 00:08:05.271 Suite: scsi_suite 00:08:05.271 Test: scsi_init ...passed 00:08:05.271 00:08:05.271 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.271 suites 1 1 n/a 0 0 00:08:05.271 tests 1 1 1 0 0 00:08:05.271 asserts 1 1 1 0 n/a 00:08:05.271 00:08:05.271 Elapsed time = 0.000 seconds 00:08:05.271 21:21:38 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:05.271 00:08:05.271 00:08:05.271 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.271 http://cunit.sourceforge.net/ 00:08:05.271 00:08:05.271 00:08:05.271 Suite: translation_suite 00:08:05.271 Test: mode_select_6_test ...passed 00:08:05.271 Test: mode_select_6_test2 ...passed 00:08:05.271 Test: mode_sense_6_test ...passed 00:08:05.271 Test: mode_sense_10_test ...passed 00:08:05.271 Test: inquiry_evpd_test ...passed 00:08:05.271 Test: inquiry_standard_test ...passed 00:08:05.271 Test: inquiry_overflow_test ...passed 00:08:05.271 Test: task_complete_test ...passed 00:08:05.271 Test: lba_range_test ...passed 00:08:05.271 Test: xfer_len_test ...[2024-07-15 21:21:38.480786] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:05.271 passed 00:08:05.271 Test: xfer_test 
...passed 00:08:05.271 Test: scsi_name_padding_test ...passed 00:08:05.271 Test: get_dif_ctx_test ...passed 00:08:05.271 Test: unmap_split_test ...passed 00:08:05.271 00:08:05.271 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.271 suites 1 1 n/a 0 0 00:08:05.271 tests 14 14 14 0 0 00:08:05.271 asserts 1205 1205 1205 0 n/a 00:08:05.271 00:08:05.271 Elapsed time = 0.005 seconds 00:08:05.271 21:21:38 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:05.271 00:08:05.271 00:08:05.271 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.271 http://cunit.sourceforge.net/ 00:08:05.271 00:08:05.271 00:08:05.271 Suite: reservation_suite 00:08:05.271 Test: test_reservation_register ...[2024-07-15 21:21:38.527710] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:05.271 passed 00:08:05.271 Test: test_reservation_reserve ...[2024-07-15 21:21:38.528376] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:05.271 [2024-07-15 21:21:38.528536] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:05.271 [2024-07-15 21:21:38.528739] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:05.271 passed 00:08:05.271 Test: test_all_registrant_reservation_reserve ...[2024-07-15 21:21:38.528991] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:05.271 passed 00:08:05.271 Test: test_all_registrant_reservation_access ...[2024-07-15 21:21:38.529327] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:05.271 [2024-07-15 21:21:38.529474] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:08:05.271 [2024-07-15 21:21:38.529618] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:08:05.271 passed 00:08:05.272 Test: test_reservation_preempt_non_all_regs ...[2024-07-15 21:21:38.529845] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:05.272 [2024-07-15 21:21:38.530042] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:05.272 passed 00:08:05.272 Test: test_reservation_preempt_all_regs ...[2024-07-15 21:21:38.530324] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:05.272 passed 00:08:05.272 Test: test_reservation_cmds_conflict ...[2024-07-15 21:21:38.530548] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:05.272 [2024-07-15 21:21:38.530661] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:05.272 [2024-07-15 21:21:38.530760] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access 
reservation type rejects command 0x28 00:08:05.272 [2024-07-15 21:21:38.530823] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:05.272 [2024-07-15 21:21:38.530892] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:05.272 [2024-07-15 21:21:38.530951] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:05.272 passed 00:08:05.272 Test: test_scsi2_reserve_release ...passed 00:08:05.272 Test: test_pr_with_scsi2_reserve_release ...[2024-07-15 21:21:38.531183] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:05.272 passed 00:08:05.272 00:08:05.272 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.272 suites 1 1 n/a 0 0 00:08:05.272 tests 9 9 9 0 0 00:08:05.272 asserts 344 344 344 0 n/a 00:08:05.272 00:08:05.272 Elapsed time = 0.002 seconds 00:08:05.272 ************************************ 00:08:05.272 END TEST unittest_scsi 00:08:05.272 ************************************ 00:08:05.272 00:08:05.272 real 0m0.248s 00:08:05.272 user 0m0.159s 00:08:05.272 sys 0m0.084s 00:08:05.272 21:21:38 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.272 21:21:38 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:08:05.272 21:21:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:05.272 21:21:38 unittest -- unit/unittest.sh@278 -- # uname -s 00:08:05.272 21:21:38 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:08:05.272 21:21:38 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:08:05.272 21:21:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.272 21:21:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.272 21:21:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.272 ************************************ 00:08:05.272 START TEST unittest_sock 00:08:05.272 ************************************ 00:08:05.272 21:21:38 unittest.unittest_sock -- common/autotest_common.sh@1123 -- # unittest_sock 00:08:05.272 21:21:38 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:05.532 00:08:05.532 00:08:05.532 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.532 http://cunit.sourceforge.net/ 00:08:05.532 00:08:05.532 00:08:05.532 Suite: sock 00:08:05.532 Test: posix_sock ...passed 00:08:05.532 Test: ut_sock ...passed 00:08:05.532 Test: posix_sock_group ...passed 00:08:05.532 Test: ut_sock_group ...passed 00:08:05.532 Test: posix_sock_group_fairness ...passed 00:08:05.532 Test: _posix_sock_close ...passed 00:08:05.532 Test: sock_get_default_opts ...passed 00:08:05.532 Test: ut_sock_impl_get_set_opts ...passed 00:08:05.532 Test: posix_sock_impl_get_set_opts ...passed 00:08:05.532 Test: ut_sock_map ...passed 00:08:05.532 Test: override_impl_opts ...passed 00:08:05.532 Test: ut_sock_group_get_ctx ...passed 00:08:05.532 00:08:05.532 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.532 suites 1 1 n/a 0 0 00:08:05.532 tests 12 12 12 0 0 00:08:05.532 asserts 349 349 349 0 n/a 00:08:05.532 00:08:05.532 Elapsed time = 0.007 seconds 00:08:05.532 21:21:38 unittest.unittest_sock -- 
unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:05.532 00:08:05.532 00:08:05.532 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.532 http://cunit.sourceforge.net/ 00:08:05.532 00:08:05.532 00:08:05.532 Suite: posix 00:08:05.532 Test: flush ...passed 00:08:05.532 00:08:05.532 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.532 suites 1 1 n/a 0 0 00:08:05.532 tests 1 1 1 0 0 00:08:05.532 asserts 28 28 28 0 n/a 00:08:05.532 00:08:05.532 Elapsed time = 0.000 seconds 00:08:05.532 21:21:38 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:05.532 00:08:05.532 real 0m0.142s 00:08:05.532 user 0m0.068s 00:08:05.532 sys 0m0.050s 00:08:05.532 21:21:38 unittest.unittest_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.532 21:21:38 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:08:05.532 ************************************ 00:08:05.532 END TEST unittest_sock 00:08:05.532 ************************************ 00:08:05.532 21:21:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:05.532 21:21:38 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:05.532 21:21:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.532 21:21:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.532 21:21:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.532 ************************************ 00:08:05.532 START TEST unittest_thread 00:08:05.532 ************************************ 00:08:05.532 21:21:38 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:05.532 00:08:05.532 00:08:05.532 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.532 http://cunit.sourceforge.net/ 00:08:05.532 00:08:05.532 00:08:05.532 Suite: io_channel 00:08:05.532 Test: thread_alloc ...passed 00:08:05.532 Test: thread_send_msg ...passed 00:08:05.532 Test: thread_poller ...passed 00:08:05.532 Test: poller_pause ...passed 00:08:05.532 Test: thread_for_each ...passed 00:08:05.532 Test: for_each_channel_remove ...passed 00:08:05.532 Test: for_each_channel_unreg ...[2024-07-15 21:21:38.858960] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x7fff66478380 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:05.532 passed 00:08:05.532 Test: thread_name ...passed 00:08:05.532 Test: channel ...[2024-07-15 21:21:38.861813] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x55e8d8d7d180 00:08:05.532 passed 00:08:05.532 Test: channel_destroy_races ...passed 00:08:05.532 Test: thread_exit_test ...[2024-07-15 21:21:38.865217] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully 00:08:05.532 passed 00:08:05.532 Test: thread_update_stats_test ...passed 00:08:05.532 Test: nested_channel ...passed 00:08:05.532 Test: device_unregister_and_thread_exit_race ...passed 00:08:05.532 Test: cache_closest_timed_poller ...passed 00:08:05.532 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:05.532 Test: io_device_lookup ...passed 00:08:05.532 Test: spdk_spin ...[2024-07-15 
21:21:38.872913] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:05.532 [2024-07-15 21:21:38.872991] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff66478370 00:08:05.532 [2024-07-15 21:21:38.873084] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:05.532 [2024-07-15 21:21:38.874261] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:05.532 [2024-07-15 21:21:38.874336] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff66478370 00:08:05.532 [2024-07-15 21:21:38.874370] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:05.532 [2024-07-15 21:21:38.874408] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff66478370 00:08:05.532 [2024-07-15 21:21:38.874441] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:05.532 [2024-07-15 21:21:38.874487] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff66478370 00:08:05.532 [2024-07-15 21:21:38.874521] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:05.532 [2024-07-15 21:21:38.874571] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff66478370 00:08:05.532 passed 00:08:05.532 Test: for_each_channel_and_thread_exit_race ...passed 00:08:05.532 Test: for_each_thread_and_thread_exit_race ...passed 00:08:05.532 00:08:05.532 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.533 suites 1 1 n/a 0 0 00:08:05.533 tests 20 20 20 0 0 00:08:05.533 asserts 409 409 409 0 n/a 00:08:05.533 00:08:05.533 Elapsed time = 0.036 seconds 00:08:05.792 00:08:05.792 real 0m0.094s 00:08:05.792 user 0m0.050s 00:08:05.792 sys 0m0.043s 00:08:05.792 21:21:38 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.792 21:21:38 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.792 ************************************ 00:08:05.792 END TEST unittest_thread 00:08:05.792 ************************************ 00:08:05.792 21:21:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:05.792 21:21:38 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:05.792 21:21:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.792 21:21:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.792 21:21:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.792 ************************************ 00:08:05.792 START TEST unittest_iobuf 00:08:05.792 ************************************ 00:08:05.792 21:21:38 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:05.792 00:08:05.792 
00:08:05.792 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.792 http://cunit.sourceforge.net/ 00:08:05.792 00:08:05.792 00:08:05.792 Suite: io_channel 00:08:05.792 Test: iobuf ...passed 00:08:05.792 Test: iobuf_cache ...[2024-07-15 21:21:39.011045] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:05.792 [2024-07-15 21:21:39.011384] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:05.792 [2024-07-15 21:21:39.011587] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:05.792 [2024-07-15 21:21:39.011685] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:05.792 [2024-07-15 21:21:39.011828] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:05.792 [2024-07-15 21:21:39.011930] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:05.792 passed 00:08:05.792 00:08:05.792 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.792 suites 1 1 n/a 0 0 00:08:05.792 tests 2 2 2 0 0 00:08:05.792 asserts 107 107 107 0 n/a 00:08:05.792 00:08:05.792 Elapsed time = 0.007 seconds 00:08:05.792 00:08:05.792 real 0m0.062s 00:08:05.792 user 0m0.039s 00:08:05.792 sys 0m0.023s 00:08:05.792 21:21:39 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.792 21:21:39 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:08:05.792 ************************************ 00:08:05.792 END TEST unittest_iobuf 00:08:05.792 ************************************ 00:08:05.792 21:21:39 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:05.792 21:21:39 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:08:05.792 21:21:39 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.792 21:21:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.792 21:21:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.792 ************************************ 00:08:05.792 START TEST unittest_util 00:08:05.792 ************************************ 00:08:05.792 21:21:39 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:08:05.792 21:21:39 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:05.792 00:08:05.792 00:08:05.792 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.792 http://cunit.sourceforge.net/ 00:08:05.792 00:08:05.792 00:08:05.792 Suite: base64 00:08:05.792 Test: test_base64_get_encoded_strlen ...passed 00:08:05.792 Test: test_base64_get_decoded_len ...passed 00:08:05.792 Test: test_base64_encode ...passed 00:08:05.792 Test: test_base64_decode ...passed 00:08:05.792 Test: test_base64_urlsafe_encode ...passed 
00:08:05.792 Test: test_base64_urlsafe_decode ...passed 00:08:05.792 00:08:05.792 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.792 suites 1 1 n/a 0 0 00:08:05.792 tests 6 6 6 0 0 00:08:05.792 asserts 112 112 112 0 n/a 00:08:05.792 00:08:05.792 Elapsed time = 0.000 seconds 00:08:05.793 21:21:39 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:06.053 00:08:06.053 00:08:06.053 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.053 http://cunit.sourceforge.net/ 00:08:06.053 00:08:06.053 00:08:06.053 Suite: bit_array 00:08:06.053 Test: test_1bit ...passed 00:08:06.053 Test: test_64bit ...passed 00:08:06.053 Test: test_find ...passed 00:08:06.053 Test: test_resize ...passed 00:08:06.053 Test: test_errors ...passed 00:08:06.053 Test: test_count ...passed 00:08:06.053 Test: test_mask_store_load ...passed 00:08:06.053 Test: test_mask_clear ...passed 00:08:06.053 00:08:06.053 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.053 suites 1 1 n/a 0 0 00:08:06.053 tests 8 8 8 0 0 00:08:06.053 asserts 5075 5075 5075 0 n/a 00:08:06.053 00:08:06.053 Elapsed time = 0.001 seconds 00:08:06.053 21:21:39 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:06.053 00:08:06.053 00:08:06.053 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.053 http://cunit.sourceforge.net/ 00:08:06.053 00:08:06.053 00:08:06.053 Suite: cpuset 00:08:06.053 Test: test_cpuset ...passed 00:08:06.053 Test: test_cpuset_parse ...[2024-07-15 21:21:39.210345] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:06.053 [2024-07-15 21:21:39.210835] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:08:06.053 [2024-07-15 21:21:39.211009] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:06.053 [2024-07-15 21:21:39.211176] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:06.053 [2024-07-15 21:21:39.211263] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:06.053 [2024-07-15 21:21:39.211355] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:06.053 [2024-07-15 21:21:39.211410] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:06.053 [2024-07-15 21:21:39.211488] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:06.053 passed 00:08:06.053 Test: test_cpuset_fmt ...passed 00:08:06.053 Test: test_cpuset_foreach ...passed 00:08:06.053 00:08:06.053 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.053 suites 1 1 n/a 0 0 00:08:06.053 tests 4 4 4 0 0 00:08:06.053 asserts 90 90 90 0 n/a 00:08:06.053 00:08:06.053 Elapsed time = 0.004 seconds 00:08:06.053 21:21:39 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:06.053 00:08:06.053 00:08:06.053 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.053 http://cunit.sourceforge.net/ 00:08:06.053 00:08:06.053 
00:08:06.053 Suite: crc16 00:08:06.053 Test: test_crc16_t10dif ...passed 00:08:06.053 Test: test_crc16_t10dif_seed ...passed 00:08:06.053 Test: test_crc16_t10dif_copy ...passed 00:08:06.053 00:08:06.053 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.053 suites 1 1 n/a 0 0 00:08:06.053 tests 3 3 3 0 0 00:08:06.053 asserts 5 5 5 0 n/a 00:08:06.053 00:08:06.053 Elapsed time = 0.000 seconds 00:08:06.053 21:21:39 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:06.053 00:08:06.053 00:08:06.053 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.053 http://cunit.sourceforge.net/ 00:08:06.053 00:08:06.053 00:08:06.053 Suite: crc32_ieee 00:08:06.053 Test: test_crc32_ieee ...passed 00:08:06.053 00:08:06.053 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.053 suites 1 1 n/a 0 0 00:08:06.053 tests 1 1 1 0 0 00:08:06.053 asserts 1 1 1 0 n/a 00:08:06.053 00:08:06.053 Elapsed time = 0.000 seconds 00:08:06.053 21:21:39 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:06.053 00:08:06.053 00:08:06.053 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.053 http://cunit.sourceforge.net/ 00:08:06.053 00:08:06.053 00:08:06.053 Suite: crc32c 00:08:06.053 Test: test_crc32c ...passed 00:08:06.053 Test: test_crc32c_nvme ...passed 00:08:06.053 00:08:06.053 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.053 suites 1 1 n/a 0 0 00:08:06.053 tests 2 2 2 0 0 00:08:06.053 asserts 16 16 16 0 n/a 00:08:06.053 00:08:06.053 Elapsed time = 0.000 seconds 00:08:06.053 21:21:39 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:06.053 00:08:06.053 00:08:06.053 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.053 http://cunit.sourceforge.net/ 00:08:06.053 00:08:06.053 00:08:06.053 Suite: crc64 00:08:06.053 Test: test_crc64_nvme ...passed 00:08:06.053 00:08:06.053 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.053 suites 1 1 n/a 0 0 00:08:06.053 tests 1 1 1 0 0 00:08:06.053 asserts 4 4 4 0 n/a 00:08:06.053 00:08:06.053 Elapsed time = 0.000 seconds 00:08:06.053 21:21:39 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:06.053 00:08:06.053 00:08:06.053 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.054 http://cunit.sourceforge.net/ 00:08:06.054 00:08:06.054 00:08:06.054 Suite: string 00:08:06.054 Test: test_parse_ip_addr ...passed 00:08:06.054 Test: test_str_chomp ...passed 00:08:06.054 Test: test_parse_capacity ...passed 00:08:06.054 Test: test_sprintf_append_realloc ...passed 00:08:06.054 Test: test_strtol ...passed 00:08:06.054 Test: test_strtoll ...passed 00:08:06.054 Test: test_strarray ...passed 00:08:06.054 Test: test_strcpy_replace ...passed 00:08:06.054 00:08:06.054 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.054 suites 1 1 n/a 0 0 00:08:06.054 tests 8 8 8 0 0 00:08:06.054 asserts 161 161 161 0 n/a 00:08:06.054 00:08:06.054 Elapsed time = 0.001 seconds 00:08:06.316 21:21:39 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:06.316 00:08:06.316 00:08:06.316 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.316 http://cunit.sourceforge.net/ 00:08:06.316 00:08:06.316 00:08:06.316 Suite: dif 00:08:06.316 Test: 
dif_generate_and_verify_test ...[2024-07-15 21:21:39.435705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:06.316 [2024-07-15 21:21:39.436188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:06.316 [2024-07-15 21:21:39.436461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:06.316 [2024-07-15 21:21:39.436730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:06.316 [2024-07-15 21:21:39.437027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:06.316 [2024-07-15 21:21:39.437370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:06.316 passed 00:08:06.316 Test: dif_disable_check_test ...[2024-07-15 21:21:39.438335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:06.316 [2024-07-15 21:21:39.438622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:06.316 [2024-07-15 21:21:39.438883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:06.316 passed 00:08:06.316 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-15 21:21:39.439862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:06.316 [2024-07-15 21:21:39.440154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:06.316 [2024-07-15 21:21:39.440446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:06.317 [2024-07-15 21:21:39.440778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:06.317 [2024-07-15 21:21:39.441080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:06.317 [2024-07-15 21:21:39.441378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:06.317 [2024-07-15 21:21:39.441669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:06.317 [2024-07-15 21:21:39.441946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:06.317 [2024-07-15 21:21:39.442230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:06.317 [2024-07-15 21:21:39.442534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:06.317 [2024-07-15 21:21:39.442830] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:06.317 passed 00:08:06.317 Test: dif_apptag_mask_test ...[2024-07-15 21:21:39.443162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:06.317 [2024-07-15 21:21:39.443447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:06.317 passed 00:08:06.317 Test: dif_sec_512_md_0_error_test ...[2024-07-15 21:21:39.443683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:06.317 passed 00:08:06.317 Test: dif_sec_4096_md_0_error_test ...[2024-07-15 21:21:39.443776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:06.317 [2024-07-15 21:21:39.443825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:06.317 passed 00:08:06.317 Test: dif_sec_4100_md_128_error_test ...[2024-07-15 21:21:39.443927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:06.317 [2024-07-15 21:21:39.443971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:06.317 passed 00:08:06.317 Test: dif_guard_seed_test ...passed 00:08:06.317 Test: dif_guard_value_test ...passed 00:08:06.317 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:06.317 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:06.317 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:06.317 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:06.317 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:06.317 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:06.317 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:06.317 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:06.317 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:06.317 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 21:21:39.472135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f94c, Actual=fd4c 00:08:06.317 [2024-07-15 21:21:39.473738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fa21, Actual=fe21 00:08:06.317 
[2024-07-15 21:21:39.475320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.317 [2024-07-15 21:21:39.476864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.317 [2024-07-15 21:21:39.478415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.317 [2024-07-15 21:21:39.479955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.317 [2024-07-15 21:21:39.481556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=3ae8 00:08:06.317 [2024-07-15 21:21:39.482774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe21, Actual=2475 00:08:06.317 [2024-07-15 21:21:39.483919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1eb753ed, Actual=1ab753ed 00:08:06.317 [2024-07-15 21:21:39.485534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=3c574660, Actual=38574660 00:08:06.317 [2024-07-15 21:21:39.487258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.317 [2024-07-15 21:21:39.488830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.317 [2024-07-15 21:21:39.490537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.317 [2024-07-15 21:21:39.492207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.317 [2024-07-15 21:21:39.493943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=a380ca0b 00:08:06.317 [2024-07-15 21:21:39.495235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574660, Actual=f0a730ec 00:08:06.317 [2024-07-15 21:21:39.496428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.317 [2024-07-15 21:21:39.498155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:08:06.317 [2024-07-15 21:21:39.499859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.317 [2024-07-15 21:21:39.501586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.317 [2024-07-15 21:21:39.503270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4000000005d 00:08:06.317 [2024-07-15 21:21:39.505002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, 
Expected=5d, Actual=4000000005d 00:08:06.317 [2024-07-15 21:21:39.506715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.317 [2024-07-15 21:21:39.507984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a266, Actual=cc83b76e96554bd7 00:08:06.317 passed 00:08:06.317 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-15 21:21:39.508707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:06.317 [2024-07-15 21:21:39.508939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:06.317 [2024-07-15 21:21:39.509157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.317 [2024-07-15 21:21:39.509387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.317 [2024-07-15 21:21:39.509631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.317 [2024-07-15 21:21:39.509851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.317 [2024-07-15 21:21:39.510070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ae8 00:08:06.317 [2024-07-15 21:21:39.510234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2475 00:08:06.317 [2024-07-15 21:21:39.510409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:06.317 [2024-07-15 21:21:39.510606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:06.317 [2024-07-15 21:21:39.510818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.511016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.511213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.511404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.511600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a380ca0b 00:08:06.318 [2024-07-15 21:21:39.511743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f0a730ec 00:08:06.318 [2024-07-15 21:21:39.511898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 
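The inject_1_2_4_8 cases appear to inject one-, two-, four-, and eight-bit errors and then verify that the right tag comparison fails. When the injected bits land in a tag field itself, XOR-ing the Expected and Actual values printed in these errors recovers exactly the injected pattern (for example 0xf94c ^ 0xfd4c = 0x0400, a single bit), whereas guard pairs such as Expected=fd4c, Actual=3ae8 correspond to corrupted data, which changes the recomputed CRC wholesale. The throwaway program below just pulls the differing bits out of pairs copied from the errors above; the interpretation of where the bits were injected is an inference from the test names, not from the test source.

    #include <stdint.h>
    #include <stdio.h>

    /* Print the bits that differ between an Expected/Actual pair. */
    static void show_diff(const char *what, uint64_t expected, uint64_t actual)
    {
        printf("%-7s Expected=%llx Actual=%llx differing bits=%llx\n",
               what,
               (unsigned long long)expected,
               (unsigned long long)actual,
               (unsigned long long)(expected ^ actual));
    }

    int main(void)
    {
        /* Values copied from the _dif_verify / _dif_reftag_check errors above. */
        show_diff("Guard",   0xf94c, 0xfd4c);                     /* -> 0x400      */
        show_diff("AppTag",  0x88,   0x488);                      /* -> 0x400      */
        show_diff("RefTag",  0x58,   0x4000058);                  /* -> 0x4000000  */
        show_diff("Guard",   0xa576a7728acc20d3ULL,
                             0xa576a7728ecc20d3ULL);              /* -> 0x4000000  */
        return 0;
    }
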
00:08:06.318 [2024-07-15 21:21:39.512092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:08:06.318 [2024-07-15 21:21:39.512287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.512478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.512680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.318 [2024-07-15 21:21:39.512912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.318 [2024-07-15 21:21:39.513142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.318 [2024-07-15 21:21:39.513321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc83b76e96554bd7 00:08:06.318 passed 00:08:06.318 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-15 21:21:39.513556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:06.318 [2024-07-15 21:21:39.513777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:06.318 [2024-07-15 21:21:39.513993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.514202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.514409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.514606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.514799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ae8 00:08:06.318 [2024-07-15 21:21:39.514947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2475 00:08:06.318 [2024-07-15 21:21:39.515093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:06.318 [2024-07-15 21:21:39.515318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:06.318 [2024-07-15 21:21:39.515534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.515752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.515971] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.516189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.516423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a380ca0b 00:08:06.318 [2024-07-15 21:21:39.516573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f0a730ec 00:08:06.318 [2024-07-15 21:21:39.516764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.318 [2024-07-15 21:21:39.516983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:08:06.318 [2024-07-15 21:21:39.517204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.517435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.517661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.318 [2024-07-15 21:21:39.517877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.318 [2024-07-15 21:21:39.518096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.318 [2024-07-15 21:21:39.518237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc83b76e96554bd7 00:08:06.318 passed 00:08:06.318 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-15 21:21:39.518448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:06.318 [2024-07-15 21:21:39.518653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:06.318 [2024-07-15 21:21:39.518850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.519041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.519254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.519446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.519641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ae8 00:08:06.318 [2024-07-15 21:21:39.519786] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2475 00:08:06.318 [2024-07-15 21:21:39.519936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:06.318 [2024-07-15 21:21:39.520127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:06.318 [2024-07-15 21:21:39.520331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.520530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.520751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.520974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.318 [2024-07-15 21:21:39.521194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a380ca0b 00:08:06.318 [2024-07-15 21:21:39.521372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f0a730ec 00:08:06.318 [2024-07-15 21:21:39.521545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.318 [2024-07-15 21:21:39.521767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:08:06.318 [2024-07-15 21:21:39.521983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.522203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.318 [2024-07-15 21:21:39.522439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.318 [2024-07-15 21:21:39.522637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.318 [2024-07-15 21:21:39.522842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.318 [2024-07-15 21:21:39.522993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc83b76e96554bd7 00:08:06.318 passed 00:08:06.318 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-15 21:21:39.523220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:06.319 [2024-07-15 21:21:39.523438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:06.319 [2024-07-15 21:21:39.523658] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.523882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.524117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.319 [2024-07-15 21:21:39.524333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.319 [2024-07-15 21:21:39.524553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ae8 00:08:06.319 [2024-07-15 21:21:39.524721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2475 00:08:06.319 passed 00:08:06.319 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-15 21:21:39.524968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:06.319 [2024-07-15 21:21:39.525189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:06.319 [2024-07-15 21:21:39.525428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.525646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.525873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.319 [2024-07-15 21:21:39.526085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.319 [2024-07-15 21:21:39.526281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a380ca0b 00:08:06.319 [2024-07-15 21:21:39.526434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f0a730ec 00:08:06.319 [2024-07-15 21:21:39.526613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.319 [2024-07-15 21:21:39.526812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:08:06.319 [2024-07-15 21:21:39.527005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.527200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.527393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.319 [2024-07-15 21:21:39.527588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.319 [2024-07-15 21:21:39.527789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.319 [2024-07-15 21:21:39.527937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc83b76e96554bd7 00:08:06.319 passed 00:08:06.319 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-15 21:21:39.528139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:06.319 [2024-07-15 21:21:39.528339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:06.319 [2024-07-15 21:21:39.528533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.528757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.528996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.319 [2024-07-15 21:21:39.529218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.319 [2024-07-15 21:21:39.529446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ae8 00:08:06.319 [2024-07-15 21:21:39.529609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2475 00:08:06.319 passed 00:08:06.319 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-15 21:21:39.529838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:06.319 [2024-07-15 21:21:39.530059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:06.319 [2024-07-15 21:21:39.530262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.530460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.530640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.319 [2024-07-15 21:21:39.530836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:06.319 [2024-07-15 21:21:39.531033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a380ca0b 00:08:06.319 [2024-07-15 21:21:39.531177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f0a730ec 00:08:06.319 [2024-07-15 
21:21:39.531358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.319 [2024-07-15 21:21:39.531555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:08:06.319 [2024-07-15 21:21:39.531753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.531944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.532141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.319 [2024-07-15 21:21:39.532333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:06.319 [2024-07-15 21:21:39.532537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.319 [2024-07-15 21:21:39.532723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cc83b76e96554bd7 00:08:06.319 passed 00:08:06.319 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:06.319 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:06.319 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:06.319 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:06.319 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:06.319 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:06.319 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:06.319 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:06.319 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:06.319 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 21:21:39.562806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f94c, Actual=fd4c 00:08:06.319 [2024-07-15 21:21:39.563616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1152, Actual=1552 00:08:06.319 [2024-07-15 21:21:39.564402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.565245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.319 [2024-07-15 21:21:39.566097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.319 [2024-07-15 21:21:39.566882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.319 [2024-07-15 21:21:39.567663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=3ae8 00:08:06.319 [2024-07-15 
21:21:39.568444] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=2b15 00:08:06.319 [2024-07-15 21:21:39.569256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1eb753ed, Actual=1ab753ed 00:08:06.319 [2024-07-15 21:21:39.570125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f5591558, Actual=f1591558 00:08:06.320 [2024-07-15 21:21:39.570919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.571808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.572644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.320 [2024-07-15 21:21:39.573513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.320 [2024-07-15 21:21:39.574352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=a380ca0b 00:08:06.320 [2024-07-15 21:21:39.575161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=92cd1314 00:08:06.320 [2024-07-15 21:21:39.575939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.320 [2024-07-15 21:21:39.576748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=68e28e2383cd7652, Actual=68e28e2387cd7652 00:08:06.320 [2024-07-15 21:21:39.577565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.578417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.579214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4000000005d 00:08:06.320 [2024-07-15 21:21:39.580001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4000000005d 00:08:06.320 [2024-07-15 21:21:39.580786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.320 [2024-07-15 21:21:39.581638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=f94852bbb1c95881 00:08:06.320 passed 00:08:06.320 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 21:21:39.581970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f94c, Actual=fd4c 00:08:06.320 [2024-07-15 21:21:39.582189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=72d3, Actual=76d3 00:08:06.320 [2024-07-15 
21:21:39.582403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.582612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.582828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000059 00:08:06.320 [2024-07-15 21:21:39.583055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000059 00:08:06.320 [2024-07-15 21:21:39.583228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=3ae8 00:08:06.320 [2024-07-15 21:21:39.583405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=4894 00:08:06.320 [2024-07-15 21:21:39.583577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1eb753ed, Actual=1ab753ed 00:08:06.320 [2024-07-15 21:21:39.583753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=34d920ad, Actual=30d920ad 00:08:06.320 [2024-07-15 21:21:39.583939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.584119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.584292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000059 00:08:06.320 [2024-07-15 21:21:39.584471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000059 00:08:06.320 [2024-07-15 21:21:39.584647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a380ca0b 00:08:06.320 [2024-07-15 21:21:39.584870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=534d26e1 00:08:06.320 [2024-07-15 21:21:39.585088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.320 [2024-07-15 21:21:39.585291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9d0081b0a522f20d, Actual=9d0081b0a122f20d 00:08:06.320 [2024-07-15 21:21:39.585494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.585692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.585898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000059 00:08:06.320 [2024-07-15 21:21:39.586095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, 
Actual=40000000059 00:08:06.320 [2024-07-15 21:21:39.586305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.320 [2024-07-15 21:21:39.586511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=caa5d289726dcde 00:08:06.320 passed 00:08:06.320 Test: dix_sec_512_md_0_error ...[2024-07-15 21:21:39.586614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:06.320 passed 00:08:06.320 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:08:06.320 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:06.320 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:06.320 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:06.320 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:06.320 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:06.320 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:06.320 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:06.320 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:06.320 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 21:21:39.615318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f94c, Actual=fd4c 00:08:06.320 [2024-07-15 21:21:39.616034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1152, Actual=1552 00:08:06.320 [2024-07-15 21:21:39.616750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.617532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.618330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.320 [2024-07-15 21:21:39.619055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.320 [2024-07-15 21:21:39.619738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=3ae8 00:08:06.320 [2024-07-15 21:21:39.620430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=2b15 00:08:06.320 [2024-07-15 21:21:39.621192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1eb753ed, Actual=1ab753ed 00:08:06.320 [2024-07-15 21:21:39.621982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f5591558, Actual=f1591558 00:08:06.320 [2024-07-15 21:21:39.622740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.623431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.320 [2024-07-15 21:21:39.624119] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.320 [2024-07-15 21:21:39.624851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400005d 00:08:06.320 [2024-07-15 21:21:39.625647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=a380ca0b 00:08:06.320 [2024-07-15 21:21:39.626387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=92cd1314 00:08:06.320 [2024-07-15 21:21:39.627092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.321 [2024-07-15 21:21:39.627779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=68e28e2383cd7652, Actual=68e28e2387cd7652 00:08:06.321 [2024-07-15 21:21:39.628468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.321 [2024-07-15 21:21:39.629238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=488 00:08:06.321 [2024-07-15 21:21:39.630031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4000000005d 00:08:06.321 [2024-07-15 21:21:39.630777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4000000005d 00:08:06.321 [2024-07-15 21:21:39.631485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.321 [2024-07-15 21:21:39.632172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=f94852bbb1c95881 00:08:06.321 passed 00:08:06.321 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 21:21:39.632459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f94c, Actual=fd4c 00:08:06.321 [2024-07-15 21:21:39.632638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=72d3, Actual=76d3 00:08:06.321 [2024-07-15 21:21:39.632861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.321 [2024-07-15 21:21:39.633069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.321 [2024-07-15 21:21:39.633294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000059 00:08:06.321 [2024-07-15 21:21:39.633492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000059 00:08:06.321 [2024-07-15 21:21:39.633693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=3ae8 00:08:06.321 [2024-07-15 21:21:39.633888] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=4894 00:08:06.321 [2024-07-15 21:21:39.634089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1eb753ed, Actual=1ab753ed 00:08:06.321 [2024-07-15 21:21:39.634264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=34d920ad, Actual=30d920ad 00:08:06.321 [2024-07-15 21:21:39.634449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.321 [2024-07-15 21:21:39.634630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.321 [2024-07-15 21:21:39.634803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000059 00:08:06.321 [2024-07-15 21:21:39.634981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000059 00:08:06.321 [2024-07-15 21:21:39.635158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=a380ca0b 00:08:06.321 [2024-07-15 21:21:39.635337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=534d26e1 00:08:06.321 [2024-07-15 21:21:39.635522] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:08:06.321 [2024-07-15 21:21:39.635702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9d0081b0a522f20d, Actual=9d0081b0a122f20d 00:08:06.321 [2024-07-15 21:21:39.635876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.321 [2024-07-15 21:21:39.636055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:06.321 [2024-07-15 21:21:39.636229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000059 00:08:06.321 [2024-07-15 21:21:39.636409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000059 00:08:06.321 [2024-07-15 21:21:39.636588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=fbf8b6a4899956fc 00:08:06.321 [2024-07-15 21:21:39.636797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=caa5d289726dcde 00:08:06.321 passed 00:08:06.321 Test: set_md_interleave_iovs_test ...passed 00:08:06.321 Test: set_md_interleave_iovs_split_test ...passed 00:08:06.321 Test: dif_generate_stream_pi_16_test ...passed 00:08:06.321 Test: dif_generate_stream_test ...passed 00:08:06.321 Test: set_md_interleave_iovs_alignment_test ...passed 00:08:06.321 Test: dif_generate_split_test ...[2024-07-15 21:21:39.642204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: 
Buffer overflow will occur. 00:08:06.321 passed 00:08:06.321 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:06.321 Test: dif_verify_split_test ...passed 00:08:06.321 Test: dif_verify_stream_multi_segments_test ...passed 00:08:06.321 Test: update_crc32c_pi_16_test ...passed 00:08:06.321 Test: update_crc32c_test ...passed 00:08:06.321 Test: dif_update_crc32c_split_test ...passed 00:08:06.321 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:06.321 Test: get_range_with_md_test ...passed 00:08:06.321 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:06.321 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:06.321 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:06.321 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:06.321 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:06.321 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:06.321 Test: dif_generate_and_verify_unmap_test ...passed 00:08:06.321 00:08:06.321 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.321 suites 1 1 n/a 0 0 00:08:06.321 tests 79 79 79 0 0 00:08:06.321 asserts 3584 3584 3584 0 n/a 00:08:06.321 00:08:06.321 Elapsed time = 0.230 seconds 00:08:06.590 21:21:39 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:06.590 00:08:06.590 00:08:06.590 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.590 http://cunit.sourceforge.net/ 00:08:06.590 00:08:06.590 00:08:06.590 Suite: iov 00:08:06.590 Test: test_single_iov ...passed 00:08:06.590 Test: test_simple_iov ...passed 00:08:06.590 Test: test_complex_iov ...passed 00:08:06.590 Test: test_iovs_to_buf ...passed 00:08:06.590 Test: test_buf_to_iovs ...passed 00:08:06.590 Test: test_memset ...passed 00:08:06.590 Test: test_iov_one ...passed 00:08:06.590 Test: test_iov_xfer ...passed 00:08:06.590 00:08:06.590 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.591 suites 1 1 n/a 0 0 00:08:06.591 tests 8 8 8 0 0 00:08:06.591 asserts 156 156 156 0 n/a 00:08:06.591 00:08:06.591 Elapsed time = 0.000 seconds 00:08:06.591 21:21:39 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:06.591 00:08:06.591 00:08:06.591 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.591 http://cunit.sourceforge.net/ 00:08:06.591 00:08:06.591 00:08:06.591 Suite: math 00:08:06.591 Test: test_serial_number_arithmetic ...passed 00:08:06.591 Suite: erase 00:08:06.591 Test: test_memset_s ...passed 00:08:06.591 00:08:06.591 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.591 suites 2 2 n/a 0 0 00:08:06.591 tests 2 2 2 0 0 00:08:06.591 asserts 18 18 18 0 n/a 00:08:06.591 00:08:06.591 Elapsed time = 0.000 seconds 00:08:06.591 21:21:39 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:06.591 00:08:06.591 00:08:06.591 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.591 http://cunit.sourceforge.net/ 00:08:06.591 00:08:06.591 00:08:06.591 Suite: pipe 00:08:06.591 Test: test_create_destroy ...passed 00:08:06.591 Test: test_write_get_buffer ...passed 00:08:06.591 Test: test_write_advance ...passed 00:08:06.591 Test: test_read_get_buffer ...passed 00:08:06.591 Test: test_read_advance ...passed 00:08:06.591 Test: test_data ...passed 
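The iov_ut suite just above (test_iovs_to_buf, test_buf_to_iovs, test_iov_xfer and friends) covers copying between a contiguous buffer and a scatter/gather list of struct iovec entries. A generic sketch of the gather direction is shown here; it assumes nothing about SPDK's own helpers, and the function name is made up for illustration.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/uio.h>   /* struct iovec */

    /* Copy the contents of an iovec array into one contiguous buffer.
     * Returns the number of bytes copied, or -1 if the buffer is too small. */
    static ssize_t iovs_to_buf(const struct iovec *iovs, int iovcnt,
                               void *buf, size_t buflen)
    {
        size_t off = 0;

        for (int i = 0; i < iovcnt; i++) {
            if (off + iovs[i].iov_len > buflen) {
                return -1;
            }
            memcpy((char *)buf + off, iovs[i].iov_base, iovs[i].iov_len);
            off += iovs[i].iov_len;
        }
        return (ssize_t)off;
    }

    int main(void)
    {
        char a[] = "scatter", b[] = "-", c[] = "gather";
        struct iovec iovs[] = {
            { .iov_base = a, .iov_len = strlen(a) },
            { .iov_base = b, .iov_len = strlen(b) },
            { .iov_base = c, .iov_len = strlen(c) },
        };
        char flat[64] = { 0 };

        ssize_t n = iovs_to_buf(iovs, 3, flat, sizeof(flat));
        printf("copied %zd bytes: %s\n", n, flat);
        return 0;
    }
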
00:08:06.591 00:08:06.591 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.591 suites 1 1 n/a 0 0 00:08:06.591 tests 6 6 6 0 0 00:08:06.591 asserts 251 251 251 0 n/a 00:08:06.591 00:08:06.591 Elapsed time = 0.000 seconds 00:08:06.591 21:21:39 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:06.591 00:08:06.591 00:08:06.591 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.591 http://cunit.sourceforge.net/ 00:08:06.591 00:08:06.591 00:08:06.591 Suite: xor 00:08:06.591 Test: test_xor_gen ...passed 00:08:06.591 00:08:06.591 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.591 suites 1 1 n/a 0 0 00:08:06.591 tests 1 1 1 0 0 00:08:06.591 asserts 17 17 17 0 n/a 00:08:06.591 00:08:06.591 Elapsed time = 0.009 seconds 00:08:06.591 00:08:06.591 real 0m0.794s 00:08:06.591 user 0m0.512s 00:08:06.591 sys 0m0.276s 00:08:06.591 21:21:39 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.591 21:21:39 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:08:06.591 ************************************ 00:08:06.591 END TEST unittest_util 00:08:06.591 ************************************ 00:08:06.867 21:21:39 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:06.867 21:21:39 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:06.867 21:21:39 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:06.867 21:21:39 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.867 21:21:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.867 21:21:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:06.867 ************************************ 00:08:06.867 START TEST unittest_vhost 00:08:06.867 ************************************ 00:08:06.867 21:21:39 unittest.unittest_vhost -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:06.867 00:08:06.867 00:08:06.867 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.867 http://cunit.sourceforge.net/ 00:08:06.867 00:08:06.867 00:08:06.867 Suite: vhost_suite 00:08:06.867 Test: desc_to_iov_test ...[2024-07-15 21:21:40.001405] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:06.867 passed 00:08:06.867 Test: create_controller_test ...[2024-07-15 21:21:40.006035] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:06.867 [2024-07-15 21:21:40.006199] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:06.867 [2024-07-15 21:21:40.006358] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:06.867 [2024-07-15 21:21:40.006467] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:06.867 [2024-07-15 21:21:40.006540] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:06.867 [2024-07-15 21:21:40.006924] 
/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:08:06.867 [2024-07-15 21:21:40.011516] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:06.867 passed 00:08:06.867 Test: session_find_by_vid_test ...passed 00:08:06.867 Test: remove_controller_test ...[2024-07-15 21:21:40.018041] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:06.867 passed 00:08:06.867 Test: vq_avail_ring_get_test ...passed 00:08:06.867 Test: vq_packed_ring_test ...passed 00:08:06.867 Test: vhost_blk_construct_test ...passed 00:08:06.867 00:08:06.867 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.867 suites 1 1 n/a 0 0 00:08:06.867 tests 7 7 7 0 0 00:08:06.867 asserts 147 147 147 0 n/a 00:08:06.867 00:08:06.867 Elapsed time = 0.021 seconds 00:08:06.867 00:08:06.867 real 0m0.085s 00:08:06.867 user 0m0.036s 00:08:06.867 sys 0m0.045s 00:08:06.867 21:21:40 unittest.unittest_vhost -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.867 21:21:40 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:08:06.867 ************************************ 00:08:06.867 END TEST unittest_vhost 00:08:06.867 ************************************ 00:08:06.867 21:21:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:06.867 21:21:40 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:06.867 21:21:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.867 21:21:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.867 21:21:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:06.867 ************************************ 00:08:06.867 START TEST unittest_dma 00:08:06.867 ************************************ 00:08:06.867 21:21:40 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:06.867 00:08:06.867 00:08:06.867 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.867 http://cunit.sourceforge.net/ 00:08:06.867 00:08:06.867 00:08:06.867 Suite: dma_suite 00:08:06.867 Test: test_dma ...[2024-07-15 21:21:40.146846] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:06.867 passed 00:08:06.867 00:08:06.867 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.867 suites 1 1 n/a 0 0 00:08:06.867 tests 1 1 1 0 0 00:08:06.867 asserts 54 54 54 0 n/a 00:08:06.867 00:08:06.867 Elapsed time = 0.001 seconds 00:08:06.867 00:08:06.867 real 0m0.049s 00:08:06.867 user 0m0.020s 00:08:06.867 sys 0m0.029s 00:08:06.867 21:21:40 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.867 21:21:40 
unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:08:06.867 ************************************ 00:08:06.867 END TEST unittest_dma 00:08:06.867 ************************************ 00:08:06.867 21:21:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:06.867 21:21:40 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:08:06.867 21:21:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.867 21:21:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.867 21:21:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:07.126 ************************************ 00:08:07.126 START TEST unittest_init 00:08:07.126 ************************************ 00:08:07.126 21:21:40 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:08:07.126 21:21:40 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:07.126 00:08:07.126 00:08:07.126 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.126 http://cunit.sourceforge.net/ 00:08:07.126 00:08:07.126 00:08:07.126 Suite: subsystem_suite 00:08:07.126 Test: subsystem_sort_test_depends_on_single ...passed 00:08:07.126 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:07.126 Test: subsystem_sort_test_missing_dependency ...[2024-07-15 21:21:40.265438] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:07.126 [2024-07-15 21:21:40.265806] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:07.126 passed 00:08:07.126 00:08:07.126 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.126 suites 1 1 n/a 0 0 00:08:07.126 tests 3 3 3 0 0 00:08:07.126 asserts 20 20 20 0 n/a 00:08:07.126 00:08:07.126 Elapsed time = 0.001 seconds 00:08:07.126 00:08:07.126 real 0m0.045s 00:08:07.126 user 0m0.022s 00:08:07.126 sys 0m0.022s 00:08:07.126 21:21:40 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.126 21:21:40 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:08:07.126 ************************************ 00:08:07.126 END TEST unittest_init 00:08:07.126 ************************************ 00:08:07.126 21:21:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:07.126 21:21:40 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:07.126 21:21:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.126 21:21:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.126 21:21:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:07.126 ************************************ 00:08:07.126 START TEST unittest_keyring 00:08:07.126 ************************************ 00:08:07.126 21:21:40 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:07.126 00:08:07.126 00:08:07.126 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.126 http://cunit.sourceforge.net/ 00:08:07.126 00:08:07.126 00:08:07.126 Suite: keyring 00:08:07.126 Test: test_keyring_add_remove ...[2024-07-15 21:21:40.372740] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:07.126 [2024-07-15 
21:21:40.373225] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:07.126 [2024-07-15 21:21:40.373407] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:07.126 passed 00:08:07.126 Test: test_keyring_get_put ...passed 00:08:07.126 00:08:07.126 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.126 suites 1 1 n/a 0 0 00:08:07.126 tests 2 2 2 0 0 00:08:07.126 asserts 44 44 44 0 n/a 00:08:07.126 00:08:07.126 Elapsed time = 0.001 seconds 00:08:07.126 00:08:07.126 real 0m0.047s 00:08:07.126 user 0m0.024s 00:08:07.126 sys 0m0.022s 00:08:07.126 21:21:40 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.126 21:21:40 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:08:07.126 ************************************ 00:08:07.126 END TEST unittest_keyring 00:08:07.126 ************************************ 00:08:07.126 21:21:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:07.126 21:21:40 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:08:07.126 21:21:40 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:07.126 21:21:40 unittest -- unit/unittest.sh@293 -- # hostname 00:08:07.126 21:21:40 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:07.384 geninfo: WARNING: invalid characters removed from testname! 
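In outline, the coverage steps logged above and below reduce to a capture / merge / filter / render sequence. The lines that follow are a condensed sketch of that flow, not the literal unittest.sh code: the cov() helper, the SPDK_OUT shorthand for the /home/vagrant/spdk_repo/spdk/../output/ut_coverage directory, and the filter loop are introduced here only for brevity, and some of the --rc genhtml_* flags present in the logged commands are omitted.

    # Sketch of the lcov/genhtml coverage flow (abbreviated; the exact commands are echoed in the log)
    SPDK_OUT=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
    cov() { lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q "$@"; }
    # 1. capture the counters accumulated while the unit tests ran
    cov -d . -c -t "$(hostname)" -o "$SPDK_OUT/ut_cov_test.info"
    # 2. merge with the baseline captured before the tests, so files never executed still appear in the report
    cov -a "$SPDK_OUT/ut_cov_base.info" -a "$SPDK_OUT/ut_cov_test.info" -o "$SPDK_OUT/ut_cov_total.info"
    cov -a "$SPDK_OUT/ut_cov_total.info" -o "$SPDK_OUT/ut_cov_unit.info"
    # 3. strip directories that are not library code (app, dpdk, examples, rte_vhost, test)
    for sub in app dpdk examples lib/vhost/rte_vhost test; do
        cov -r "$SPDK_OUT/ut_cov_unit.info" "/home/vagrant/spdk_repo/spdk/$sub/*" -o "$SPDK_OUT/ut_cov_unit.info"
    done
    rm -f "$SPDK_OUT/ut_cov_base.info" "$SPDK_OUT/ut_cov_test.info"
    # 4. render the HTML report that the coverage summary further below points to
    genhtml "$SPDK_OUT/ut_cov_unit.info" --output-directory "$SPDK_OUT"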
00:08:39.450 21:22:09 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:41.989 21:22:14 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:44.528 21:22:17 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:47.083 21:22:20 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:49.621 21:22:22 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:52.169 21:22:25 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:54.701 21:22:27 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:57.229 21:22:30 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:57.229 21:22:30 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:57.795 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 
00:08:57.795 Found 324 entries. 00:08:57.795 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:57.795 Writing .css and .png files. 00:08:57.795 Generating output. 00:08:57.795 Processing file include/linux/virtio_ring.h 00:08:58.053 Processing file include/spdk/base64.h 00:08:58.053 Processing file include/spdk/endian.h 00:08:58.053 Processing file include/spdk/mmio.h 00:08:58.053 Processing file include/spdk/histogram_data.h 00:08:58.053 Processing file include/spdk/bdev_module.h 00:08:58.053 Processing file include/spdk/trace.h 00:08:58.053 Processing file include/spdk/nvme.h 00:08:58.053 Processing file include/spdk/nvme_spec.h 00:08:58.053 Processing file include/spdk/nvmf_transport.h 00:08:58.053 Processing file include/spdk/util.h 00:08:58.053 Processing file include/spdk/thread.h 00:08:58.311 Processing file include/spdk_internal/rdma_utils.h 00:08:58.311 Processing file include/spdk_internal/virtio.h 00:08:58.311 Processing file include/spdk_internal/sgl.h 00:08:58.311 Processing file include/spdk_internal/sock.h 00:08:58.311 Processing file include/spdk_internal/nvme_tcp.h 00:08:58.311 Processing file include/spdk_internal/utf.h 00:08:58.311 Processing file lib/accel/accel_rpc.c 00:08:58.311 Processing file lib/accel/accel.c 00:08:58.311 Processing file lib/accel/accel_sw.c 00:08:58.572 Processing file lib/bdev/scsi_nvme.c 00:08:58.572 Processing file lib/bdev/bdev_rpc.c 00:08:58.572 Processing file lib/bdev/bdev_zone.c 00:08:58.572 Processing file lib/bdev/bdev.c 00:08:58.572 Processing file lib/bdev/part.c 00:08:58.831 Processing file lib/blob/zeroes.c 00:08:58.831 Processing file lib/blob/request.c 00:08:58.831 Processing file lib/blob/blob_bs_dev.c 00:08:58.831 Processing file lib/blob/blobstore.h 00:08:58.831 Processing file lib/blob/blobstore.c 00:08:59.090 Processing file lib/blobfs/tree.c 00:08:59.090 Processing file lib/blobfs/blobfs.c 00:08:59.090 Processing file lib/conf/conf.c 00:08:59.090 Processing file lib/dma/dma.c 00:08:59.349 Processing file lib/env_dpdk/pci_virtio.c 00:08:59.349 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:59.349 Processing file lib/env_dpdk/env.c 00:08:59.349 Processing file lib/env_dpdk/pci_dpdk.c 00:08:59.349 Processing file lib/env_dpdk/sigbus_handler.c 00:08:59.349 Processing file lib/env_dpdk/pci_idxd.c 00:08:59.349 Processing file lib/env_dpdk/pci_ioat.c 00:08:59.349 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:59.349 Processing file lib/env_dpdk/pci.c 00:08:59.349 Processing file lib/env_dpdk/pci_event.c 00:08:59.349 Processing file lib/env_dpdk/threads.c 00:08:59.349 Processing file lib/env_dpdk/pci_vmd.c 00:08:59.349 Processing file lib/env_dpdk/init.c 00:08:59.349 Processing file lib/env_dpdk/memory.c 00:08:59.608 Processing file lib/event/scheduler_static.c 00:08:59.608 Processing file lib/event/reactor.c 00:08:59.608 Processing file lib/event/app_rpc.c 00:08:59.608 Processing file lib/event/log_rpc.c 00:08:59.608 Processing file lib/event/app.c 00:08:59.867 Processing file lib/ftl/ftl_writer.c 00:08:59.867 Processing file lib/ftl/ftl_sb.c 00:08:59.867 Processing file lib/ftl/ftl_l2p.c 00:08:59.867 Processing file lib/ftl/ftl_io.h 00:08:59.867 Processing file lib/ftl/ftl_p2l.c 00:08:59.867 Processing file lib/ftl/ftl_nv_cache.c 00:08:59.867 Processing file lib/ftl/ftl_l2p_cache.c 00:08:59.867 Processing file lib/ftl/ftl_io.c 00:08:59.867 Processing file lib/ftl/ftl_core.c 00:08:59.867 Processing file lib/ftl/ftl_trace.c 00:08:59.867 Processing file lib/ftl/ftl_layout.c 00:08:59.867 Processing file 
lib/ftl/ftl_band.c 00:08:59.867 Processing file lib/ftl/ftl_debug.h 00:08:59.867 Processing file lib/ftl/ftl_writer.h 00:08:59.867 Processing file lib/ftl/ftl_rq.c 00:08:59.867 Processing file lib/ftl/ftl_init.c 00:08:59.867 Processing file lib/ftl/ftl_core.h 00:08:59.867 Processing file lib/ftl/ftl_nv_cache.h 00:08:59.867 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:59.867 Processing file lib/ftl/ftl_debug.c 00:08:59.867 Processing file lib/ftl/ftl_reloc.c 00:08:59.867 Processing file lib/ftl/ftl_band_ops.c 00:08:59.867 Processing file lib/ftl/ftl_band.h 00:08:59.867 Processing file lib/ftl/ftl_l2p_flat.c 00:09:00.124 Processing file lib/ftl/base/ftl_base_dev.c 00:09:00.124 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:00.400 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:00.400 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:00.400 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:00.400 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:00.400 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:09:00.400 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:00.400 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:09:00.400 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:00.400 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:09:00.400 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:09:00.400 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:00.664 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:00.664 Processing file lib/ftl/utils/ftl_mempool.c 00:09:00.664 Processing file lib/ftl/utils/ftl_property.c 00:09:00.664 Processing file lib/ftl/utils/ftl_property.h 00:09:00.664 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:00.664 Processing file lib/ftl/utils/ftl_df.h 00:09:00.664 Processing file lib/ftl/utils/ftl_conf.c 00:09:00.664 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:00.664 Processing file lib/ftl/utils/ftl_md.c 00:09:00.664 Processing file lib/idxd/idxd.c 00:09:00.664 Processing file lib/idxd/idxd_user.c 00:09:00.664 Processing file lib/idxd/idxd_internal.h 00:09:00.923 Processing file lib/init/subsystem_rpc.c 00:09:00.923 Processing file lib/init/subsystem.c 00:09:00.923 Processing file lib/init/rpc.c 00:09:00.923 Processing file lib/init/json_config.c 00:09:00.923 Processing file lib/ioat/ioat.c 00:09:00.923 Processing file lib/ioat/ioat_internal.h 00:09:01.183 Processing file lib/iscsi/portal_grp.c 00:09:01.183 Processing file lib/iscsi/iscsi.h 00:09:01.183 Processing file lib/iscsi/iscsi_rpc.c 00:09:01.183 Processing file lib/iscsi/iscsi_subsystem.c 00:09:01.183 Processing file lib/iscsi/task.h 00:09:01.183 Processing file lib/iscsi/init_grp.c 00:09:01.183 Processing file lib/iscsi/md5.c 00:09:01.183 Processing file lib/iscsi/task.c 00:09:01.183 Processing file lib/iscsi/conn.c 00:09:01.183 Processing file 
lib/iscsi/iscsi.c 00:09:01.183 Processing file lib/iscsi/tgt_node.c 00:09:01.183 Processing file lib/iscsi/param.c 00:09:01.442 Processing file lib/json/json_util.c 00:09:01.442 Processing file lib/json/json_write.c 00:09:01.442 Processing file lib/json/json_parse.c 00:09:01.442 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:01.442 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:01.442 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:01.442 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:01.442 Processing file lib/keyring/keyring_rpc.c 00:09:01.442 Processing file lib/keyring/keyring.c 00:09:01.442 Processing file lib/log/log_flags.c 00:09:01.442 Processing file lib/log/log.c 00:09:01.442 Processing file lib/log/log_deprecated.c 00:09:01.700 Processing file lib/lvol/lvol.c 00:09:01.700 Processing file lib/nbd/nbd.c 00:09:01.700 Processing file lib/nbd/nbd_rpc.c 00:09:01.700 Processing file lib/notify/notify.c 00:09:01.700 Processing file lib/notify/notify_rpc.c 00:09:02.638 Processing file lib/nvme/nvme_ctrlr.c 00:09:02.638 Processing file lib/nvme/nvme_qpair.c 00:09:02.638 Processing file lib/nvme/nvme_pcie_common.c 00:09:02.638 Processing file lib/nvme/nvme_transport.c 00:09:02.638 Processing file lib/nvme/nvme_ns.c 00:09:02.638 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:02.638 Processing file lib/nvme/nvme_quirks.c 00:09:02.638 Processing file lib/nvme/nvme_zns.c 00:09:02.638 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:02.638 Processing file lib/nvme/nvme_opal.c 00:09:02.638 Processing file lib/nvme/nvme_poll_group.c 00:09:02.638 Processing file lib/nvme/nvme_discovery.c 00:09:02.638 Processing file lib/nvme/nvme_cuse.c 00:09:02.638 Processing file lib/nvme/nvme_fabric.c 00:09:02.638 Processing file lib/nvme/nvme_stubs.c 00:09:02.638 Processing file lib/nvme/nvme_ns_cmd.c 00:09:02.638 Processing file lib/nvme/nvme_auth.c 00:09:02.638 Processing file lib/nvme/nvme.c 00:09:02.638 Processing file lib/nvme/nvme_rdma.c 00:09:02.638 Processing file lib/nvme/nvme_io_msg.c 00:09:02.638 Processing file lib/nvme/nvme_tcp.c 00:09:02.638 Processing file lib/nvme/nvme_pcie_internal.h 00:09:02.638 Processing file lib/nvme/nvme_internal.h 00:09:02.638 Processing file lib/nvme/nvme_pcie.c 00:09:02.638 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:02.897 Processing file lib/nvmf/ctrlr_bdev.c 00:09:02.897 Processing file lib/nvmf/auth.c 00:09:02.897 Processing file lib/nvmf/nvmf.c 00:09:02.897 Processing file lib/nvmf/nvmf_internal.h 00:09:02.897 Processing file lib/nvmf/ctrlr_discovery.c 00:09:02.897 Processing file lib/nvmf/stubs.c 00:09:02.897 Processing file lib/nvmf/transport.c 00:09:02.897 Processing file lib/nvmf/tcp.c 00:09:02.897 Processing file lib/nvmf/rdma.c 00:09:02.897 Processing file lib/nvmf/subsystem.c 00:09:02.897 Processing file lib/nvmf/ctrlr.c 00:09:02.898 Processing file lib/nvmf/nvmf_rpc.c 00:09:02.898 Processing file lib/rdma_provider/common.c 00:09:02.898 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:09:03.156 Processing file lib/rdma_utils/rdma_utils.c 00:09:03.156 Processing file lib/rpc/rpc.c 00:09:03.416 Processing file lib/scsi/dev.c 00:09:03.416 Processing file lib/scsi/scsi_rpc.c 00:09:03.416 Processing file lib/scsi/scsi_pr.c 00:09:03.416 Processing file lib/scsi/lun.c 00:09:03.416 Processing file lib/scsi/scsi.c 00:09:03.416 Processing file lib/scsi/port.c 00:09:03.416 Processing file lib/scsi/scsi_bdev.c 00:09:03.416 Processing file lib/scsi/task.c 00:09:03.416 Processing file lib/sock/sock_rpc.c 00:09:03.416 
Processing file lib/sock/sock.c 00:09:03.675 Processing file lib/thread/iobuf.c 00:09:03.675 Processing file lib/thread/thread.c 00:09:03.675 Processing file lib/trace/trace_rpc.c 00:09:03.675 Processing file lib/trace/trace_flags.c 00:09:03.675 Processing file lib/trace/trace.c 00:09:03.675 Processing file lib/trace_parser/trace.cpp 00:09:03.675 Processing file lib/ut/ut.c 00:09:03.934 Processing file lib/ut_mock/mock.c 00:09:04.192 Processing file lib/util/uuid.c 00:09:04.192 Processing file lib/util/hexlify.c 00:09:04.192 Processing file lib/util/crc32c.c 00:09:04.192 Processing file lib/util/base64.c 00:09:04.192 Processing file lib/util/bit_array.c 00:09:04.192 Processing file lib/util/math.c 00:09:04.192 Processing file lib/util/crc16.c 00:09:04.192 Processing file lib/util/fd_group.c 00:09:04.192 Processing file lib/util/file.c 00:09:04.192 Processing file lib/util/pipe.c 00:09:04.192 Processing file lib/util/crc32.c 00:09:04.192 Processing file lib/util/dif.c 00:09:04.192 Processing file lib/util/strerror_tls.c 00:09:04.192 Processing file lib/util/crc32_ieee.c 00:09:04.192 Processing file lib/util/xor.c 00:09:04.192 Processing file lib/util/fd.c 00:09:04.192 Processing file lib/util/cpuset.c 00:09:04.192 Processing file lib/util/crc64.c 00:09:04.192 Processing file lib/util/net.c 00:09:04.192 Processing file lib/util/zipf.c 00:09:04.192 Processing file lib/util/iov.c 00:09:04.192 Processing file lib/util/string.c 00:09:04.192 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:04.192 Processing file lib/vfio_user/host/vfio_user.c 00:09:04.454 Processing file lib/vhost/vhost_blk.c 00:09:04.454 Processing file lib/vhost/vhost_scsi.c 00:09:04.454 Processing file lib/vhost/rte_vhost_user.c 00:09:04.454 Processing file lib/vhost/vhost.c 00:09:04.454 Processing file lib/vhost/vhost_internal.h 00:09:04.454 Processing file lib/vhost/vhost_rpc.c 00:09:04.713 Processing file lib/virtio/virtio.c 00:09:04.713 Processing file lib/virtio/virtio_vfio_user.c 00:09:04.713 Processing file lib/virtio/virtio_pci.c 00:09:04.713 Processing file lib/virtio/virtio_vhost_user.c 00:09:04.713 Processing file lib/vmd/led.c 00:09:04.713 Processing file lib/vmd/vmd.c 00:09:04.713 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:04.713 Processing file module/accel/dsa/accel_dsa.c 00:09:04.973 Processing file module/accel/error/accel_error_rpc.c 00:09:04.973 Processing file module/accel/error/accel_error.c 00:09:04.973 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:04.973 Processing file module/accel/iaa/accel_iaa.c 00:09:04.973 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:04.973 Processing file module/accel/ioat/accel_ioat.c 00:09:05.235 Processing file module/bdev/aio/bdev_aio.c 00:09:05.235 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:05.235 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:05.235 Processing file module/bdev/delay/vbdev_delay.c 00:09:05.235 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:05.235 Processing file module/bdev/error/vbdev_error.c 00:09:05.506 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:05.506 Processing file module/bdev/ftl/bdev_ftl.c 00:09:05.506 Processing file module/bdev/gpt/gpt.c 00:09:05.506 Processing file module/bdev/gpt/gpt.h 00:09:05.506 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:05.506 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:05.506 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:05.765 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:05.765 Processing file 
module/bdev/lvol/vbdev_lvol.c 00:09:05.765 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:05.765 Processing file module/bdev/malloc/bdev_malloc.c 00:09:05.765 Processing file module/bdev/null/bdev_null_rpc.c 00:09:05.765 Processing file module/bdev/null/bdev_null.c 00:09:06.024 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:06.024 Processing file module/bdev/nvme/nvme_rpc.c 00:09:06.024 Processing file module/bdev/nvme/bdev_nvme.c 00:09:06.024 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:06.024 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:06.024 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:06.024 Processing file module/bdev/nvme/vbdev_opal.c 00:09:06.283 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:06.283 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:06.543 Processing file module/bdev/raid/concat.c 00:09:06.543 Processing file module/bdev/raid/raid1.c 00:09:06.543 Processing file module/bdev/raid/raid0.c 00:09:06.543 Processing file module/bdev/raid/raid5f.c 00:09:06.543 Processing file module/bdev/raid/bdev_raid.c 00:09:06.543 Processing file module/bdev/raid/bdev_raid.h 00:09:06.543 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:06.543 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:06.543 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:06.543 Processing file module/bdev/split/vbdev_split.c 00:09:06.802 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:06.802 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:06.802 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:06.802 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:06.802 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:06.802 Processing file module/blob/bdev/blob_bdev.c 00:09:07.062 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:07.062 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:07.062 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:07.062 Processing file module/event/subsystems/accel/accel.c 00:09:07.062 Processing file module/event/subsystems/bdev/bdev.c 00:09:07.062 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:07.062 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:07.321 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:07.321 Processing file module/event/subsystems/keyring/keyring.c 00:09:07.321 Processing file module/event/subsystems/nbd/nbd.c 00:09:07.321 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:07.321 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:07.581 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:07.581 Processing file module/event/subsystems/scsi/scsi.c 00:09:07.581 Processing file module/event/subsystems/sock/sock.c 00:09:07.581 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:07.581 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:07.840 Processing file module/event/subsystems/vmd/vmd.c 00:09:07.840 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:07.840 Processing file module/keyring/file/keyring.c 00:09:07.840 Processing file module/keyring/file/keyring_rpc.c 00:09:07.840 Processing file module/keyring/linux/keyring_rpc.c 00:09:07.840 Processing file module/keyring/linux/keyring.c 00:09:08.099 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:08.099 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:08.099 Processing file 
module/scheduler/gscheduler/gscheduler.c 00:09:08.358 Processing file module/sock/posix/posix.c 00:09:08.358 Writing directory view page. 00:09:08.358 Overall coverage rate: 00:09:08.358 lines......: 38.9% (40918 of 105195 lines) 00:09:08.358 functions..: 42.4% (3727 of 8793 functions) 00:09:08.358 00:09:08.358 00:09:08.358 ===================== 00:09:08.358 All unit tests passed 00:09:08.358 ===================== 00:09:08.358 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:08.358 21:22:41 unittest -- unit/unittest.sh@305 -- # set +x 00:09:08.358 00:09:08.358 00:09:08.358 ************************************ 00:09:08.358 END TEST unittest 00:09:08.358 ************************************ 00:09:08.358 00:09:08.358 real 3m40.111s 00:09:08.358 user 3m12.254s 00:09:08.358 sys 0m19.208s 00:09:08.358 21:22:41 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.358 21:22:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:08.359 21:22:41 -- common/autotest_common.sh@1142 -- # return 0 00:09:08.359 21:22:41 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:08.359 21:22:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:08.359 21:22:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:08.359 21:22:41 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:08.359 21:22:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.359 21:22:41 -- common/autotest_common.sh@10 -- # set +x 00:09:08.359 21:22:41 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:08.359 21:22:41 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:08.359 21:22:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:08.359 21:22:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.359 21:22:41 -- common/autotest_common.sh@10 -- # set +x 00:09:08.359 ************************************ 00:09:08.359 START TEST env 00:09:08.359 ************************************ 00:09:08.359 21:22:41 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:08.359 * Looking for test storage... 
00:09:08.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:08.359 21:22:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:08.359 21:22:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:08.359 21:22:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.359 21:22:41 env -- common/autotest_common.sh@10 -- # set +x 00:09:08.359 ************************************ 00:09:08.359 START TEST env_memory 00:09:08.359 ************************************ 00:09:08.359 21:22:41 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:08.359 00:09:08.359 00:09:08.359 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.359 http://cunit.sourceforge.net/ 00:09:08.359 00:09:08.359 00:09:08.359 Suite: memory 00:09:08.618 Test: alloc and free memory map ...[2024-07-15 21:22:41.742613] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:08.618 passed 00:09:08.618 Test: mem map translation ...[2024-07-15 21:22:41.776229] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:08.618 [2024-07-15 21:22:41.776466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:08.618 [2024-07-15 21:22:41.776639] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:08.618 [2024-07-15 21:22:41.776788] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:08.618 passed 00:09:08.618 Test: mem map registration ...[2024-07-15 21:22:41.834411] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:08.618 [2024-07-15 21:22:41.834679] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:08.618 passed 00:09:08.618 Test: mem map adjacent registrations ...passed 00:09:08.618 00:09:08.618 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.618 suites 1 1 n/a 0 0 00:09:08.618 tests 4 4 4 0 0 00:09:08.618 asserts 152 152 152 0 n/a 00:09:08.618 00:09:08.618 Elapsed time = 0.197 seconds 00:09:08.618 00:09:08.618 real 0m0.245s 00:09:08.618 user 0m0.218s 00:09:08.618 sys 0m0.025s 00:09:08.618 21:22:41 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.618 21:22:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:08.618 ************************************ 00:09:08.618 END TEST env_memory 00:09:08.618 ************************************ 00:09:08.618 21:22:41 env -- common/autotest_common.sh@1142 -- # return 0 00:09:08.618 21:22:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:08.618 21:22:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:08.618 21:22:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.618 21:22:41 env -- common/autotest_common.sh@10 -- # set +x 00:09:08.878 ************************************ 00:09:08.878 START TEST env_vtophys 
00:09:08.878 ************************************ 00:09:08.878 21:22:41 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:08.878 EAL: lib.eal log level changed from notice to debug 00:09:08.878 EAL: Detected lcore 0 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 1 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 2 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 3 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 4 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 5 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 6 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 7 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 8 as core 0 on socket 0 00:09:08.878 EAL: Detected lcore 9 as core 0 on socket 0 00:09:08.878 EAL: Maximum logical cores by configuration: 128 00:09:08.878 EAL: Detected CPU lcores: 10 00:09:08.878 EAL: Detected NUMA nodes: 1 00:09:08.878 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:08.878 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:08.878 EAL: Checking presence of .so 'librte_eal.so' 00:09:08.878 EAL: Detected static linkage of DPDK 00:09:08.878 EAL: No shared files mode enabled, IPC will be disabled 00:09:08.878 EAL: Selected IOVA mode 'PA' 00:09:08.878 EAL: Probing VFIO support... 00:09:08.878 EAL: IOMMU type 1 (Type 1) is supported 00:09:08.878 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:08.878 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:08.878 EAL: VFIO support initialized 00:09:08.878 EAL: Ask a virtual area of 0x2e000 bytes 00:09:08.878 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:08.878 EAL: Setting up physically contiguous memory... 00:09:08.878 EAL: Setting maximum number of open files to 1048576 00:09:08.879 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:08.879 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:08.879 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.879 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:08.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:08.879 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.879 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:08.879 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:08.879 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.879 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:08.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:08.879 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.879 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:08.879 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:08.879 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.879 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:08.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:08.879 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.879 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:08.879 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:08.879 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.879 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:08.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:08.879 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.879 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:08.879 EAL: 
VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:08.879 EAL: Hugepages will be freed exactly as allocated. 00:09:08.879 EAL: No shared files mode enabled, IPC is disabled 00:09:08.879 EAL: No shared files mode enabled, IPC is disabled 00:09:08.879 EAL: TSC frequency is ~2290000 KHz 00:09:08.879 EAL: Main lcore 0 is ready (tid=7f556490ea40;cpuset=[0]) 00:09:08.879 EAL: Trying to obtain current memory policy. 00:09:08.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.879 EAL: Restoring previous memory policy: 0 00:09:08.879 EAL: request: mp_malloc_sync 00:09:08.879 EAL: No shared files mode enabled, IPC is disabled 00:09:08.879 EAL: Heap on socket 0 was expanded by 2MB 00:09:08.879 EAL: No shared files mode enabled, IPC is disabled 00:09:08.879 EAL: Mem event callback 'spdk:(nil)' registered 00:09:08.879 00:09:08.879 00:09:08.879 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.879 http://cunit.sourceforge.net/ 00:09:08.879 00:09:08.879 00:09:08.879 Suite: components_suite 00:09:09.448 Test: vtophys_malloc_test ...passed 00:09:09.448 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:09.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.448 EAL: Restoring previous memory policy: 0 00:09:09.448 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.448 EAL: request: mp_malloc_sync 00:09:09.448 EAL: No shared files mode enabled, IPC is disabled 00:09:09.448 EAL: Heap on socket 0 was expanded by 4MB 00:09:09.448 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.448 EAL: request: mp_malloc_sync 00:09:09.448 EAL: No shared files mode enabled, IPC is disabled 00:09:09.448 EAL: Heap on socket 0 was shrunk by 4MB 00:09:09.448 EAL: Trying to obtain current memory policy. 00:09:09.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.448 EAL: Restoring previous memory policy: 0 00:09:09.448 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.448 EAL: request: mp_malloc_sync 00:09:09.448 EAL: No shared files mode enabled, IPC is disabled 00:09:09.448 EAL: Heap on socket 0 was expanded by 6MB 00:09:09.448 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.448 EAL: request: mp_malloc_sync 00:09:09.448 EAL: No shared files mode enabled, IPC is disabled 00:09:09.448 EAL: Heap on socket 0 was shrunk by 6MB 00:09:09.449 EAL: Trying to obtain current memory policy. 00:09:09.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.449 EAL: Restoring previous memory policy: 0 00:09:09.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.449 EAL: request: mp_malloc_sync 00:09:09.449 EAL: No shared files mode enabled, IPC is disabled 00:09:09.449 EAL: Heap on socket 0 was expanded by 10MB 00:09:09.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.449 EAL: request: mp_malloc_sync 00:09:09.449 EAL: No shared files mode enabled, IPC is disabled 00:09:09.449 EAL: Heap on socket 0 was shrunk by 10MB 00:09:09.449 EAL: Trying to obtain current memory policy. 
00:09:09.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.449 EAL: Restoring previous memory policy: 0 00:09:09.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.449 EAL: request: mp_malloc_sync 00:09:09.449 EAL: No shared files mode enabled, IPC is disabled 00:09:09.449 EAL: Heap on socket 0 was expanded by 18MB 00:09:09.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.449 EAL: request: mp_malloc_sync 00:09:09.449 EAL: No shared files mode enabled, IPC is disabled 00:09:09.449 EAL: Heap on socket 0 was shrunk by 18MB 00:09:09.449 EAL: Trying to obtain current memory policy. 00:09:09.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.449 EAL: Restoring previous memory policy: 0 00:09:09.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.449 EAL: request: mp_malloc_sync 00:09:09.449 EAL: No shared files mode enabled, IPC is disabled 00:09:09.449 EAL: Heap on socket 0 was expanded by 34MB 00:09:09.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.449 EAL: request: mp_malloc_sync 00:09:09.449 EAL: No shared files mode enabled, IPC is disabled 00:09:09.449 EAL: Heap on socket 0 was shrunk by 34MB 00:09:09.449 EAL: Trying to obtain current memory policy. 00:09:09.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.708 EAL: Restoring previous memory policy: 0 00:09:09.708 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.708 EAL: request: mp_malloc_sync 00:09:09.708 EAL: No shared files mode enabled, IPC is disabled 00:09:09.708 EAL: Heap on socket 0 was expanded by 66MB 00:09:09.708 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.708 EAL: request: mp_malloc_sync 00:09:09.708 EAL: No shared files mode enabled, IPC is disabled 00:09:09.708 EAL: Heap on socket 0 was shrunk by 66MB 00:09:09.708 EAL: Trying to obtain current memory policy. 00:09:09.708 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.973 EAL: Restoring previous memory policy: 0 00:09:09.973 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.973 EAL: request: mp_malloc_sync 00:09:09.973 EAL: No shared files mode enabled, IPC is disabled 00:09:09.973 EAL: Heap on socket 0 was expanded by 130MB 00:09:09.973 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.973 EAL: request: mp_malloc_sync 00:09:09.973 EAL: No shared files mode enabled, IPC is disabled 00:09:09.973 EAL: Heap on socket 0 was shrunk by 130MB 00:09:10.233 EAL: Trying to obtain current memory policy. 00:09:10.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.233 EAL: Restoring previous memory policy: 0 00:09:10.233 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.233 EAL: request: mp_malloc_sync 00:09:10.233 EAL: No shared files mode enabled, IPC is disabled 00:09:10.233 EAL: Heap on socket 0 was expanded by 258MB 00:09:10.801 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.801 EAL: request: mp_malloc_sync 00:09:10.801 EAL: No shared files mode enabled, IPC is disabled 00:09:10.801 EAL: Heap on socket 0 was shrunk by 258MB 00:09:11.370 EAL: Trying to obtain current memory policy. 
00:09:11.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:11.370 EAL: Restoring previous memory policy: 0 00:09:11.370 EAL: Calling mem event callback 'spdk:(nil)' 00:09:11.370 EAL: request: mp_malloc_sync 00:09:11.370 EAL: No shared files mode enabled, IPC is disabled 00:09:11.370 EAL: Heap on socket 0 was expanded by 514MB 00:09:12.351 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.351 EAL: request: mp_malloc_sync 00:09:12.351 EAL: No shared files mode enabled, IPC is disabled 00:09:12.351 EAL: Heap on socket 0 was shrunk by 514MB 00:09:13.288 EAL: Trying to obtain current memory policy. 00:09:13.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:13.545 EAL: Restoring previous memory policy: 0 00:09:13.545 EAL: Calling mem event callback 'spdk:(nil)' 00:09:13.545 EAL: request: mp_malloc_sync 00:09:13.545 EAL: No shared files mode enabled, IPC is disabled 00:09:13.545 EAL: Heap on socket 0 was expanded by 1026MB 00:09:15.459 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.459 EAL: request: mp_malloc_sync 00:09:15.459 EAL: No shared files mode enabled, IPC is disabled 00:09:15.459 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:17.372 passed 00:09:17.372 00:09:17.372 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.372 suites 1 1 n/a 0 0 00:09:17.372 tests 2 2 2 0 0 00:09:17.372 asserts 6545 6545 6545 0 n/a 00:09:17.372 00:09:17.372 Elapsed time = 8.003 seconds 00:09:17.372 EAL: Calling mem event callback 'spdk:(nil)' 00:09:17.372 EAL: request: mp_malloc_sync 00:09:17.372 EAL: No shared files mode enabled, IPC is disabled 00:09:17.372 EAL: Heap on socket 0 was shrunk by 2MB 00:09:17.372 EAL: No shared files mode enabled, IPC is disabled 00:09:17.372 EAL: No shared files mode enabled, IPC is disabled 00:09:17.372 EAL: No shared files mode enabled, IPC is disabled 00:09:17.372 ************************************ 00:09:17.372 END TEST env_vtophys 00:09:17.372 ************************************ 00:09:17.372 00:09:17.372 real 0m8.327s 00:09:17.372 user 0m7.391s 00:09:17.372 sys 0m0.785s 00:09:17.372 21:22:50 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.372 21:22:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:17.372 21:22:50 env -- common/autotest_common.sh@1142 -- # return 0 00:09:17.372 21:22:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:17.372 21:22:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:17.372 21:22:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.372 21:22:50 env -- common/autotest_common.sh@10 -- # set +x 00:09:17.372 ************************************ 00:09:17.372 START TEST env_pci 00:09:17.372 ************************************ 00:09:17.372 21:22:50 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:17.372 00:09:17.372 00:09:17.372 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.372 http://cunit.sourceforge.net/ 00:09:17.372 00:09:17.372 00:09:17.372 Suite: pci 00:09:17.372 Test: pci_hook ...[2024-07-15 21:22:50.428620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 111100 has claimed it 00:09:17.372 EAL: Cannot find device (10000:00:01.0) 00:09:17.372 EAL: Failed to attach device on primary process 00:09:17.372 passed 00:09:17.372 00:09:17.372 Run Summary: Type Total Ran Passed Failed 
Inactive 00:09:17.372 suites 1 1 n/a 0 0 00:09:17.372 tests 1 1 1 0 0 00:09:17.372 asserts 25 25 25 0 n/a 00:09:17.372 00:09:17.372 Elapsed time = 0.008 seconds 00:09:17.372 00:09:17.372 real 0m0.125s 00:09:17.372 user 0m0.085s 00:09:17.372 sys 0m0.040s 00:09:17.372 21:22:50 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.372 21:22:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:17.372 ************************************ 00:09:17.372 END TEST env_pci 00:09:17.372 ************************************ 00:09:17.372 21:22:50 env -- common/autotest_common.sh@1142 -- # return 0 00:09:17.372 21:22:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:17.372 21:22:50 env -- env/env.sh@15 -- # uname 00:09:17.372 21:22:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:17.372 21:22:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:17.372 21:22:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:17.372 21:22:50 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:17.372 21:22:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.372 21:22:50 env -- common/autotest_common.sh@10 -- # set +x 00:09:17.372 ************************************ 00:09:17.372 START TEST env_dpdk_post_init 00:09:17.372 ************************************ 00:09:17.372 21:22:50 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:17.372 EAL: Detected CPU lcores: 10 00:09:17.372 EAL: Detected NUMA nodes: 1 00:09:17.372 EAL: Detected static linkage of DPDK 00:09:17.372 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:17.372 EAL: Selected IOVA mode 'PA' 00:09:17.372 EAL: VFIO support initialized 00:09:17.631 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:17.631 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:17.631 Starting DPDK initialization... 00:09:17.631 Starting SPDK post initialization... 00:09:17.631 SPDK NVMe probe 00:09:17.631 Attaching to 0000:00:10.0 00:09:17.631 Attached to 0000:00:10.0 00:09:17.631 Cleaning up... 
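The probe output above comes from running the prebuilt env_dpdk_post_init binary with the arguments echoed earlier in this test. Reproducing the same check by hand on a node where the repo is already built looks roughly like the sketch below; it assumes hugepages and the NVMe device binding have already been prepared (typically via scripts/setup.sh), which is an assumption about the host setup rather than something shown in this log.

    # Sketch: re-running the DPDK post-init probe manually (assumes the repo is built and
    # hugepages/device binding were set up beforehand, e.g. with scripts/setup.sh as root)
    cd /home/vagrant/spdk_repo/spdk
    sudo ./scripts/setup.sh        # bind NVMe devices (0000:00:10.0 here) to a userspace driver
    ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000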
00:09:17.631 00:09:17.631 real 0m0.269s 00:09:17.631 user 0m0.068s 00:09:17.631 sys 0m0.102s 00:09:17.631 21:22:50 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.631 21:22:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:17.631 ************************************ 00:09:17.631 END TEST env_dpdk_post_init 00:09:17.631 ************************************ 00:09:17.631 21:22:50 env -- common/autotest_common.sh@1142 -- # return 0 00:09:17.631 21:22:50 env -- env/env.sh@26 -- # uname 00:09:17.631 21:22:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:17.631 21:22:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:17.631 21:22:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:17.631 21:22:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.631 21:22:50 env -- common/autotest_common.sh@10 -- # set +x 00:09:17.631 ************************************ 00:09:17.631 START TEST env_mem_callbacks 00:09:17.631 ************************************ 00:09:17.631 21:22:50 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:17.631 EAL: Detected CPU lcores: 10 00:09:17.631 EAL: Detected NUMA nodes: 1 00:09:17.631 EAL: Detected static linkage of DPDK 00:09:17.631 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:17.631 EAL: Selected IOVA mode 'PA' 00:09:17.631 EAL: VFIO support initialized 00:09:17.890 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:17.890 00:09:17.890 00:09:17.890 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.890 http://cunit.sourceforge.net/ 00:09:17.890 00:09:17.890 00:09:17.890 Suite: memory 00:09:17.890 Test: test ... 
00:09:17.890 register 0x200000200000 2097152 00:09:17.890 malloc 3145728 00:09:17.890 register 0x200000400000 4194304 00:09:17.890 buf 0x2000004fffc0 len 3145728 PASSED 00:09:17.890 malloc 64 00:09:17.890 buf 0x2000004ffec0 len 64 PASSED 00:09:17.890 malloc 4194304 00:09:17.890 register 0x200000800000 6291456 00:09:17.890 buf 0x2000009fffc0 len 4194304 PASSED 00:09:17.890 free 0x2000004fffc0 3145728 00:09:17.890 free 0x2000004ffec0 64 00:09:17.890 unregister 0x200000400000 4194304 PASSED 00:09:17.890 free 0x2000009fffc0 4194304 00:09:17.890 unregister 0x200000800000 6291456 PASSED 00:09:17.890 malloc 8388608 00:09:17.890 register 0x200000400000 10485760 00:09:17.890 buf 0x2000005fffc0 len 8388608 PASSED 00:09:17.890 free 0x2000005fffc0 8388608 00:09:17.890 unregister 0x200000400000 10485760 PASSED 00:09:17.890 passed 00:09:17.890 00:09:17.890 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.890 suites 1 1 n/a 0 0 00:09:17.890 tests 1 1 1 0 0 00:09:17.890 asserts 15 15 15 0 n/a 00:09:17.890 00:09:17.890 Elapsed time = 0.090 seconds 00:09:17.890 ************************************ 00:09:17.890 END TEST env_mem_callbacks 00:09:17.890 ************************************ 00:09:17.890 00:09:17.890 real 0m0.321s 00:09:17.890 user 0m0.138s 00:09:17.890 sys 0m0.082s 00:09:17.890 21:22:51 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.890 21:22:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:18.150 21:22:51 env -- common/autotest_common.sh@1142 -- # return 0 00:09:18.150 00:09:18.150 real 0m9.724s 00:09:18.150 user 0m8.111s 00:09:18.150 sys 0m1.283s 00:09:18.150 21:22:51 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.150 ************************************ 00:09:18.150 END TEST env 00:09:18.150 ************************************ 00:09:18.150 21:22:51 env -- common/autotest_common.sh@10 -- # set +x 00:09:18.150 21:22:51 -- common/autotest_common.sh@1142 -- # return 0 00:09:18.150 21:22:51 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:18.150 21:22:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.150 21:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.150 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:09:18.150 ************************************ 00:09:18.150 START TEST rpc 00:09:18.150 ************************************ 00:09:18.150 21:22:51 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:18.150 * Looking for test storage... 00:09:18.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:18.150 21:22:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=111232 00:09:18.150 21:22:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:18.150 21:22:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:18.150 21:22:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 111232 00:09:18.150 21:22:51 rpc -- common/autotest_common.sh@829 -- # '[' -z 111232 ']' 00:09:18.150 21:22:51 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.150 21:22:51 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.150 21:22:51 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:18.150 21:22:51 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.150 21:22:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.407 [2024-07-15 21:22:51.539157] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:18.407 [2024-07-15 21:22:51.539379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111232 ] 00:09:18.407 [2024-07-15 21:22:51.697491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.664 [2024-07-15 21:22:51.955328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:18.664 [2024-07-15 21:22:51.955469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 111232' to capture a snapshot of events at runtime. 00:09:18.664 [2024-07-15 21:22:51.955536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.664 [2024-07-15 21:22:51.955576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.664 [2024-07-15 21:22:51.955620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid111232 for offline analysis/debug. 00:09:18.664 [2024-07-15 21:22:51.955742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.069 21:22:53 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.069 21:22:53 rpc -- common/autotest_common.sh@862 -- # return 0 00:09:20.069 21:22:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:20.069 21:22:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:20.069 21:22:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:20.069 21:22:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:20.069 21:22:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.069 21:22:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.069 21:22:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.069 ************************************ 00:09:20.069 START TEST rpc_integrity 00:09:20.069 ************************************ 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.069 
21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:20.069 { 00:09:20.069 "name": "Malloc0", 00:09:20.069 "aliases": [ 00:09:20.069 "fb17dbd6-93fe-4151-aae8-924b2ba86b5e" 00:09:20.069 ], 00:09:20.069 "product_name": "Malloc disk", 00:09:20.069 "block_size": 512, 00:09:20.069 "num_blocks": 16384, 00:09:20.069 "uuid": "fb17dbd6-93fe-4151-aae8-924b2ba86b5e", 00:09:20.069 "assigned_rate_limits": { 00:09:20.069 "rw_ios_per_sec": 0, 00:09:20.069 "rw_mbytes_per_sec": 0, 00:09:20.069 "r_mbytes_per_sec": 0, 00:09:20.069 "w_mbytes_per_sec": 0 00:09:20.069 }, 00:09:20.069 "claimed": false, 00:09:20.069 "zoned": false, 00:09:20.069 "supported_io_types": { 00:09:20.069 "read": true, 00:09:20.069 "write": true, 00:09:20.069 "unmap": true, 00:09:20.069 "flush": true, 00:09:20.069 "reset": true, 00:09:20.069 "nvme_admin": false, 00:09:20.069 "nvme_io": false, 00:09:20.069 "nvme_io_md": false, 00:09:20.069 "write_zeroes": true, 00:09:20.069 "zcopy": true, 00:09:20.069 "get_zone_info": false, 00:09:20.069 "zone_management": false, 00:09:20.069 "zone_append": false, 00:09:20.069 "compare": false, 00:09:20.069 "compare_and_write": false, 00:09:20.069 "abort": true, 00:09:20.069 "seek_hole": false, 00:09:20.069 "seek_data": false, 00:09:20.069 "copy": true, 00:09:20.069 "nvme_iov_md": false 00:09:20.069 }, 00:09:20.069 "memory_domains": [ 00:09:20.069 { 00:09:20.069 "dma_device_id": "system", 00:09:20.069 "dma_device_type": 1 00:09:20.069 }, 00:09:20.069 { 00:09:20.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.069 "dma_device_type": 2 00:09:20.069 } 00:09:20.069 ], 00:09:20.069 "driver_specific": {} 00:09:20.069 } 00:09:20.069 ]' 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:20.069 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.069 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.069 [2024-07-15 21:22:53.267909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:20.069 [2024-07-15 21:22:53.268056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.069 [2024-07-15 21:22:53.268145] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:20.069 [2024-07-15 21:22:53.268194] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.070 [2024-07-15 21:22:53.270488] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.070 [2024-07-15 21:22:53.270585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:20.070 Passthru0 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:20.070 { 00:09:20.070 "name": "Malloc0", 00:09:20.070 "aliases": [ 00:09:20.070 "fb17dbd6-93fe-4151-aae8-924b2ba86b5e" 00:09:20.070 ], 00:09:20.070 "product_name": "Malloc disk", 00:09:20.070 "block_size": 512, 00:09:20.070 "num_blocks": 16384, 00:09:20.070 "uuid": "fb17dbd6-93fe-4151-aae8-924b2ba86b5e", 00:09:20.070 "assigned_rate_limits": { 00:09:20.070 "rw_ios_per_sec": 0, 00:09:20.070 "rw_mbytes_per_sec": 0, 00:09:20.070 "r_mbytes_per_sec": 0, 00:09:20.070 "w_mbytes_per_sec": 0 00:09:20.070 }, 00:09:20.070 "claimed": true, 00:09:20.070 "claim_type": "exclusive_write", 00:09:20.070 "zoned": false, 00:09:20.070 "supported_io_types": { 00:09:20.070 "read": true, 00:09:20.070 "write": true, 00:09:20.070 "unmap": true, 00:09:20.070 "flush": true, 00:09:20.070 "reset": true, 00:09:20.070 "nvme_admin": false, 00:09:20.070 "nvme_io": false, 00:09:20.070 "nvme_io_md": false, 00:09:20.070 "write_zeroes": true, 00:09:20.070 "zcopy": true, 00:09:20.070 "get_zone_info": false, 00:09:20.070 "zone_management": false, 00:09:20.070 "zone_append": false, 00:09:20.070 "compare": false, 00:09:20.070 "compare_and_write": false, 00:09:20.070 "abort": true, 00:09:20.070 "seek_hole": false, 00:09:20.070 "seek_data": false, 00:09:20.070 "copy": true, 00:09:20.070 "nvme_iov_md": false 00:09:20.070 }, 00:09:20.070 "memory_domains": [ 00:09:20.070 { 00:09:20.070 "dma_device_id": "system", 00:09:20.070 "dma_device_type": 1 00:09:20.070 }, 00:09:20.070 { 00:09:20.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.070 "dma_device_type": 2 00:09:20.070 } 00:09:20.070 ], 00:09:20.070 "driver_specific": {} 00:09:20.070 }, 00:09:20.070 { 00:09:20.070 "name": "Passthru0", 00:09:20.070 "aliases": [ 00:09:20.070 "a89ad1cb-f0d1-5e05-9b75-fc0265485a9f" 00:09:20.070 ], 00:09:20.070 "product_name": "passthru", 00:09:20.070 "block_size": 512, 00:09:20.070 "num_blocks": 16384, 00:09:20.070 "uuid": "a89ad1cb-f0d1-5e05-9b75-fc0265485a9f", 00:09:20.070 "assigned_rate_limits": { 00:09:20.070 "rw_ios_per_sec": 0, 00:09:20.070 "rw_mbytes_per_sec": 0, 00:09:20.070 "r_mbytes_per_sec": 0, 00:09:20.070 "w_mbytes_per_sec": 0 00:09:20.070 }, 00:09:20.070 "claimed": false, 00:09:20.070 "zoned": false, 00:09:20.070 "supported_io_types": { 00:09:20.070 "read": true, 00:09:20.070 "write": true, 00:09:20.070 "unmap": true, 00:09:20.070 "flush": true, 00:09:20.070 "reset": true, 00:09:20.070 "nvme_admin": false, 00:09:20.070 "nvme_io": false, 00:09:20.070 "nvme_io_md": false, 00:09:20.070 "write_zeroes": true, 00:09:20.070 "zcopy": true, 00:09:20.070 "get_zone_info": false, 00:09:20.070 "zone_management": false, 00:09:20.070 "zone_append": false, 00:09:20.070 "compare": false, 00:09:20.070 "compare_and_write": false, 00:09:20.070 "abort": true, 00:09:20.070 "seek_hole": false, 00:09:20.070 "seek_data": false, 00:09:20.070 "copy": true, 00:09:20.070 "nvme_iov_md": false 00:09:20.070 }, 00:09:20.070 "memory_domains": [ 00:09:20.070 { 00:09:20.070 "dma_device_id": "system", 00:09:20.070 "dma_device_type": 1 00:09:20.070 }, 00:09:20.070 { 00:09:20.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.070 "dma_device_type": 
2 00:09:20.070 } 00:09:20.070 ], 00:09:20.070 "driver_specific": { 00:09:20.070 "passthru": { 00:09:20.070 "name": "Passthru0", 00:09:20.070 "base_bdev_name": "Malloc0" 00:09:20.070 } 00:09:20.070 } 00:09:20.070 } 00:09:20.070 ]' 00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.070 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:20.070 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:20.329 ************************************ 00:09:20.329 END TEST rpc_integrity 00:09:20.329 ************************************ 00:09:20.329 21:22:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:20.329 00:09:20.329 real 0m0.395s 00:09:20.329 user 0m0.226s 00:09:20.329 sys 0m0.047s 00:09:20.329 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.329 21:22:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.329 21:22:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:20.329 21:22:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:20.329 21:22:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.329 21:22:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.329 21:22:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.329 ************************************ 00:09:20.329 START TEST rpc_plugins 00:09:20.329 ************************************ 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:09:20.329 { 00:09:20.329 "name": "Malloc1", 00:09:20.329 "aliases": [ 00:09:20.329 "a09d6d5a-c47e-485a-95c5-b30812478b67" 00:09:20.329 ], 00:09:20.329 "product_name": "Malloc disk", 00:09:20.329 "block_size": 4096, 00:09:20.329 "num_blocks": 256, 00:09:20.329 "uuid": "a09d6d5a-c47e-485a-95c5-b30812478b67", 00:09:20.329 "assigned_rate_limits": { 00:09:20.329 "rw_ios_per_sec": 0, 00:09:20.329 "rw_mbytes_per_sec": 0, 00:09:20.329 "r_mbytes_per_sec": 0, 00:09:20.329 "w_mbytes_per_sec": 0 00:09:20.329 }, 00:09:20.329 "claimed": false, 00:09:20.329 "zoned": false, 00:09:20.329 "supported_io_types": { 00:09:20.329 "read": true, 00:09:20.329 "write": true, 00:09:20.329 "unmap": true, 00:09:20.329 "flush": true, 00:09:20.329 "reset": true, 00:09:20.329 "nvme_admin": false, 00:09:20.329 "nvme_io": false, 00:09:20.329 "nvme_io_md": false, 00:09:20.329 "write_zeroes": true, 00:09:20.329 "zcopy": true, 00:09:20.329 "get_zone_info": false, 00:09:20.329 "zone_management": false, 00:09:20.329 "zone_append": false, 00:09:20.329 "compare": false, 00:09:20.329 "compare_and_write": false, 00:09:20.329 "abort": true, 00:09:20.329 "seek_hole": false, 00:09:20.329 "seek_data": false, 00:09:20.329 "copy": true, 00:09:20.329 "nvme_iov_md": false 00:09:20.329 }, 00:09:20.329 "memory_domains": [ 00:09:20.329 { 00:09:20.329 "dma_device_id": "system", 00:09:20.329 "dma_device_type": 1 00:09:20.329 }, 00:09:20.329 { 00:09:20.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.329 "dma_device_type": 2 00:09:20.329 } 00:09:20.329 ], 00:09:20.329 "driver_specific": {} 00:09:20.329 } 00:09:20.329 ]' 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:20.329 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:20.329 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:20.588 21:22:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:20.588 00:09:20.588 real 0m0.172s 00:09:20.588 user 0m0.118s 00:09:20.588 sys 0m0.011s 00:09:20.588 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.588 21:22:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:20.588 ************************************ 00:09:20.588 END TEST rpc_plugins 00:09:20.588 ************************************ 00:09:20.588 21:22:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:20.588 21:22:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:20.588 21:22:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.588 21:22:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.588 21:22:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.588 ************************************ 00:09:20.588 
START TEST rpc_trace_cmd_test 00:09:20.588 ************************************ 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:20.588 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid111232", 00:09:20.588 "tpoint_group_mask": "0x8", 00:09:20.588 "iscsi_conn": { 00:09:20.588 "mask": "0x2", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "scsi": { 00:09:20.588 "mask": "0x4", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "bdev": { 00:09:20.588 "mask": "0x8", 00:09:20.588 "tpoint_mask": "0xffffffffffffffff" 00:09:20.588 }, 00:09:20.588 "nvmf_rdma": { 00:09:20.588 "mask": "0x10", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "nvmf_tcp": { 00:09:20.588 "mask": "0x20", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "ftl": { 00:09:20.588 "mask": "0x40", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "blobfs": { 00:09:20.588 "mask": "0x80", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "dsa": { 00:09:20.588 "mask": "0x200", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "thread": { 00:09:20.588 "mask": "0x400", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "nvme_pcie": { 00:09:20.588 "mask": "0x800", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "iaa": { 00:09:20.588 "mask": "0x1000", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "nvme_tcp": { 00:09:20.588 "mask": "0x2000", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "bdev_nvme": { 00:09:20.588 "mask": "0x4000", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 }, 00:09:20.588 "sock": { 00:09:20.588 "mask": "0x8000", 00:09:20.588 "tpoint_mask": "0x0" 00:09:20.588 } 00:09:20.588 }' 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:20.588 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:20.847 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:20.847 21:22:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:20.847 21:22:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:20.847 21:22:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:20.847 21:22:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:20.847 00:09:20.847 real 0m0.301s 00:09:20.847 user 0m0.260s 00:09:20.847 sys 0m0.032s 00:09:20.847 21:22:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.847 21:22:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.847 ************************************ 00:09:20.847 END 
TEST rpc_trace_cmd_test 00:09:20.847 ************************************ 00:09:20.847 21:22:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:20.847 21:22:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:20.847 21:22:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:20.847 21:22:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:20.847 21:22:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.847 21:22:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.847 21:22:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.847 ************************************ 00:09:20.847 START TEST rpc_daemon_integrity 00:09:20.847 ************************************ 00:09:20.847 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:20.847 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:20.847 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.847 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:20.847 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.847 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:20.847 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:21.106 { 00:09:21.106 "name": "Malloc2", 00:09:21.106 "aliases": [ 00:09:21.106 "628d389c-7f2f-4f58-a83d-cf4b358837cc" 00:09:21.106 ], 00:09:21.106 "product_name": "Malloc disk", 00:09:21.106 "block_size": 512, 00:09:21.106 "num_blocks": 16384, 00:09:21.106 "uuid": "628d389c-7f2f-4f58-a83d-cf4b358837cc", 00:09:21.106 "assigned_rate_limits": { 00:09:21.106 "rw_ios_per_sec": 0, 00:09:21.106 "rw_mbytes_per_sec": 0, 00:09:21.106 "r_mbytes_per_sec": 0, 00:09:21.106 "w_mbytes_per_sec": 0 00:09:21.106 }, 00:09:21.106 "claimed": false, 00:09:21.106 "zoned": false, 00:09:21.106 "supported_io_types": { 00:09:21.106 "read": true, 00:09:21.106 "write": true, 00:09:21.106 "unmap": true, 00:09:21.106 "flush": true, 00:09:21.106 "reset": true, 00:09:21.106 "nvme_admin": false, 00:09:21.106 "nvme_io": false, 00:09:21.106 "nvme_io_md": false, 00:09:21.106 "write_zeroes": true, 00:09:21.106 "zcopy": true, 00:09:21.106 "get_zone_info": false, 00:09:21.106 "zone_management": false, 00:09:21.106 "zone_append": false, 00:09:21.106 "compare": false, 00:09:21.106 "compare_and_write": false, 00:09:21.106 "abort": true, 00:09:21.106 "seek_hole": false, 
00:09:21.106 "seek_data": false, 00:09:21.106 "copy": true, 00:09:21.106 "nvme_iov_md": false 00:09:21.106 }, 00:09:21.106 "memory_domains": [ 00:09:21.106 { 00:09:21.106 "dma_device_id": "system", 00:09:21.106 "dma_device_type": 1 00:09:21.106 }, 00:09:21.106 { 00:09:21.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.106 "dma_device_type": 2 00:09:21.106 } 00:09:21.106 ], 00:09:21.106 "driver_specific": {} 00:09:21.106 } 00:09:21.106 ]' 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:21.106 [2024-07-15 21:22:54.317532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:21.106 [2024-07-15 21:22:54.317650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.106 [2024-07-15 21:22:54.317728] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:21.106 [2024-07-15 21:22:54.317774] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.106 [2024-07-15 21:22:54.320055] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.106 [2024-07-15 21:22:54.320143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:21.106 Passthru0 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:21.106 { 00:09:21.106 "name": "Malloc2", 00:09:21.106 "aliases": [ 00:09:21.106 "628d389c-7f2f-4f58-a83d-cf4b358837cc" 00:09:21.106 ], 00:09:21.106 "product_name": "Malloc disk", 00:09:21.106 "block_size": 512, 00:09:21.106 "num_blocks": 16384, 00:09:21.106 "uuid": "628d389c-7f2f-4f58-a83d-cf4b358837cc", 00:09:21.106 "assigned_rate_limits": { 00:09:21.106 "rw_ios_per_sec": 0, 00:09:21.106 "rw_mbytes_per_sec": 0, 00:09:21.106 "r_mbytes_per_sec": 0, 00:09:21.106 "w_mbytes_per_sec": 0 00:09:21.106 }, 00:09:21.106 "claimed": true, 00:09:21.106 "claim_type": "exclusive_write", 00:09:21.106 "zoned": false, 00:09:21.106 "supported_io_types": { 00:09:21.106 "read": true, 00:09:21.106 "write": true, 00:09:21.106 "unmap": true, 00:09:21.106 "flush": true, 00:09:21.106 "reset": true, 00:09:21.106 "nvme_admin": false, 00:09:21.106 "nvme_io": false, 00:09:21.106 "nvme_io_md": false, 00:09:21.106 "write_zeroes": true, 00:09:21.106 "zcopy": true, 00:09:21.106 "get_zone_info": false, 00:09:21.106 "zone_management": false, 00:09:21.106 "zone_append": false, 00:09:21.106 "compare": false, 00:09:21.106 "compare_and_write": false, 00:09:21.106 "abort": true, 00:09:21.106 "seek_hole": false, 00:09:21.106 "seek_data": false, 00:09:21.106 "copy": true, 00:09:21.106 "nvme_iov_md": false 00:09:21.106 }, 00:09:21.106 
"memory_domains": [ 00:09:21.106 { 00:09:21.106 "dma_device_id": "system", 00:09:21.106 "dma_device_type": 1 00:09:21.106 }, 00:09:21.106 { 00:09:21.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.106 "dma_device_type": 2 00:09:21.106 } 00:09:21.106 ], 00:09:21.106 "driver_specific": {} 00:09:21.106 }, 00:09:21.106 { 00:09:21.106 "name": "Passthru0", 00:09:21.106 "aliases": [ 00:09:21.106 "b92cdc8d-b657-502c-b176-0ac42baaa03e" 00:09:21.106 ], 00:09:21.106 "product_name": "passthru", 00:09:21.106 "block_size": 512, 00:09:21.106 "num_blocks": 16384, 00:09:21.106 "uuid": "b92cdc8d-b657-502c-b176-0ac42baaa03e", 00:09:21.106 "assigned_rate_limits": { 00:09:21.106 "rw_ios_per_sec": 0, 00:09:21.106 "rw_mbytes_per_sec": 0, 00:09:21.106 "r_mbytes_per_sec": 0, 00:09:21.106 "w_mbytes_per_sec": 0 00:09:21.106 }, 00:09:21.106 "claimed": false, 00:09:21.106 "zoned": false, 00:09:21.106 "supported_io_types": { 00:09:21.106 "read": true, 00:09:21.106 "write": true, 00:09:21.106 "unmap": true, 00:09:21.106 "flush": true, 00:09:21.106 "reset": true, 00:09:21.106 "nvme_admin": false, 00:09:21.106 "nvme_io": false, 00:09:21.106 "nvme_io_md": false, 00:09:21.106 "write_zeroes": true, 00:09:21.106 "zcopy": true, 00:09:21.106 "get_zone_info": false, 00:09:21.106 "zone_management": false, 00:09:21.106 "zone_append": false, 00:09:21.106 "compare": false, 00:09:21.106 "compare_and_write": false, 00:09:21.106 "abort": true, 00:09:21.106 "seek_hole": false, 00:09:21.106 "seek_data": false, 00:09:21.106 "copy": true, 00:09:21.106 "nvme_iov_md": false 00:09:21.106 }, 00:09:21.106 "memory_domains": [ 00:09:21.106 { 00:09:21.106 "dma_device_id": "system", 00:09:21.106 "dma_device_type": 1 00:09:21.106 }, 00:09:21.106 { 00:09:21.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.106 "dma_device_type": 2 00:09:21.106 } 00:09:21.106 ], 00:09:21.106 "driver_specific": { 00:09:21.106 "passthru": { 00:09:21.106 "name": "Passthru0", 00:09:21.106 "base_bdev_name": "Malloc2" 00:09:21.106 } 00:09:21.106 } 00:09:21.106 } 00:09:21.106 ]' 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:21.106 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:21.107 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:21.364 
************************************ 00:09:21.364 END TEST rpc_daemon_integrity 00:09:21.364 ************************************ 00:09:21.364 21:22:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:21.364 00:09:21.364 real 0m0.367s 00:09:21.364 user 0m0.226s 00:09:21.364 sys 0m0.023s 00:09:21.364 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.364 21:22:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:21.364 21:22:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:21.364 21:22:54 rpc -- rpc/rpc.sh@84 -- # killprocess 111232 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@948 -- # '[' -z 111232 ']' 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@952 -- # kill -0 111232 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@953 -- # uname 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111232 00:09:21.364 killing process with pid 111232 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111232' 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@967 -- # kill 111232 00:09:21.364 21:22:54 rpc -- common/autotest_common.sh@972 -- # wait 111232 00:09:24.676 ************************************ 00:09:24.676 END TEST rpc 00:09:24.676 ************************************ 00:09:24.676 00:09:24.676 real 0m6.499s 00:09:24.676 user 0m7.302s 00:09:24.676 sys 0m0.735s 00:09:24.676 21:22:57 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.676 21:22:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 21:22:57 -- common/autotest_common.sh@1142 -- # return 0 00:09:24.676 21:22:57 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:24.676 21:22:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:24.676 21:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.676 21:22:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 ************************************ 00:09:24.676 START TEST skip_rpc 00:09:24.676 ************************************ 00:09:24.676 21:22:57 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:24.676 * Looking for test storage... 
00:09:24.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:24.676 21:22:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:24.676 21:22:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:24.676 21:22:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:24.676 21:22:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:24.676 21:22:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.676 21:22:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 ************************************ 00:09:24.676 START TEST skip_rpc 00:09:24.676 ************************************ 00:09:24.676 21:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:09:24.676 21:22:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=111515 00:09:24.676 21:22:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:24.676 21:22:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:24.676 21:22:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:24.935 [2024-07-15 21:22:58.096202] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:24.935 [2024-07-15 21:22:58.096428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111515 ] 00:09:24.935 [2024-07-15 21:22:58.261514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.193 [2024-07-15 21:22:58.515234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 111515 
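For context, the rpc_integrity and rpc_daemon_integrity cases above both exercise the passthru claim path: a malloc bdev is created, a passthru vbdev claims it ("claimed": true, "claim_type": "exclusive_write" in the dumps), bdev_get_bdevs must report exactly two bdevs, and after deleting both the list must be empty again. A rough manual equivalent, assuming a running target and the in-tree scripts/rpc.py client on the default socket, would be:

  scripts/rpc.py bdev_malloc_create 8 512               # 8 MiB malloc bdev, 512-byte blocks (prints e.g. Malloc0)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length             # expect 2: the malloc bdev plus the passthru claiming it
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length             # expect 0 again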
00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 111515 ']' 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 111515 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111515 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.500 killing process with pid 111515 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111515' 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 111515 00:09:30.500 21:23:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 111515 00:09:33.031 ************************************ 00:09:33.031 END TEST skip_rpc 00:09:33.031 ************************************ 00:09:33.031 00:09:33.031 real 0m8.038s 00:09:33.031 user 0m7.591s 00:09:33.031 sys 0m0.356s 00:09:33.031 21:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.031 21:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.031 21:23:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:33.031 21:23:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:33.031 21:23:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.031 21:23:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.031 21:23:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.031 ************************************ 00:09:33.031 START TEST skip_rpc_with_json 00:09:33.031 ************************************ 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=111655 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 111655 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 111655 ']' 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
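The skip_rpc case that just finished is a negative test: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so even the cheapest client call has to fail, which is what the NOT rpc_cmd spdk_get_version check asserts. Sketched by hand with the binaries used in this run:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  scripts/rpc.py spdk_get_version && echo "FAIL: RPC server unexpectedly reachable"   # must error out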
00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.031 21:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:33.031 [2024-07-15 21:23:06.196518] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:33.031 [2024-07-15 21:23:06.196766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111655 ] 00:09:33.031 [2024-07-15 21:23:06.387574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.290 [2024-07-15 21:23:06.617469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:34.665 [2024-07-15 21:23:07.620747] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:34.665 request: 00:09:34.665 { 00:09:34.665 "trtype": "tcp", 00:09:34.665 "method": "nvmf_get_transports", 00:09:34.665 "req_id": 1 00:09:34.665 } 00:09:34.665 Got JSON-RPC error response 00:09:34.665 response: 00:09:34.665 { 00:09:34.665 "code": -19, 00:09:34.665 "message": "No such device" 00:09:34.665 } 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:34.665 [2024-07-15 21:23:07.632855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.665 21:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:34.665 { 00:09:34.665 "subsystems": [ 00:09:34.665 { 00:09:34.665 "subsystem": "scheduler", 00:09:34.665 "config": [ 00:09:34.665 { 00:09:34.665 "method": "framework_set_scheduler", 00:09:34.665 "params": { 00:09:34.665 "name": "static" 00:09:34.665 } 00:09:34.665 } 00:09:34.665 ] 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "subsystem": "vmd", 00:09:34.665 "config": [] 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "subsystem": "sock", 00:09:34.665 "config": [ 00:09:34.665 { 00:09:34.665 "method": "sock_set_default_impl", 00:09:34.665 "params": { 00:09:34.665 "impl_name": "posix" 00:09:34.665 } 00:09:34.665 
}, 00:09:34.665 { 00:09:34.665 "method": "sock_impl_set_options", 00:09:34.665 "params": { 00:09:34.665 "impl_name": "ssl", 00:09:34.665 "recv_buf_size": 4096, 00:09:34.665 "send_buf_size": 4096, 00:09:34.665 "enable_recv_pipe": true, 00:09:34.665 "enable_quickack": false, 00:09:34.665 "enable_placement_id": 0, 00:09:34.665 "enable_zerocopy_send_server": true, 00:09:34.665 "enable_zerocopy_send_client": false, 00:09:34.665 "zerocopy_threshold": 0, 00:09:34.665 "tls_version": 0, 00:09:34.665 "enable_ktls": false 00:09:34.665 } 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "method": "sock_impl_set_options", 00:09:34.665 "params": { 00:09:34.665 "impl_name": "posix", 00:09:34.665 "recv_buf_size": 2097152, 00:09:34.665 "send_buf_size": 2097152, 00:09:34.665 "enable_recv_pipe": true, 00:09:34.665 "enable_quickack": false, 00:09:34.665 "enable_placement_id": 0, 00:09:34.665 "enable_zerocopy_send_server": true, 00:09:34.665 "enable_zerocopy_send_client": false, 00:09:34.665 "zerocopy_threshold": 0, 00:09:34.665 "tls_version": 0, 00:09:34.665 "enable_ktls": false 00:09:34.665 } 00:09:34.665 } 00:09:34.665 ] 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "subsystem": "iobuf", 00:09:34.665 "config": [ 00:09:34.665 { 00:09:34.665 "method": "iobuf_set_options", 00:09:34.665 "params": { 00:09:34.665 "small_pool_count": 8192, 00:09:34.665 "large_pool_count": 1024, 00:09:34.665 "small_bufsize": 8192, 00:09:34.665 "large_bufsize": 135168 00:09:34.665 } 00:09:34.665 } 00:09:34.665 ] 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "subsystem": "keyring", 00:09:34.665 "config": [] 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "subsystem": "accel", 00:09:34.665 "config": [ 00:09:34.665 { 00:09:34.665 "method": "accel_set_options", 00:09:34.665 "params": { 00:09:34.665 "small_cache_size": 128, 00:09:34.665 "large_cache_size": 16, 00:09:34.665 "task_count": 2048, 00:09:34.665 "sequence_count": 2048, 00:09:34.665 "buf_count": 2048 00:09:34.665 } 00:09:34.665 } 00:09:34.665 ] 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "subsystem": "bdev", 00:09:34.665 "config": [ 00:09:34.665 { 00:09:34.665 "method": "bdev_set_options", 00:09:34.665 "params": { 00:09:34.665 "bdev_io_pool_size": 65535, 00:09:34.665 "bdev_io_cache_size": 256, 00:09:34.665 "bdev_auto_examine": true, 00:09:34.665 "iobuf_small_cache_size": 128, 00:09:34.665 "iobuf_large_cache_size": 16 00:09:34.665 } 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "method": "bdev_raid_set_options", 00:09:34.665 "params": { 00:09:34.665 "process_window_size_kb": 1024 00:09:34.665 } 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "method": "bdev_nvme_set_options", 00:09:34.665 "params": { 00:09:34.665 "action_on_timeout": "none", 00:09:34.665 "timeout_us": 0, 00:09:34.665 "timeout_admin_us": 0, 00:09:34.665 "keep_alive_timeout_ms": 10000, 00:09:34.665 "arbitration_burst": 0, 00:09:34.665 "low_priority_weight": 0, 00:09:34.665 "medium_priority_weight": 0, 00:09:34.665 "high_priority_weight": 0, 00:09:34.665 "nvme_adminq_poll_period_us": 10000, 00:09:34.665 "nvme_ioq_poll_period_us": 0, 00:09:34.665 "io_queue_requests": 0, 00:09:34.665 "delay_cmd_submit": true, 00:09:34.665 "transport_retry_count": 4, 00:09:34.665 "bdev_retry_count": 3, 00:09:34.665 "transport_ack_timeout": 0, 00:09:34.665 "ctrlr_loss_timeout_sec": 0, 00:09:34.665 "reconnect_delay_sec": 0, 00:09:34.665 "fast_io_fail_timeout_sec": 0, 00:09:34.665 "disable_auto_failback": false, 00:09:34.665 "generate_uuids": false, 00:09:34.665 "transport_tos": 0, 00:09:34.665 "nvme_error_stat": false, 00:09:34.665 "rdma_srq_size": 0, 
00:09:34.665 "io_path_stat": false, 00:09:34.665 "allow_accel_sequence": false, 00:09:34.665 "rdma_max_cq_size": 0, 00:09:34.665 "rdma_cm_event_timeout_ms": 0, 00:09:34.665 "dhchap_digests": [ 00:09:34.665 "sha256", 00:09:34.665 "sha384", 00:09:34.665 "sha512" 00:09:34.665 ], 00:09:34.665 "dhchap_dhgroups": [ 00:09:34.665 "null", 00:09:34.665 "ffdhe2048", 00:09:34.665 "ffdhe3072", 00:09:34.665 "ffdhe4096", 00:09:34.665 "ffdhe6144", 00:09:34.665 "ffdhe8192" 00:09:34.665 ] 00:09:34.665 } 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "method": "bdev_nvme_set_hotplug", 00:09:34.665 "params": { 00:09:34.665 "period_us": 100000, 00:09:34.665 "enable": false 00:09:34.665 } 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "method": "bdev_iscsi_set_options", 00:09:34.665 "params": { 00:09:34.665 "timeout_sec": 30 00:09:34.665 } 00:09:34.665 }, 00:09:34.665 { 00:09:34.665 "method": "bdev_wait_for_examine" 00:09:34.666 } 00:09:34.666 ] 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "subsystem": "nvmf", 00:09:34.666 "config": [ 00:09:34.666 { 00:09:34.666 "method": "nvmf_set_config", 00:09:34.666 "params": { 00:09:34.666 "discovery_filter": "match_any", 00:09:34.666 "admin_cmd_passthru": { 00:09:34.666 "identify_ctrlr": false 00:09:34.666 } 00:09:34.666 } 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "method": "nvmf_set_max_subsystems", 00:09:34.666 "params": { 00:09:34.666 "max_subsystems": 1024 00:09:34.666 } 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "method": "nvmf_set_crdt", 00:09:34.666 "params": { 00:09:34.666 "crdt1": 0, 00:09:34.666 "crdt2": 0, 00:09:34.666 "crdt3": 0 00:09:34.666 } 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "method": "nvmf_create_transport", 00:09:34.666 "params": { 00:09:34.666 "trtype": "TCP", 00:09:34.666 "max_queue_depth": 128, 00:09:34.666 "max_io_qpairs_per_ctrlr": 127, 00:09:34.666 "in_capsule_data_size": 4096, 00:09:34.666 "max_io_size": 131072, 00:09:34.666 "io_unit_size": 131072, 00:09:34.666 "max_aq_depth": 128, 00:09:34.666 "num_shared_buffers": 511, 00:09:34.666 "buf_cache_size": 4294967295, 00:09:34.666 "dif_insert_or_strip": false, 00:09:34.666 "zcopy": false, 00:09:34.666 "c2h_success": true, 00:09:34.666 "sock_priority": 0, 00:09:34.666 "abort_timeout_sec": 1, 00:09:34.666 "ack_timeout": 0, 00:09:34.666 "data_wr_pool_size": 0 00:09:34.666 } 00:09:34.666 } 00:09:34.666 ] 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "subsystem": "nbd", 00:09:34.666 "config": [] 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "subsystem": "vhost_blk", 00:09:34.666 "config": [] 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "subsystem": "scsi", 00:09:34.666 "config": null 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "subsystem": "iscsi", 00:09:34.666 "config": [ 00:09:34.666 { 00:09:34.666 "method": "iscsi_set_options", 00:09:34.666 "params": { 00:09:34.666 "node_base": "iqn.2016-06.io.spdk", 00:09:34.666 "max_sessions": 128, 00:09:34.666 "max_connections_per_session": 2, 00:09:34.666 "max_queue_depth": 64, 00:09:34.666 "default_time2wait": 2, 00:09:34.666 "default_time2retain": 20, 00:09:34.666 "first_burst_length": 8192, 00:09:34.666 "immediate_data": true, 00:09:34.666 "allow_duplicated_isid": false, 00:09:34.666 "error_recovery_level": 0, 00:09:34.666 "nop_timeout": 60, 00:09:34.666 "nop_in_interval": 30, 00:09:34.666 "disable_chap": false, 00:09:34.666 "require_chap": false, 00:09:34.666 "mutual_chap": false, 00:09:34.666 "chap_group": 0, 00:09:34.666 "max_large_datain_per_connection": 64, 00:09:34.666 "max_r2t_per_connection": 4, 00:09:34.666 "pdu_pool_size": 36864, 00:09:34.666 
"immediate_data_pool_size": 16384, 00:09:34.666 "data_out_pool_size": 2048 00:09:34.666 } 00:09:34.666 } 00:09:34.666 ] 00:09:34.666 }, 00:09:34.666 { 00:09:34.666 "subsystem": "vhost_scsi", 00:09:34.666 "config": [] 00:09:34.666 } 00:09:34.666 ] 00:09:34.666 } 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 111655 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 111655 ']' 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 111655 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111655 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111655' 00:09:34.666 killing process with pid 111655 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 111655 00:09:34.666 21:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 111655 00:09:37.941 21:23:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=111719 00:09:37.941 21:23:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:37.941 21:23:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:43.205 21:23:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 111719 00:09:43.205 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 111719 ']' 00:09:43.205 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 111719 00:09:43.205 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:43.205 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.205 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111719 00:09:43.205 killing process with pid 111719 00:09:43.205 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:43.205 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:43.206 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111719' 00:09:43.206 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 111719 00:09:43.206 21:23:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 111719 00:09:45.109 21:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:45.109 21:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:45.109 ************************************ 00:09:45.109 
00:09:45.109 real 0m12.327s 00:09:45.109 user 0m11.869s 00:09:45.109 sys 0m0.748s 00:09:45.109 21:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.109 21:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:45.109 END TEST skip_rpc_with_json 00:09:45.109 ************************************ 00:09:45.368 21:23:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:45.368 21:23:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:45.368 21:23:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:45.368 21:23:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.368 21:23:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.368 ************************************ 00:09:45.368 START TEST skip_rpc_with_delay 00:09:45.368 ************************************ 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:45.368 [2024-07-15 21:23:18.584335] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
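The skip_rpc_with_json run that finished above (real 0m12.327s) is the round trip behind the large save_config dump: the live configuration (scheduler, sock, iobuf, bdev, nvmf, iscsi and the other subsystems listed) is written to config.json, the target is restarted from that file, and the new log is grepped for the TCP transport banner to prove the saved nvmf_create_transport call was replayed. Condensed, with the same paths as in this run:

  scripts/rpc.py nvmf_create_transport -t tcp                     # the one non-default piece of config
  scripts/rpc.py save_config > test/rpc/config.json
  # stop the first target, then replay the saved state from the JSON file
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &> test/rpc/log.txt &
  grep -q 'TCP Transport Init' test/rpc/log.txt                   # only present if the config was applied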
00:09:45.368 [2024-07-15 21:23:18.584533] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:45.368 00:09:45.368 real 0m0.127s 00:09:45.368 user 0m0.075s 00:09:45.368 sys 0m0.049s 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.368 21:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:45.368 ************************************ 00:09:45.368 END TEST skip_rpc_with_delay 00:09:45.368 ************************************ 00:09:45.368 21:23:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:45.368 21:23:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:45.368 21:23:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:45.368 21:23:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:45.368 21:23:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:45.368 21:23:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.368 21:23:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.368 ************************************ 00:09:45.368 START TEST exit_on_failed_rpc_init 00:09:45.368 ************************************ 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=111891 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 111891 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 111891 ']' 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.368 21:23:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:45.626 [2024-07-15 21:23:18.784600] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
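skip_rpc_with_delay, which ends just above, covers an argument-validation corner: --wait-for-rpc holds back subsystem initialization until an explicit RPC arrives, so combining it with --no-rpc-server can never work and the target refuses to start (the app.c error in the log). The normal use of the flag looks roughly like this; framework_start_init is the standard SPDK call that releases initialization and does not appear in this particular log:

  build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  # early-configuration RPCs go here, e.g. sock_impl_set_options as seen in the save_config dump above
  scripts/rpc.py framework_start_init       # let the target finish bringing up its subsystems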
00:09:45.626 [2024-07-15 21:23:18.784890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111891 ] 00:09:45.626 [2024-07-15 21:23:18.949406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.885 [2024-07-15 21:23:19.171160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:46.819 21:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:46.819 [2024-07-15 21:23:20.182553] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:46.819 [2024-07-15 21:23:20.182826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111921 ] 00:09:47.077 [2024-07-15 21:23:20.347237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.335 [2024-07-15 21:23:20.563982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.335 [2024-07-15 21:23:20.564166] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:47.335 [2024-07-15 21:23:20.564233] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:47.335 [2024-07-15 21:23:20.564271] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 111891 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 111891 ']' 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 111891 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111891 00:09:47.902 killing process with pid 111891 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111891' 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 111891 00:09:47.902 21:23:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 111891 00:09:50.431 ************************************ 00:09:50.431 END TEST exit_on_failed_rpc_init 00:09:50.431 ************************************ 00:09:50.431 00:09:50.431 real 0m5.086s 00:09:50.431 user 0m5.622s 00:09:50.431 sys 0m0.605s 00:09:50.431 21:23:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.431 21:23:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:50.687 21:23:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:50.688 21:23:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:50.688 00:09:50.688 real 0m25.953s 00:09:50.688 user 0m25.346s 00:09:50.688 sys 0m1.963s 00:09:50.688 ************************************ 00:09:50.688 END TEST skip_rpc 00:09:50.688 ************************************ 00:09:50.688 21:23:23 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.688 21:23:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.688 21:23:23 -- common/autotest_common.sh@1142 -- # return 0 00:09:50.688 21:23:23 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:50.688 21:23:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
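exit_on_failed_rpc_init, above, deliberately starts a second spdk_tgt against the same default RPC socket; the rpc.c errors ("/var/tmp/spdk.sock in use", "Unable to start RPC service") make the second instance exit non-zero, which is exactly the behaviour being verified. Running two targets side by side for real means giving each its own socket; -r and -s below are the standard SPDK options for a non-default RPC socket and are not exercised by this log:

  build/bin/spdk_tgt -m 0x1 &                                  # listens on the default /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &           # second instance gets its own socket
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version       # address the second instance explicitly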
00:09:50.688 21:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.688 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:09:50.688 ************************************ 00:09:50.688 START TEST rpc_client 00:09:50.688 ************************************ 00:09:50.688 21:23:23 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:50.688 * Looking for test storage... 00:09:50.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:50.688 21:23:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:50.945 OK 00:09:50.945 21:23:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:50.945 ************************************ 00:09:50.945 END TEST rpc_client 00:09:50.945 ************************************ 00:09:50.945 00:09:50.945 real 0m0.222s 00:09:50.945 user 0m0.115s 00:09:50.945 sys 0m0.107s 00:09:50.945 21:23:24 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.945 21:23:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:50.945 21:23:24 -- common/autotest_common.sh@1142 -- # return 0 00:09:50.945 21:23:24 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:50.945 21:23:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:50.945 21:23:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.945 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:09:50.945 ************************************ 00:09:50.945 START TEST json_config 00:09:50.945 ************************************ 00:09:50.945 21:23:24 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:50.945 21:23:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b386e86a-9773-4e24-8aaf-84838a2cc75a 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b386e86a-9773-4e24-8aaf-84838a2cc75a 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.945 21:23:24 
json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.945 21:23:24 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.945 21:23:24 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.945 21:23:24 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.945 21:23:24 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:50.945 21:23:24 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:50.945 21:23:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:50.945 21:23:24 json_config -- paths/export.sh@5 -- # export PATH 00:09:50.945 21:23:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@47 -- # : 0 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.945 21:23:24 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 
00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@31 -- # app_pid=([target]="" [initiator]="") 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@32 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@33 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@34 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:51.203 INFO: JSON configuration test init 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:51.203 21:23:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:51.203 21:23:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:51.203 21:23:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:51.203 21:23:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:51.203 21:23:24 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:51.203 21:23:24 json_config -- json_config/common.sh@9 -- # local app=target 00:09:51.204 21:23:24 json_config -- json_config/common.sh@10 -- # shift 00:09:51.204 21:23:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:51.204 21:23:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:51.204 21:23:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:51.204 21:23:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:51.204 21:23:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:51.204 21:23:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=112091 00:09:51.204 21:23:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:51.204 Waiting for target to run... 
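json_config/common.sh drives the 'target' app through the associative arrays declared above (app_params[target]='-m 0x1 -s 1024', app_socket[target]='/var/tmp/spdk_tgt.sock'), so the launch traced next amounts to the sketch below; the rpc_get_methods polling loop is only an illustration of what waitforlisten does, not its real implementation:

    declare -A app_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    app_pid[target]=$!
    # poll the RPC socket until the target answers (stand-in for waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done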
00:09:51.204 21:23:24 json_config -- json_config/common.sh@25 -- # waitforlisten 112091 /var/tmp/spdk_tgt.sock 00:09:51.204 21:23:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:51.204 21:23:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 112091 ']' 00:09:51.204 21:23:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:51.204 21:23:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.204 21:23:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:51.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:51.204 21:23:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.204 21:23:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:51.204 [2024-07-15 21:23:24.411509] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:51.204 [2024-07-15 21:23:24.411807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112091 ] 00:09:51.814 [2024-07-15 21:23:24.982164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.073 [2024-07-15 21:23:25.182257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.073 00:09:52.073 21:23:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.073 21:23:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:09:52.073 21:23:25 json_config -- json_config/common.sh@26 -- # echo '' 00:09:52.073 21:23:25 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:09:52.073 21:23:25 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:52.073 21:23:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:52.073 21:23:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:52.073 21:23:25 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:52.073 21:23:25 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:52.073 21:23:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:52.073 21:23:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:52.073 21:23:25 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:52.073 21:23:25 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:52.073 21:23:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:53.007 21:23:26 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:53.007 21:23:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:53.007 21:23:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.007 21:23:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.007 21:23:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:53.007 21:23:26 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=("bdev_register" "bdev_unregister") 00:09:53.007 21:23:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:53.007 21:23:26 json_config -- json_config/json_config.sh@48 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:09:53.007 21:23:26 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:53.266 21:23:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:53.266 21:23:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.266 21:23:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@55 -- # return 0 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:53.266 21:23:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.266 21:23:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:53.266 21:23:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:53.266 21:23:26 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:53.524 21:23:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:53.524 21:23:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:53.524 21:23:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:53.525 21:23:26 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:53.525 21:23:26 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:53.525 21:23:26 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:53.525 21:23:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create 
Nvme0n1 2 00:09:53.783 Nvme0n1p0 Nvme0n1p1 00:09:53.783 21:23:27 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:53.783 21:23:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:54.041 [2024-07-15 21:23:27.202488] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:54.041 [2024-07-15 21:23:27.202654] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:54.041 00:09:54.041 21:23:27 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:54.041 21:23:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:54.041 Malloc3 00:09:54.299 21:23:27 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:54.299 21:23:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:54.299 [2024-07-15 21:23:27.600167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:54.299 [2024-07-15 21:23:27.600306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.299 [2024-07-15 21:23:27.600355] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:54.299 [2024-07-15 21:23:27.600388] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.299 [2024-07-15 21:23:27.602327] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.299 [2024-07-15 21:23:27.602428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:54.299 PTBdevFromMalloc3 00:09:54.299 21:23:27 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:54.299 21:23:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:54.557 Null0 00:09:54.557 21:23:27 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:54.558 21:23:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:54.816 Malloc0 00:09:54.816 21:23:28 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:54.816 21:23:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:55.075 Malloc1 00:09:55.075 21:23:28 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:55.075 21:23:28 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:55.075 102400+0 records in 00:09:55.075 102400+0 records out 00:09:55.075 104857600 bytes (105 MB, 100 MiB) 
copied, 0.178719 s, 587 MB/s 00:09:55.075 21:23:28 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:55.075 21:23:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:55.333 aio_disk 00:09:55.333 21:23:28 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:55.333 21:23:28 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:55.333 21:23:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:55.591 96ce65c4-e317-4c65-96c5-2d406c6ba216 00:09:55.591 21:23:28 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:55.591 21:23:28 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:55.591 21:23:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:55.849 21:23:29 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:55.849 21:23:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:56.140 21:23:29 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:56.140 21:23:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:56.140 21:23:29 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:56.140 21:23:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d3d53430-8e38-4d90-ab38-47ef51a39526 bdev_register:6b6e2b88-5d79-4238-af45-fb08d4f42ea1 bdev_register:487d04ff-ae35-4694-a18f-1e306bfb6224 bdev_register:a34b82d5-495c-4d10-9052-76c38f64f1f7 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:56.397 21:23:29 json_config -- 
json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d3d53430-8e38-4d90-ab38-47ef51a39526 bdev_register:6b6e2b88-5d79-4238-af45-fb08d4f42ea1 bdev_register:487d04ff-ae35-4694-a18f-1e306bfb6224 bdev_register:a34b82d5-495c-4d10-9052-76c38f64f1f7 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@71 -- # sort 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@72 -- # sort 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:56.397 21:23:29 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:56.397 21:23:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 
00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:d3d53430-8e38-4d90-ab38-47ef51a39526 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:6b6e2b88-5d79-4238-af45-fb08d4f42ea1 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:487d04ff-ae35-4694-a18f-1e306bfb6224 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:a34b82d5-495c-4d10-9052-76c38f64f1f7 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:487d04ff-ae35-4694-a18f-1e306bfb6224 bdev_register:6b6e2b88-5d79-4238-af45-fb08d4f42ea1 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a34b82d5-495c-4d10-9052-76c38f64f1f7 bdev_register:aio_disk bdev_register:d3d53430-8e38-4d90-ab38-47ef51a39526 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\8\7\d\0\4\f\f\-\a\e\3\5\-\4\6\9\4\-\a\1\8\f\-\1\e\3\0\6\b\f\b\6\2\2\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\b\6\e\2\b\8\8\-\5\d\7\9\-\4\2\3\8\-\a\f\4\5\-\f\b\0\8\d\4\f\4\2\e\a\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\3\4\b\8\2\d\5\-\4\9\5\c\-\4\d\1\0\-\9\0\5\2\-\7\6\c\3\8\f\6\4\f\1\f\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\3\d\5\3\4\3\0\-\8\e\3\8\-\4\d\9\0\-\a\b\3\8\-\4\7\e\f\5\1\a\3\9\5\2\6 ]] 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@86 -- # cat 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:487d04ff-ae35-4694-a18f-1e306bfb6224 bdev_register:6b6e2b88-5d79-4238-af45-fb08d4f42ea1 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a34b82d5-495c-4d10-9052-76c38f64f1f7 bdev_register:aio_disk bdev_register:d3d53430-8e38-4d90-ab38-47ef51a39526 00:09:56.675 Expected events matched: 00:09:56.675 bdev_register:487d04ff-ae35-4694-a18f-1e306bfb6224 00:09:56.675 bdev_register:6b6e2b88-5d79-4238-af45-fb08d4f42ea1 00:09:56.675 bdev_register:Malloc0 00:09:56.675 bdev_register:Malloc0p0 00:09:56.675 bdev_register:Malloc0p1 00:09:56.675 bdev_register:Malloc0p2 00:09:56.675 bdev_register:Malloc1 00:09:56.675 bdev_register:Malloc3 00:09:56.675 bdev_register:Null0 00:09:56.675 bdev_register:Nvme0n1 00:09:56.675 bdev_register:Nvme0n1p0 00:09:56.675 bdev_register:Nvme0n1p1 00:09:56.675 bdev_register:PTBdevFromMalloc3 00:09:56.675 bdev_register:a34b82d5-495c-4d10-9052-76c38f64f1f7 00:09:56.675 bdev_register:aio_disk 00:09:56.675 bdev_register:d3d53430-8e38-4d90-ab38-47ef51a39526 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:09:56.675 21:23:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.675 21:23:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:56.675 21:23:29 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:09:56.675 21:23:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.675 21:23:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:56.675 21:23:30 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:09:56.675 21:23:30 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:56.675 21:23:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:56.934 MallocBdevForConfigChangeCheck 00:09:56.934 21:23:30 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:09:56.934 21:23:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 
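Stripped of the xtrace noise, the bdev subsystem configuration the test has just assembled over /var/tmp/spdk_tgt.sock is the following sequence of rpc.py calls (condensed sketch; commands and arguments copied from the trace above):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_split_create Nvme0n1 2          # registers Nvme0n1p0 and Nvme0n1p1
    $RPC bdev_split_create Malloc0 3          # Malloc0 is created below, hence the 'unable to find bdev' notices
    $RPC bdev_malloc_create 8 4096 --name Malloc3
    $RPC bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
    $RPC bdev_null_create Null0 32 512
    $RPC bdev_malloc_create 32 512 --name Malloc0
    $RPC bdev_malloc_create 16 4096 --name Malloc1
    dd if=/dev/zero of=/sample_aio bs=1024 count=102400
    $RPC bdev_aio_create /sample_aio aio_disk 1024
    $RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
    $RPC bdev_lvol_create -l lvs_test lvol0 32
    $RPC bdev_lvol_create -l lvs_test -t lvol1 32
    $RPC bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    $RPC bdev_lvol_clone lvs_test/snapshot0 clone0
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

The derived bdevs (Nvme0n1p0/p1, Malloc0p0-p2, the lvol UUIDs) are what the 'Expected events matched' list above verifies; MallocBdevForConfigChangeCheck exists only so a later delete can prove configuration changes are detected.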
00:09:56.934 21:23:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:56.934 21:23:30 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:09:56.934 21:23:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:57.500 21:23:30 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:09:57.500 INFO: shutting down applications... 00:09:57.500 21:23:30 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:09:57.500 21:23:30 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:09:57.500 21:23:30 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:09:57.500 21:23:30 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:57.500 [2024-07-15 21:23:30.711174] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:57.758 Calling clear_vhost_scsi_subsystem 00:09:57.758 Calling clear_iscsi_subsystem 00:09:57.758 Calling clear_vhost_blk_subsystem 00:09:57.758 Calling clear_nbd_subsystem 00:09:57.758 Calling clear_nvmf_subsystem 00:09:57.758 Calling clear_bdev_subsystem 00:09:57.758 21:23:30 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:57.758 21:23:30 json_config -- json_config/json_config.sh@343 -- # count=100 00:09:57.758 21:23:30 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:09:57.758 21:23:30 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:57.758 21:23:30 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:57.758 21:23:30 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:58.016 21:23:31 json_config -- json_config/json_config.sh@345 -- # break 00:09:58.016 21:23:31 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:09:58.016 21:23:31 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:09:58.016 21:23:31 json_config -- json_config/common.sh@31 -- # local app=target 00:09:58.016 21:23:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:58.016 21:23:31 json_config -- json_config/common.sh@35 -- # [[ -n 112091 ]] 00:09:58.016 21:23:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 112091 00:09:58.016 21:23:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:58.016 21:23:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:58.016 21:23:31 json_config -- json_config/common.sh@41 -- # kill -0 112091 00:09:58.016 21:23:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:58.582 21:23:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:58.582 21:23:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:58.582 21:23:31 json_config -- json_config/common.sh@41 -- # kill -0 112091 00:09:58.582 21:23:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:59.148 21:23:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:59.148 21:23:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 
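The clear-and-shutdown sequence running here is the same pattern every app in these tests uses: wipe the configuration through the JSON-RPC socket, send SIGINT, then poll the pid for at most 30 x 0.5 s. A sketch with the names and values taken from this run:

    pid=${app_pid[target]}                   # 112091 in this run
    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py \
        -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break  # process gone, shutdown complete
        sleep 0.5
    done
    echo 'SPDK target shutdown done'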
00:09:59.148 21:23:32 json_config -- json_config/common.sh@41 -- # kill -0 112091 00:09:59.148 21:23:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:59.714 SPDK target shutdown done 00:09:59.714 INFO: relaunching applications... 00:09:59.714 21:23:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:59.714 21:23:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:59.714 21:23:32 json_config -- json_config/common.sh@41 -- # kill -0 112091 00:09:59.714 21:23:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:59.714 21:23:32 json_config -- json_config/common.sh@43 -- # break 00:09:59.714 21:23:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:59.714 21:23:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:59.714 21:23:32 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:09:59.714 21:23:32 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.714 21:23:32 json_config -- json_config/common.sh@9 -- # local app=target 00:09:59.714 Waiting for target to run... 00:09:59.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:59.714 21:23:32 json_config -- json_config/common.sh@10 -- # shift 00:09:59.714 21:23:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:59.714 21:23:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:59.714 21:23:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:59.714 21:23:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:59.714 21:23:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:59.714 21:23:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=112362 00:09:59.714 21:23:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.714 21:23:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:59.714 21:23:32 json_config -- json_config/common.sh@25 -- # waitforlisten 112362 /var/tmp/spdk_tgt.sock 00:09:59.714 21:23:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 112362 ']' 00:09:59.714 21:23:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:59.714 21:23:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.714 21:23:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:59.714 21:23:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.714 21:23:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.714 [2024-07-15 21:23:32.865815] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:09:59.714 [2024-07-15 21:23:32.866110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112362 ] 00:10:00.284 [2024-07-15 21:23:33.453669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.542 [2024-07-15 21:23:33.661732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.477 [2024-07-15 21:23:34.535047] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:01.477 [2024-07-15 21:23:34.535181] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:01.477 [2024-07-15 21:23:34.543000] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:01.477 [2024-07-15 21:23:34.543099] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:01.477 [2024-07-15 21:23:34.551015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:01.477 [2024-07-15 21:23:34.551086] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:01.477 [2024-07-15 21:23:34.551120] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:01.477 [2024-07-15 21:23:34.651479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:01.477 [2024-07-15 21:23:34.651698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.477 [2024-07-15 21:23:34.651761] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:01.477 [2024-07-15 21:23:34.651805] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.477 [2024-07-15 21:23:34.652294] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.477 [2024-07-15 21:23:34.652360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:01.477 00:10:01.477 INFO: Checking if target configuration is the same... 00:10:01.477 21:23:34 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.477 21:23:34 json_config -- common/autotest_common.sh@862 -- # return 0 00:10:01.477 21:23:34 json_config -- json_config/common.sh@26 -- # echo '' 00:10:01.477 21:23:34 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:01.477 21:23:34 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:01.477 21:23:34 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:01.477 21:23:34 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:01.477 21:23:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:01.477 + '[' 2 -ne 2 ']' 00:10:01.477 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:01.477 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:01.477 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:01.477 +++ basename /dev/fd/62 00:10:01.477 ++ mktemp /tmp/62.XXX 00:10:01.477 + tmp_file_1=/tmp/62.cat 00:10:01.477 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:01.477 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:01.734 + tmp_file_2=/tmp/spdk_tgt_config.json.DIx 00:10:01.734 + ret=0 00:10:01.734 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:01.991 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:01.991 + diff -u /tmp/62.cat /tmp/spdk_tgt_config.json.DIx 00:10:01.991 INFO: JSON config files are the same 00:10:01.991 + echo 'INFO: JSON config files are the same' 00:10:01.991 + rm /tmp/62.cat /tmp/spdk_tgt_config.json.DIx 00:10:01.991 + exit 0 00:10:01.991 INFO: changing configuration and checking if this can be detected... 00:10:01.991 21:23:35 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:01.991 21:23:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:01.991 21:23:35 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:01.991 21:23:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:02.248 21:23:35 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.248 21:23:35 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:02.248 21:23:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:02.248 + '[' 2 -ne 2 ']' 00:10:02.248 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:02.248 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:02.248 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:02.248 +++ basename /dev/fd/62 00:10:02.248 ++ mktemp /tmp/62.XXX 00:10:02.248 + tmp_file_1=/tmp/62.pOX 00:10:02.248 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.248 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:02.248 + tmp_file_2=/tmp/spdk_tgt_config.json.iF1 00:10:02.248 + ret=0 00:10:02.248 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:02.506 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:02.506 + diff -u /tmp/62.pOX /tmp/spdk_tgt_config.json.iF1 00:10:02.506 + ret=1 00:10:02.506 + echo '=== Start of file: /tmp/62.pOX ===' 00:10:02.506 + cat /tmp/62.pOX 00:10:02.506 + echo '=== End of file: /tmp/62.pOX ===' 00:10:02.506 + echo '' 00:10:02.506 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iF1 ===' 00:10:02.506 + cat /tmp/spdk_tgt_config.json.iF1 00:10:02.506 + echo '=== End of file: /tmp/spdk_tgt_config.json.iF1 ===' 00:10:02.506 + echo '' 00:10:02.506 + rm /tmp/62.pOX /tmp/spdk_tgt_config.json.iF1 00:10:02.506 + exit 1 00:10:02.506 INFO: configuration change detected. 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
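Both comparisons above follow one recipe: dump the live configuration with save_config, normalize it and the saved spdk_tgt_config.json with config_filter.py -method sort, then diff. An identical pair proves the target relaunched from --json reproduced the original state; deleting MallocBdevForConfigChangeCheck is enough to make the second diff return 1. Condensed sketch (the real json_diff.sh works through /dev/fd and mktemp, and treating config_filter.py as a stdin/stdout filter is inferred from the trace):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $RPC save_config | $FILTER -method sort > /tmp/live.sorted
    $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.sorted
    if diff -u /tmp/saved.sorted /tmp/live.sorted; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi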
00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:02.506 21:23:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.506 21:23:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@317 -- # [[ -n 112362 ]] 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:02.506 21:23:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.506 21:23:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:02.506 21:23:35 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:02.506 21:23:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:02.765 21:23:36 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:02.765 21:23:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:03.021 21:23:36 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:03.021 21:23:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:03.277 21:23:36 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:03.277 21:23:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:03.278 21:23:36 json_config -- json_config/json_config.sh@193 -- # uname -s 00:10:03.535 21:23:36 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:03.535 21:23:36 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:03.535 21:23:36 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:03.535 21:23:36 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.535 21:23:36 json_config -- json_config/json_config.sh@323 -- # killprocess 112362 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@948 -- # '[' -z 112362 ']' 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@952 -- # kill -0 112362 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@953 -- # uname 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112362 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.535 21:23:36 json_config 
-- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112362' 00:10:03.535 killing process with pid 112362 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@967 -- # kill 112362 00:10:03.535 21:23:36 json_config -- common/autotest_common.sh@972 -- # wait 112362 00:10:04.947 21:23:38 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:04.947 21:23:38 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:04.947 21:23:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:04.947 21:23:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.947 21:23:38 json_config -- json_config/json_config.sh@328 -- # return 0 00:10:04.947 21:23:38 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:04.947 INFO: Success 00:10:04.947 ************************************ 00:10:04.947 END TEST json_config 00:10:04.947 ************************************ 00:10:04.947 00:10:04.947 real 0m13.893s 00:10:04.947 user 0m18.206s 00:10:04.947 sys 0m2.635s 00:10:04.947 21:23:38 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.947 21:23:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.947 21:23:38 -- common/autotest_common.sh@1142 -- # return 0 00:10:04.947 21:23:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:04.947 21:23:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:04.947 21:23:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.947 21:23:38 -- common/autotest_common.sh@10 -- # set +x 00:10:04.947 ************************************ 00:10:04.947 START TEST json_config_extra_key 00:10:04.947 ************************************ 00:10:04.947 21:23:38 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96b65eab-3c9d-42dd-8d52-06f59de63b56 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@18 
-- # NVME_HOSTID=96b65eab-3c9d-42dd-8d52-06f59de63b56 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.947 21:23:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.947 21:23:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.947 21:23:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.947 21:23:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:04.947 21:23:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:04.947 21:23:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:04.947 21:23:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:04.947 21:23:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:10:04.947 21:23:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=([target]="") 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=([target]='-m 0x1 -s 1024') 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:04.947 INFO: launching applications... 00:10:04.947 21:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:04.947 21:23:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:04.947 21:23:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:04.947 21:23:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:04.947 21:23:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:04.947 21:23:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:04.947 21:23:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:04.948 21:23:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:04.948 21:23:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=112563 00:10:04.948 21:23:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:04.948 Waiting for target to run... 00:10:04.948 21:23:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 112563 /var/tmp/spdk_tgt.sock 00:10:04.948 21:23:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:04.948 21:23:38 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 112563 ']' 00:10:04.948 21:23:38 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:04.948 21:23:38 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.948 21:23:38 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:10:04.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:04.948 21:23:38 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.948 21:23:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:05.206 [2024-07-15 21:23:38.359399] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:10:05.206 [2024-07-15 21:23:38.359687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112563 ] 00:10:05.774 [2024-07-15 21:23:38.930177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.031 [2024-07-15 21:23:39.169111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.967 00:10:06.967 INFO: shutting down applications... 00:10:06.967 21:23:40 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.968 21:23:40 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:06.968 21:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:06.968 21:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 112563 ]] 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 112563 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:06.968 21:23:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:07.226 21:23:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:07.226 21:23:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.226 21:23:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:07.226 21:23:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:07.794 21:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:07.794 21:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.794 21:23:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:07.794 21:23:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:08.367 21:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:08.367 21:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:08.367 21:23:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:08.367 21:23:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:08.937 21:23:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:08.937 21:23:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:08.937 21:23:42 
json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:08.937 21:23:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:09.503 21:23:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:09.504 21:23:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:09.504 21:23:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:09.504 21:23:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:09.762 21:23:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:09.762 21:23:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:09.762 21:23:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:09.762 21:23:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:10.331 21:23:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:10.331 21:23:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:10.331 21:23:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:10.331 21:23:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:10.898 SPDK target shutdown done 00:10:10.898 Success 00:10:10.898 21:23:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:10.898 21:23:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:10.898 21:23:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112563 00:10:10.898 21:23:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:10.898 21:23:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:10.898 21:23:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:10.898 21:23:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:10.899 21:23:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:10.899 00:10:10.899 real 0m5.933s 00:10:10.899 user 0m5.343s 00:10:10.899 sys 0m0.718s 00:10:10.899 ************************************ 00:10:10.899 END TEST json_config_extra_key 00:10:10.899 ************************************ 00:10:10.899 21:23:44 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.899 21:23:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:10.899 21:23:44 -- common/autotest_common.sh@1142 -- # return 0 00:10:10.899 21:23:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:10.899 21:23:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:10.899 21:23:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.899 21:23:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.899 ************************************ 00:10:10.899 START TEST alias_rpc 00:10:10.899 ************************************ 00:10:10.899 21:23:44 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:10.899 * Looking for test storage... 
00:10:11.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:11.157 21:23:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:11.157 21:23:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:11.157 21:23:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=112699 00:10:11.157 21:23:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 112699 00:10:11.157 21:23:44 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 112699 ']' 00:10:11.157 21:23:44 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.157 21:23:44 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.157 21:23:44 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.157 21:23:44 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.157 21:23:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.157 [2024-07-15 21:23:44.357826] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:10:11.157 [2024-07-15 21:23:44.358207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112699 ] 00:10:11.414 [2024-07-15 21:23:44.547563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.672 [2024-07-15 21:23:44.830360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.657 21:23:46 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.657 21:23:46 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:12.657 21:23:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:12.917 21:23:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 112699 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 112699 ']' 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 112699 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112699 00:10:12.917 killing process with pid 112699 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112699' 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@967 -- # kill 112699 00:10:12.917 21:23:46 alias_rpc -- common/autotest_common.sh@972 -- # wait 112699 00:10:16.205 ************************************ 00:10:16.205 END TEST alias_rpc 00:10:16.205 ************************************ 00:10:16.205 00:10:16.205 real 0m5.368s 00:10:16.205 user 0m5.289s 00:10:16.205 sys 0m0.669s 00:10:16.205 21:23:49 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.205 21:23:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 
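The json_config_extra_key and alias_rpc runs above share one lifecycle: start spdk_tgt, wait until its RPC socket answers, drive it through scripts/rpc.py, then send SIGINT and poll with `kill -0` until the process exits (the repeated `sleep 0.5` iterations in the trace). A minimal sketch of that pattern, assuming the repo layout shown in this log; the readiness probe and retry counts are illustrative and stand in for the harness's own waitforlisten helper:

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK=/home/vagrant/spdk_repo/spdk      # repo root used throughout this log
  SOCK=/var/tmp/spdk.sock                # default RPC socket

  "$SPDK/build/bin/spdk_tgt" &           # launch the target in the background
  pid=$!

  # Readiness poll: assume any cheap RPC succeeding means the socket is up.
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done

  "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null   # sample RPC call

  # Shutdown as in the trace: SIGINT, then poll the PID up to 30 times.
  kill -SIGINT "$pid"
  for _ in $(seq 1 30); do
      kill -0 "$pid" 2>/dev/null || break
      sleep 0.5
  done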
00:10:16.205 21:23:49 -- common/autotest_common.sh@1142 -- # return 0 00:10:16.205 21:23:49 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:16.205 21:23:49 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:16.205 21:23:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:16.205 21:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.205 21:23:49 -- common/autotest_common.sh@10 -- # set +x 00:10:16.464 ************************************ 00:10:16.464 START TEST spdkcli_tcp 00:10:16.464 ************************************ 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:16.464 * Looking for test storage... 00:10:16.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=112846 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:16.464 21:23:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 112846 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 112846 ']' 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.464 21:23:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:16.464 [2024-07-15 21:23:49.792932] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
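The -m masks in these launches are plain core bitmaps: 0x1 pins a single reactor to core 0 (json_config, alias_rpc), 0x3 used by spdkcli_tcp just above selects cores 0 and 1, which is why two "Reactor started" lines follow, and 0xF used by event_perf further down selects cores 0-3. A quick, purely illustrative way to decode such a mask:

  mask=0x3                                 # try 0x1, 0x3 or 0xF from this log
  printf 'mask %s -> cores:' "$mask"
  for c in $(seq 0 31); do
      (( (mask >> c) & 1 )) && printf ' %d' "$c"
  done
  echo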
00:10:16.464 [2024-07-15 21:23:49.793178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112846 ] 00:10:16.723 [2024-07-15 21:23:49.960597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:16.981 [2024-07-15 21:23:50.244878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.981 [2024-07-15 21:23:50.244886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.359 21:23:51 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.359 21:23:51 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:10:18.359 21:23:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=112866 00:10:18.359 21:23:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:18.359 21:23:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:18.359 [ 00:10:18.359 "spdk_get_version", 00:10:18.359 "rpc_get_methods", 00:10:18.359 "keyring_get_keys", 00:10:18.359 "trace_get_info", 00:10:18.359 "trace_get_tpoint_group_mask", 00:10:18.359 "trace_disable_tpoint_group", 00:10:18.359 "trace_enable_tpoint_group", 00:10:18.359 "trace_clear_tpoint_mask", 00:10:18.359 "trace_set_tpoint_mask", 00:10:18.359 "framework_get_pci_devices", 00:10:18.359 "framework_get_config", 00:10:18.359 "framework_get_subsystems", 00:10:18.359 "iobuf_get_stats", 00:10:18.359 "iobuf_set_options", 00:10:18.359 "sock_get_default_impl", 00:10:18.360 "sock_set_default_impl", 00:10:18.360 "sock_impl_set_options", 00:10:18.360 "sock_impl_get_options", 00:10:18.360 "vmd_rescan", 00:10:18.360 "vmd_remove_device", 00:10:18.360 "vmd_enable", 00:10:18.360 "accel_get_stats", 00:10:18.360 "accel_set_options", 00:10:18.360 "accel_set_driver", 00:10:18.360 "accel_crypto_key_destroy", 00:10:18.360 "accel_crypto_keys_get", 00:10:18.360 "accel_crypto_key_create", 00:10:18.360 "accel_assign_opc", 00:10:18.360 "accel_get_module_info", 00:10:18.360 "accel_get_opc_assignments", 00:10:18.360 "notify_get_notifications", 00:10:18.360 "notify_get_types", 00:10:18.360 "bdev_get_histogram", 00:10:18.360 "bdev_enable_histogram", 00:10:18.360 "bdev_set_qos_limit", 00:10:18.360 "bdev_set_qd_sampling_period", 00:10:18.360 "bdev_get_bdevs", 00:10:18.360 "bdev_reset_iostat", 00:10:18.360 "bdev_get_iostat", 00:10:18.360 "bdev_examine", 00:10:18.360 "bdev_wait_for_examine", 00:10:18.360 "bdev_set_options", 00:10:18.360 "scsi_get_devices", 00:10:18.360 "thread_set_cpumask", 00:10:18.360 "framework_get_governor", 00:10:18.360 "framework_get_scheduler", 00:10:18.360 "framework_set_scheduler", 00:10:18.360 "framework_get_reactors", 00:10:18.360 "thread_get_io_channels", 00:10:18.360 "thread_get_pollers", 00:10:18.360 "thread_get_stats", 00:10:18.360 "framework_monitor_context_switch", 00:10:18.360 "spdk_kill_instance", 00:10:18.360 "log_enable_timestamps", 00:10:18.360 "log_get_flags", 00:10:18.360 "log_clear_flag", 00:10:18.360 "log_set_flag", 00:10:18.360 "log_get_level", 00:10:18.360 "log_set_level", 00:10:18.360 "log_get_print_level", 00:10:18.360 "log_set_print_level", 00:10:18.360 "framework_enable_cpumask_locks", 00:10:18.360 "framework_disable_cpumask_locks", 00:10:18.360 "framework_wait_init", 00:10:18.360 "framework_start_init", 00:10:18.360 
"virtio_blk_create_transport", 00:10:18.360 "virtio_blk_get_transports", 00:10:18.360 "vhost_controller_set_coalescing", 00:10:18.360 "vhost_get_controllers", 00:10:18.360 "vhost_delete_controller", 00:10:18.360 "vhost_create_blk_controller", 00:10:18.360 "vhost_scsi_controller_remove_target", 00:10:18.360 "vhost_scsi_controller_add_target", 00:10:18.360 "vhost_start_scsi_controller", 00:10:18.360 "vhost_create_scsi_controller", 00:10:18.360 "nbd_get_disks", 00:10:18.360 "nbd_stop_disk", 00:10:18.360 "nbd_start_disk", 00:10:18.360 "env_dpdk_get_mem_stats", 00:10:18.360 "nvmf_stop_mdns_prr", 00:10:18.360 "nvmf_publish_mdns_prr", 00:10:18.360 "nvmf_subsystem_get_listeners", 00:10:18.360 "nvmf_subsystem_get_qpairs", 00:10:18.360 "nvmf_subsystem_get_controllers", 00:10:18.360 "nvmf_get_stats", 00:10:18.360 "nvmf_get_transports", 00:10:18.360 "nvmf_create_transport", 00:10:18.360 "nvmf_get_targets", 00:10:18.360 "nvmf_delete_target", 00:10:18.360 "nvmf_create_target", 00:10:18.360 "nvmf_subsystem_allow_any_host", 00:10:18.360 "nvmf_subsystem_remove_host", 00:10:18.360 "nvmf_subsystem_add_host", 00:10:18.360 "nvmf_ns_remove_host", 00:10:18.360 "nvmf_ns_add_host", 00:10:18.360 "nvmf_subsystem_remove_ns", 00:10:18.360 "nvmf_subsystem_add_ns", 00:10:18.360 "nvmf_subsystem_listener_set_ana_state", 00:10:18.360 "nvmf_discovery_get_referrals", 00:10:18.360 "nvmf_discovery_remove_referral", 00:10:18.360 "nvmf_discovery_add_referral", 00:10:18.360 "nvmf_subsystem_remove_listener", 00:10:18.360 "nvmf_subsystem_add_listener", 00:10:18.360 "nvmf_delete_subsystem", 00:10:18.360 "nvmf_create_subsystem", 00:10:18.360 "nvmf_get_subsystems", 00:10:18.360 "nvmf_set_crdt", 00:10:18.360 "nvmf_set_config", 00:10:18.360 "nvmf_set_max_subsystems", 00:10:18.360 "iscsi_get_histogram", 00:10:18.360 "iscsi_enable_histogram", 00:10:18.360 "iscsi_set_options", 00:10:18.360 "iscsi_get_auth_groups", 00:10:18.360 "iscsi_auth_group_remove_secret", 00:10:18.360 "iscsi_auth_group_add_secret", 00:10:18.360 "iscsi_delete_auth_group", 00:10:18.360 "iscsi_create_auth_group", 00:10:18.360 "iscsi_set_discovery_auth", 00:10:18.360 "iscsi_get_options", 00:10:18.360 "iscsi_target_node_request_logout", 00:10:18.360 "iscsi_target_node_set_redirect", 00:10:18.360 "iscsi_target_node_set_auth", 00:10:18.360 "iscsi_target_node_add_lun", 00:10:18.360 "iscsi_get_stats", 00:10:18.360 "iscsi_get_connections", 00:10:18.360 "iscsi_portal_group_set_auth", 00:10:18.360 "iscsi_start_portal_group", 00:10:18.360 "iscsi_delete_portal_group", 00:10:18.360 "iscsi_create_portal_group", 00:10:18.360 "iscsi_get_portal_groups", 00:10:18.360 "iscsi_delete_target_node", 00:10:18.360 "iscsi_target_node_remove_pg_ig_maps", 00:10:18.360 "iscsi_target_node_add_pg_ig_maps", 00:10:18.360 "iscsi_create_target_node", 00:10:18.360 "iscsi_get_target_nodes", 00:10:18.360 "iscsi_delete_initiator_group", 00:10:18.360 "iscsi_initiator_group_remove_initiators", 00:10:18.360 "iscsi_initiator_group_add_initiators", 00:10:18.360 "iscsi_create_initiator_group", 00:10:18.360 "iscsi_get_initiator_groups", 00:10:18.360 "keyring_linux_set_options", 00:10:18.360 "keyring_file_remove_key", 00:10:18.360 "keyring_file_add_key", 00:10:18.360 "iaa_scan_accel_module", 00:10:18.360 "dsa_scan_accel_module", 00:10:18.360 "ioat_scan_accel_module", 00:10:18.360 "accel_error_inject_error", 00:10:18.360 "bdev_iscsi_delete", 00:10:18.360 "bdev_iscsi_create", 00:10:18.360 "bdev_iscsi_set_options", 00:10:18.360 "bdev_virtio_attach_controller", 00:10:18.360 "bdev_virtio_scsi_get_devices", 00:10:18.360 
"bdev_virtio_detach_controller", 00:10:18.360 "bdev_virtio_blk_set_hotplug", 00:10:18.360 "bdev_ftl_set_property", 00:10:18.360 "bdev_ftl_get_properties", 00:10:18.360 "bdev_ftl_get_stats", 00:10:18.360 "bdev_ftl_unmap", 00:10:18.360 "bdev_ftl_unload", 00:10:18.360 "bdev_ftl_delete", 00:10:18.360 "bdev_ftl_load", 00:10:18.360 "bdev_ftl_create", 00:10:18.360 "bdev_aio_delete", 00:10:18.360 "bdev_aio_rescan", 00:10:18.360 "bdev_aio_create", 00:10:18.360 "blobfs_create", 00:10:18.360 "blobfs_detect", 00:10:18.360 "blobfs_set_cache_size", 00:10:18.360 "bdev_zone_block_delete", 00:10:18.360 "bdev_zone_block_create", 00:10:18.360 "bdev_delay_delete", 00:10:18.360 "bdev_delay_create", 00:10:18.360 "bdev_delay_update_latency", 00:10:18.360 "bdev_split_delete", 00:10:18.360 "bdev_split_create", 00:10:18.360 "bdev_error_inject_error", 00:10:18.360 "bdev_error_delete", 00:10:18.360 "bdev_error_create", 00:10:18.360 "bdev_raid_set_options", 00:10:18.360 "bdev_raid_remove_base_bdev", 00:10:18.360 "bdev_raid_add_base_bdev", 00:10:18.360 "bdev_raid_delete", 00:10:18.360 "bdev_raid_create", 00:10:18.360 "bdev_raid_get_bdevs", 00:10:18.360 "bdev_lvol_set_parent_bdev", 00:10:18.360 "bdev_lvol_set_parent", 00:10:18.360 "bdev_lvol_check_shallow_copy", 00:10:18.360 "bdev_lvol_start_shallow_copy", 00:10:18.360 "bdev_lvol_grow_lvstore", 00:10:18.360 "bdev_lvol_get_lvols", 00:10:18.360 "bdev_lvol_get_lvstores", 00:10:18.360 "bdev_lvol_delete", 00:10:18.360 "bdev_lvol_set_read_only", 00:10:18.360 "bdev_lvol_resize", 00:10:18.360 "bdev_lvol_decouple_parent", 00:10:18.360 "bdev_lvol_inflate", 00:10:18.360 "bdev_lvol_rename", 00:10:18.360 "bdev_lvol_clone_bdev", 00:10:18.360 "bdev_lvol_clone", 00:10:18.360 "bdev_lvol_snapshot", 00:10:18.360 "bdev_lvol_create", 00:10:18.360 "bdev_lvol_delete_lvstore", 00:10:18.360 "bdev_lvol_rename_lvstore", 00:10:18.360 "bdev_lvol_create_lvstore", 00:10:18.360 "bdev_passthru_delete", 00:10:18.360 "bdev_passthru_create", 00:10:18.360 "bdev_nvme_cuse_unregister", 00:10:18.360 "bdev_nvme_cuse_register", 00:10:18.360 "bdev_opal_new_user", 00:10:18.360 "bdev_opal_set_lock_state", 00:10:18.360 "bdev_opal_delete", 00:10:18.360 "bdev_opal_get_info", 00:10:18.360 "bdev_opal_create", 00:10:18.360 "bdev_nvme_opal_revert", 00:10:18.360 "bdev_nvme_opal_init", 00:10:18.360 "bdev_nvme_send_cmd", 00:10:18.360 "bdev_nvme_get_path_iostat", 00:10:18.360 "bdev_nvme_get_mdns_discovery_info", 00:10:18.360 "bdev_nvme_stop_mdns_discovery", 00:10:18.360 "bdev_nvme_start_mdns_discovery", 00:10:18.360 "bdev_nvme_set_multipath_policy", 00:10:18.360 "bdev_nvme_set_preferred_path", 00:10:18.360 "bdev_nvme_get_io_paths", 00:10:18.360 "bdev_nvme_remove_error_injection", 00:10:18.360 "bdev_nvme_add_error_injection", 00:10:18.360 "bdev_nvme_get_discovery_info", 00:10:18.360 "bdev_nvme_stop_discovery", 00:10:18.360 "bdev_nvme_start_discovery", 00:10:18.360 "bdev_nvme_get_controller_health_info", 00:10:18.360 "bdev_nvme_disable_controller", 00:10:18.360 "bdev_nvme_enable_controller", 00:10:18.360 "bdev_nvme_reset_controller", 00:10:18.360 "bdev_nvme_get_transport_statistics", 00:10:18.360 "bdev_nvme_apply_firmware", 00:10:18.360 "bdev_nvme_detach_controller", 00:10:18.360 "bdev_nvme_get_controllers", 00:10:18.360 "bdev_nvme_attach_controller", 00:10:18.360 "bdev_nvme_set_hotplug", 00:10:18.360 "bdev_nvme_set_options", 00:10:18.360 "bdev_null_resize", 00:10:18.360 "bdev_null_delete", 00:10:18.360 "bdev_null_create", 00:10:18.360 "bdev_malloc_delete", 00:10:18.360 "bdev_malloc_create" 00:10:18.360 ] 00:10:18.360 21:23:51 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 21:23:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:18.360 21:23:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 112846 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 112846 ']' 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 112846 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112846 00:10:18.360 killing process with pid 112846 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112846' 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 112846 00:10:18.360 21:23:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 112846 00:10:21.673 ************************************ 00:10:21.673 END TEST spdkcli_tcp 00:10:21.673 ************************************ 00:10:21.673 00:10:21.673 real 0m5.206s 00:10:21.673 user 0m9.114s 00:10:21.673 sys 0m0.731s 00:10:21.673 21:23:54 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.673 21:23:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:21.673 21:23:54 -- common/autotest_common.sh@1142 -- # return 0 00:10:21.673 21:23:54 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:21.673 21:23:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:21.673 21:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.673 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:10:21.673 ************************************ 00:10:21.673 START TEST dpdk_mem_utility 00:10:21.673 ************************************ 00:10:21.673 21:23:54 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:21.673 * Looking for test storage... 
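The spdkcli_tcp run that just finished checks the same JSON-RPC surface over TCP instead of the Unix socket: socat bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py is pointed at 127.0.0.1:9998. A minimal sketch using the same two commands as the trace, assuming an spdk_tgt is already listening on the Unix socket; the fixed sleep stands in for the test's own synchronization:

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk

  # Bridge TCP 9998 to the target's Unix-domain RPC socket (as in tcp.sh@30).
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  sleep 0.5                                # crude wait for the listener to come up

  # Same client call as the test: 100 retries, 2 s timeout, TCP transport.
  "$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid" 2>/dev/null || true    # socat may have exited already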
00:10:21.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:21.673 21:23:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:21.673 21:23:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=112983 00:10:21.673 21:23:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:21.673 21:23:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 112983 00:10:21.673 21:23:54 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 112983 ']' 00:10:21.673 21:23:54 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.673 21:23:54 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.673 21:23:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.673 21:23:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.673 21:23:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:21.931 [2024-07-15 21:23:55.045443] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:10:21.931 [2024-07-15 21:23:55.045675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112983 ] 00:10:21.932 [2024-07-15 21:23:55.211374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.191 [2024-07-15 21:23:55.497068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.570 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.570 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:10:23.570 21:23:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:23.570 21:23:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:23.570 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.570 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:23.570 { 00:10:23.570 "filename": "/tmp/spdk_mem_dump.txt" 00:10:23.570 } 00:10:23.570 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.570 21:23:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:23.570 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:23.570 1 heaps totaling size 820.000000 MiB 00:10:23.570 size: 820.000000 MiB heap id: 0 00:10:23.570 end heaps---------- 00:10:23.570 8 mempools totaling size 598.116089 MiB 00:10:23.570 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:23.570 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:23.570 size: 84.521057 MiB name: bdev_io_112983 00:10:23.570 size: 51.011292 MiB name: evtpool_112983 00:10:23.570 size: 50.003479 MiB name: msgpool_112983 00:10:23.570 size: 21.763794 MiB name: PDU_Pool 00:10:23.570 size: 19.513306 MiB name: 
SCSI_TASK_Pool 00:10:23.570 size: 0.026123 MiB name: Session_Pool 00:10:23.570 end mempools------- 00:10:23.570 6 memzones totaling size 4.142822 MiB 00:10:23.570 size: 1.000366 MiB name: RG_ring_0_112983 00:10:23.570 size: 1.000366 MiB name: RG_ring_1_112983 00:10:23.570 size: 1.000366 MiB name: RG_ring_4_112983 00:10:23.570 size: 1.000366 MiB name: RG_ring_5_112983 00:10:23.570 size: 0.125366 MiB name: RG_ring_2_112983 00:10:23.570 size: 0.015991 MiB name: RG_ring_3_112983 00:10:23.570 end memzones------- 00:10:23.570 21:23:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:23.570 heap id: 0 total size: 820.000000 MiB number of busy elements: 230 number of free elements: 18 00:10:23.570 list of free elements. size: 18.468750 MiB 00:10:23.570 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:23.570 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:23.570 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:23.570 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:23.570 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:23.570 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:23.570 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:23.570 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:23.570 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:23.570 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:23.570 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:23.570 element at address: 0x200000200000 with size: 0.834106 MiB 00:10:23.570 element at address: 0x20001b000000 with size: 0.560974 MiB 00:10:23.570 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:23.570 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:23.570 element at address: 0x200013800000 with size: 0.468140 MiB 00:10:23.570 element at address: 0x200028400000 with size: 0.399963 MiB 00:10:23.570 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:23.570 list of standard malloc elements. 
size: 199.266846 MiB 00:10:23.570 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:23.570 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:23.570 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:23.570 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:23.570 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:23.570 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:23.570 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:23.570 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:23.570 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:23.570 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:23.570 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:23.570 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:23.570 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:23.570 element at address: 0x200003eff000 with size: 0.000244 MiB 
00:10:23.570 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:23.570 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013877d80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013877e80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013877f80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013878080 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200013878580 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b08f9c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:10:23.571 element at 
address: 0x20001b08fcc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b092dc0 
with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200028466640 with size: 0.000244 MiB 00:10:23.571 element at address: 0x200028466740 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846d400 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846dd80 with size: 0.000244 MiB 
00:10:23.571 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:23.571 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:23.572 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:23.572 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:23.572 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:23.572 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:23.572 list of memzone associated elements. 
size: 602.264404 MiB 00:10:23.572 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:23.572 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:23.572 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:23.572 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:23.572 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:23.572 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_112983_0 00:10:23.572 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:23.572 associated memzone info: size: 48.002930 MiB name: MP_evtpool_112983_0 00:10:23.572 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:23.572 associated memzone info: size: 48.002930 MiB name: MP_msgpool_112983_0 00:10:23.572 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:23.572 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:23.572 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:23.572 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:23.572 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:23.572 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_112983 00:10:23.572 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:23.572 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_112983 00:10:23.572 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:23.572 associated memzone info: size: 1.007996 MiB name: MP_evtpool_112983 00:10:23.572 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:23.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:23.572 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:23.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:23.572 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:23.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:23.572 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:23.572 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:23.572 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:23.572 associated memzone info: size: 1.000366 MiB name: RG_ring_0_112983 00:10:23.572 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:23.572 associated memzone info: size: 1.000366 MiB name: RG_ring_1_112983 00:10:23.572 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:23.572 associated memzone info: size: 1.000366 MiB name: RG_ring_4_112983 00:10:23.572 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:23.572 associated memzone info: size: 1.000366 MiB name: RG_ring_5_112983 00:10:23.572 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:23.572 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_112983 00:10:23.572 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:23.572 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:23.572 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:23.572 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:23.572 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:23.572 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:23.572 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:23.572 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_112983 00:10:23.572 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:23.572 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:23.572 element at address: 0x200028466840 with size: 0.023804 MiB 00:10:23.572 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:23.572 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:23.572 associated memzone info: size: 0.015991 MiB name: RG_ring_3_112983 00:10:23.572 element at address: 0x20002846c9c0 with size: 0.002502 MiB 00:10:23.572 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:23.572 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:23.572 associated memzone info: size: 0.000183 MiB name: MP_msgpool_112983 00:10:23.572 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:23.572 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_112983 00:10:23.572 element at address: 0x20002846d500 with size: 0.000366 MiB 00:10:23.572 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:23.572 21:23:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:23.572 21:23:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 112983 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 112983 ']' 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 112983 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112983 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112983' 00:10:23.572 killing process with pid 112983 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 112983 00:10:23.572 21:23:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 112983 00:10:26.876 ************************************ 00:10:26.876 END TEST dpdk_mem_utility 00:10:26.876 ************************************ 00:10:26.876 00:10:26.876 real 0m4.995s 00:10:26.876 user 0m4.759s 00:10:26.876 sys 0m0.646s 00:10:26.876 21:23:59 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.876 21:23:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 21:23:59 -- common/autotest_common.sh@1142 -- # return 0 00:10:26.876 21:23:59 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:26.876 21:23:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:26.876 21:23:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.876 21:23:59 -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 ************************************ 00:10:26.876 START TEST event 00:10:26.876 ************************************ 00:10:26.876 21:23:59 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:26.876 * Looking for test storage... 
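The dpdk_mem_utility run above reduces to two commands: the env_dpdk_get_mem_stats RPC, which has the target write its DPDK memory state to the file named in the reply (/tmp/spdk_mem_dump.txt here), and scripts/dpdk_mem_info.py, which renders that dump as the heap/mempool/memzone summary shown in the log, with -m 0 adding the element-level listing for heap 0. A minimal sketch, assuming an spdk_tgt is already running on the default socket:

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk

  # Ask the running target to dump its DPDK memory state; the reply names the dump file.
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones, then the per-element view of heap 0.
  "$SPDK/scripts/dpdk_mem_info.py"
  "$SPDK/scripts/dpdk_mem_info.py" -m 0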
00:10:26.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:26.876 21:24:00 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:26.876 21:24:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:26.876 21:24:00 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:26.876 21:24:00 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:26.876 21:24:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.876 21:24:00 event -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 ************************************ 00:10:26.876 START TEST event_perf 00:10:26.876 ************************************ 00:10:26.876 21:24:00 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:26.876 Running I/O for 1 seconds...[2024-07-15 21:24:00.093948] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:10:26.876 [2024-07-15 21:24:00.094614] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113119 ] 00:10:27.156 [2024-07-15 21:24:00.273035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.156 [2024-07-15 21:24:00.483239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.156 [2024-07-15 21:24:00.483340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.156 [2024-07-15 21:24:00.483549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.156 [2024-07-15 21:24:00.483566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.535 Running I/O for 1 seconds... 00:10:28.535 lcore 0: 192137 00:10:28.535 lcore 1: 192135 00:10:28.535 lcore 2: 192137 00:10:28.535 lcore 3: 192136 00:10:28.535 done. 00:10:28.535 ************************************ 00:10:28.535 END TEST event_perf 00:10:28.535 ************************************ 00:10:28.535 00:10:28.536 real 0m1.839s 00:10:28.536 user 0m4.578s 00:10:28.536 sys 0m0.152s 00:10:28.536 21:24:01 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.536 21:24:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:28.794 21:24:01 event -- common/autotest_common.sh@1142 -- # return 0 00:10:28.794 21:24:01 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:28.794 21:24:01 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:28.794 21:24:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.794 21:24:01 event -- common/autotest_common.sh@10 -- # set +x 00:10:28.794 ************************************ 00:10:28.794 START TEST event_reactor 00:10:28.794 ************************************ 00:10:28.794 21:24:01 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:28.794 [2024-07-15 21:24:01.993553] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:10:28.794 [2024-07-15 21:24:01.993784] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113172 ] 00:10:28.794 [2024-07-15 21:24:02.156701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.053 [2024-07-15 21:24:02.356786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.429 test_start 00:10:30.429 oneshot 00:10:30.429 tick 100 00:10:30.429 tick 100 00:10:30.429 tick 250 00:10:30.429 tick 100 00:10:30.429 tick 100 00:10:30.429 tick 100 00:10:30.429 tick 250 00:10:30.429 tick 500 00:10:30.429 tick 100 00:10:30.429 tick 100 00:10:30.429 tick 250 00:10:30.429 tick 100 00:10:30.429 tick 100 00:10:30.429 test_end 00:10:30.429 ************************************ 00:10:30.429 END TEST event_reactor 00:10:30.429 ************************************ 00:10:30.429 00:10:30.429 real 0m1.763s 00:10:30.429 user 0m1.557s 00:10:30.429 sys 0m0.105s 00:10:30.429 21:24:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.429 21:24:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:30.429 21:24:03 event -- common/autotest_common.sh@1142 -- # return 0 00:10:30.429 21:24:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:30.429 21:24:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:30.429 21:24:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.429 21:24:03 event -- common/autotest_common.sh@10 -- # set +x 00:10:30.429 ************************************ 00:10:30.429 START TEST event_reactor_perf 00:10:30.429 ************************************ 00:10:30.429 21:24:03 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:30.688 [2024-07-15 21:24:03.826129] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:10:30.688 [2024-07-15 21:24:03.826329] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113214 ] 00:10:30.688 [2024-07-15 21:24:03.988920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.947 [2024-07-15 21:24:04.196519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.342 test_start 00:10:32.342 test_end 00:10:32.342 Performance: 407907 events per second 00:10:32.342 ************************************ 00:10:32.342 END TEST event_reactor_perf 00:10:32.342 ************************************ 00:10:32.342 00:10:32.342 real 0m1.801s 00:10:32.342 user 0m1.580s 00:10:32.342 sys 0m0.120s 00:10:32.342 21:24:05 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.342 21:24:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:32.342 21:24:05 event -- common/autotest_common.sh@1142 -- # return 0 00:10:32.342 21:24:05 event -- event/event.sh@49 -- # uname -s 00:10:32.342 21:24:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:32.342 21:24:05 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:32.342 21:24:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:32.342 21:24:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.342 21:24:05 event -- common/autotest_common.sh@10 -- # set +x 00:10:32.342 ************************************ 00:10:32.342 START TEST event_scheduler 00:10:32.342 ************************************ 00:10:32.342 21:24:05 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:32.602 * Looking for test storage... 00:10:32.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:32.602 21:24:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:32.602 21:24:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=113299 00:10:32.602 21:24:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:32.602 21:24:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 113299 00:10:32.602 21:24:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:32.602 21:24:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 113299 ']' 00:10:32.602 21:24:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.602 21:24:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.602 21:24:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:32.602 21:24:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.602 21:24:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:32.602 [2024-07-15 21:24:05.835274] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:10:32.602 [2024-07-15 21:24:05.835521] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113299 ] 00:10:32.862 [2024-07-15 21:24:06.015392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.862 [2024-07-15 21:24:06.228181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.862 [2024-07-15 21:24:06.228438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.862 [2024-07-15 21:24:06.228346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.862 [2024-07-15 21:24:06.228445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.431 21:24:06 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.431 21:24:06 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:10:33.431 21:24:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:33.431 21:24:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.431 21:24:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:33.431 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:33.431 POWER: Cannot set governor of lcore 0 to userspace 00:10:33.431 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:33.431 POWER: Cannot set governor of lcore 0 to performance 00:10:33.431 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:33.431 POWER: Cannot set governor of lcore 0 to userspace 00:10:33.431 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:33.431 POWER: Cannot set governor of lcore 0 to userspace 00:10:33.431 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:33.431 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:33.431 POWER: Unable to set Power Management Environment for lcore 0 00:10:33.431 [2024-07-15 21:24:06.677947] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:33.431 [2024-07-15 21:24:06.678013] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:33.431 [2024-07-15 21:24:06.678063] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:10:33.431 [2024-07-15 21:24:06.678116] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:33.431 [2024-07-15 21:24:06.678167] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:33.431 [2024-07-15 21:24:06.678207] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:33.431 21:24:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.431 21:24:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:33.431 21:24:06 event.event_scheduler -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.431 21:24:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 [2024-07-15 21:24:06.981970] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:33.691 21:24:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.691 21:24:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:33.691 21:24:06 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:33.691 21:24:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.691 21:24:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 ************************************ 00:10:33.691 START TEST scheduler_create_thread 00:10:33.691 ************************************ 00:10:33.691 21:24:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:10:33.691 21:24:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:33.691 21:24:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.691 21:24:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 2 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 3 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 4 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 5 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 6 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 7 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.691 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.951 8 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.951 9 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.951 10 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # 
set +x 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.951 21:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:35.333 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.333 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:35.333 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:35.333 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.333 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:36.273 ************************************ 00:10:36.273 END TEST scheduler_create_thread 00:10:36.273 ************************************ 00:10:36.273 21:24:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.273 00:10:36.273 real 0m2.630s 00:10:36.273 user 0m0.010s 00:10:36.273 sys 0m0.007s 00:10:36.273 21:24:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.273 21:24:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:10:36.530 21:24:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:36.530 21:24:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 113299 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 113299 ']' 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 113299 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113299 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:36.530 killing process with pid 113299 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113299' 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 113299 00:10:36.530 21:24:09 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 113299 00:10:36.788 [2024-07-15 21:24:10.108191] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:10:38.162 ************************************ 00:10:38.162 END TEST event_scheduler 00:10:38.162 ************************************ 00:10:38.162 00:10:38.162 real 0m5.708s 00:10:38.162 user 0m9.425s 00:10:38.162 sys 0m0.446s 00:10:38.162 21:24:11 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.162 21:24:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:38.162 21:24:11 event -- common/autotest_common.sh@1142 -- # return 0 00:10:38.162 21:24:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:38.162 21:24:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:38.162 21:24:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:38.162 21:24:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.162 21:24:11 event -- common/autotest_common.sh@10 -- # set +x 00:10:38.162 ************************************ 00:10:38.162 START TEST app_repeat 00:10:38.162 ************************************ 00:10:38.162 21:24:11 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=113433 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 113433' 00:10:38.162 Process app_repeat pid: 113433 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:38.162 spdk_app_start Round 0 00:10:38.162 21:24:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113433 /var/tmp/spdk-nbd.sock 00:10:38.162 21:24:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 113433 ']' 00:10:38.162 21:24:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:38.162 21:24:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.162 21:24:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:38.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:38.162 21:24:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.162 21:24:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:38.162 [2024-07-15 21:24:11.491108] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:10:38.162 [2024-07-15 21:24:11.491328] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113433 ] 00:10:38.421 [2024-07-15 21:24:11.658923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:38.679 [2024-07-15 21:24:11.833656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.679 [2024-07-15 21:24:11.833662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.256 21:24:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.256 21:24:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:39.256 21:24:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:39.529 Malloc0 00:10:39.529 21:24:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:39.788 Malloc1 00:10:39.788 21:24:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:39.788 21:24:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:39.788 /dev/nbd0 00:10:39.788 21:24:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:39.788 21:24:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:39.788 21:24:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:39.788 21:24:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:39.788 21:24:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:39.788 21:24:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:39.788 21:24:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:39.788 21:24:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:39.788 21:24:13 
event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:39.788 21:24:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:39.788 21:24:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:39.788 1+0 records in 00:10:39.788 1+0 records out 00:10:39.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490403 s, 8.4 MB/s 00:10:39.789 21:24:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:40.049 21:24:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:40.049 21:24:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:40.049 21:24:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:40.049 /dev/nbd1 00:10:40.049 21:24:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:40.049 21:24:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:40.049 21:24:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:40.050 21:24:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:40.050 1+0 records in 00:10:40.050 1+0 records out 00:10:40.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663427 s, 6.2 MB/s 00:10:40.050 21:24:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:40.314 21:24:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:40.314 21:24:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:40.314 21:24:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:40.314 21:24:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:40.314 { 00:10:40.314 "nbd_device": "/dev/nbd0", 00:10:40.314 "bdev_name": "Malloc0" 00:10:40.314 }, 00:10:40.314 { 00:10:40.314 "nbd_device": "/dev/nbd1", 00:10:40.314 "bdev_name": "Malloc1" 00:10:40.314 } 00:10:40.314 ]' 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:40.314 { 00:10:40.314 "nbd_device": "/dev/nbd0", 00:10:40.314 "bdev_name": "Malloc0" 00:10:40.314 }, 00:10:40.314 { 00:10:40.314 "nbd_device": "/dev/nbd1", 00:10:40.314 "bdev_name": "Malloc1" 00:10:40.314 } 00:10:40.314 ]' 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:40.314 /dev/nbd1' 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:40.314 /dev/nbd1' 00:10:40.314 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:40.573 256+0 records in 00:10:40.573 256+0 records out 00:10:40.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124575 s, 84.2 MB/s 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:40.573 256+0 records in 00:10:40.573 256+0 records out 00:10:40.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231417 s, 45.3 MB/s 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:40.573 256+0 records in 00:10:40.573 256+0 records out 00:10:40.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0368283 s, 28.5 MB/s 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:40.573 21:24:13 event.app_repeat -- 
bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.573 21:24:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.833 21:24:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.092 21:24:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:41.352 21:24:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:41.352 21:24:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:41.611 21:24:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:42.990 [2024-07-15 21:24:16.119718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:42.990 [2024-07-15 21:24:16.293838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.990 [2024-07-15 21:24:16.293842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.250 [2024-07-15 21:24:16.439001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:43.250 [2024-07-15 21:24:16.439080] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:44.625 spdk_app_start Round 1 00:10:44.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:44.625 21:24:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:44.625 21:24:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:44.625 21:24:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113433 /var/tmp/spdk-nbd.sock 00:10:44.625 21:24:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 113433 ']' 00:10:44.625 21:24:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:44.625 21:24:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.625 21:24:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:44.625 21:24:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.625 21:24:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:44.883 21:24:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.883 21:24:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:44.883 21:24:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:45.141 Malloc0 00:10:45.141 21:24:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:45.399 Malloc1 00:10:45.399 21:24:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:45.399 21:24:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:45.657 /dev/nbd0 00:10:45.657 21:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:45.657 21:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:45.657 1+0 records in 00:10:45.657 1+0 records out 00:10:45.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031511 s, 13.0 MB/s 
00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.657 21:24:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:45.657 21:24:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.657 21:24:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:45.657 21:24:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:45.657 21:24:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.657 21:24:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:45.657 21:24:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:45.965 /dev/nbd1 00:10:45.965 21:24:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:45.965 21:24:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:45.965 21:24:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:45.965 1+0 records in 00:10:45.965 1+0 records out 00:10:45.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405601 s, 10.1 MB/s 00:10:45.966 21:24:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.966 21:24:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:45.966 21:24:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.966 21:24:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:45.966 21:24:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:45.966 21:24:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.966 21:24:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:45.966 21:24:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:45.966 21:24:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.966 21:24:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:46.224 { 00:10:46.224 "nbd_device": "/dev/nbd0", 00:10:46.224 "bdev_name": "Malloc0" 00:10:46.224 }, 00:10:46.224 { 00:10:46.224 "nbd_device": "/dev/nbd1", 00:10:46.224 "bdev_name": "Malloc1" 00:10:46.224 } 00:10:46.224 ]' 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:10:46.224 { 00:10:46.224 "nbd_device": "/dev/nbd0", 00:10:46.224 "bdev_name": "Malloc0" 00:10:46.224 }, 00:10:46.224 { 00:10:46.224 "nbd_device": "/dev/nbd1", 00:10:46.224 "bdev_name": "Malloc1" 00:10:46.224 } 00:10:46.224 ]' 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:46.224 /dev/nbd1' 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:46.224 /dev/nbd1' 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:46.224 256+0 records in 00:10:46.224 256+0 records out 00:10:46.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522185 s, 201 MB/s 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.224 21:24:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:46.482 256+0 records in 00:10:46.482 256+0 records out 00:10:46.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250439 s, 41.9 MB/s 00:10:46.482 21:24:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.482 21:24:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:46.482 256+0 records in 00:10:46.482 256+0 records out 00:10:46.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299962 s, 35.0 MB/s 00:10:46.482 21:24:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:46.482 21:24:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:46.482 21:24:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:46.482 21:24:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:46.482 21:24:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:46.482 21:24:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.483 21:24:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.741 21:24:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.741 21:24:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.000 21:24:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:47.000 21:24:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:47.000 21:24:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.258 21:24:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:47.258 21:24:20 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '' 00:10:47.258 21:24:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.258 21:24:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:47.258 21:24:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:47.258 21:24:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:47.258 21:24:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:47.258 21:24:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:47.258 21:24:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:47.258 21:24:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:47.516 21:24:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:48.920 [2024-07-15 21:24:22.025679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.920 [2024-07-15 21:24:22.204449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.920 [2024-07-15 21:24:22.204454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.179 [2024-07-15 21:24:22.358965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:49.179 [2024-07-15 21:24:22.359107] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:50.555 spdk_app_start Round 2 00:10:50.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:50.555 21:24:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:50.555 21:24:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:50.555 21:24:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113433 /var/tmp/spdk-nbd.sock 00:10:50.555 21:24:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 113433 ']' 00:10:50.555 21:24:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:50.555 21:24:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.555 21:24:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
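
At this point the trace has just run the nbd_get_count step: nbd_get_disks returned an empty JSON list because both NBD disks were already stopped, jq extracted no .nbd_device entries, and grep -c reported 0, so the '[' 0 -ne 0 ']' guard passed and the app could be killed. A minimal standalone sketch of that counting step, with the rpc.py path and socket name taken from the trace (the variable names mirror the traced ones; everything else is an assumption):

    #!/usr/bin/env bash
    # Count the NBD devices the SPDK app currently exports: fetch the disk list
    # over the RPC socket, pull out the .nbd_device fields with jq, and count the
    # /dev/nbd entries.  "|| true" mirrors the traced "true" step so an empty
    # list (grep exit status 1) does not abort a script running under "set -e".
    set -euo pipefail

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path from the trace
    sock=/var/tmp/spdk-nbd.sock

    nbd_disks_json=$("$rpc_py" -s "$sock" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)

    echo "active NBD devices: $count"
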
00:10:50.555 21:24:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.555 21:24:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:50.815 21:24:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.815 21:24:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:50.815 21:24:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:51.073 Malloc0 00:10:51.073 21:24:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:51.331 Malloc1 00:10:51.331 21:24:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.331 21:24:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:51.590 /dev/nbd0 00:10:51.590 21:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:51.590 21:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:51.590 1+0 records in 00:10:51.590 1+0 records out 00:10:51.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379484 s, 10.8 MB/s 
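
The dd output just above comes from the waitfornbd helper: it polls /proc/partitions for the new device name (up to 20 tries, as in the traced loop) and then reads one 4 KiB block with O_DIRECT to prove the device answers I/O. A sketch of that pattern; the retry delay, the error messages, and the mktemp handling are additions, the grep/dd/stat steps follow the trace:

    #!/usr/bin/env bash
    set -euo pipefail

    waitfornbd() {
        local nbd_name=$1 i size tmp
        tmp=$(mktemp)

        # Wait for the kernel to list the device, e.g. "nbd0", in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1                     # assumed pause between retries
        done
        if ((i > 20)); then
            echo "$nbd_name never showed up in /proc/partitions" >&2
            rm -f "$tmp"
            return 1
        fi

        # One direct-I/O read; require a non-empty copy, as in '[' 4096 '!=' 0 ']'.
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }

    waitfornbd nbd0
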
00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:51.590 21:24:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:51.590 21:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.590 21:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.590 21:24:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:51.848 /dev/nbd1 00:10:51.848 21:24:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:51.848 21:24:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:51.848 21:24:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:51.848 21:24:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:51.848 21:24:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:51.848 21:24:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:51.848 21:24:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:51.848 21:24:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:51.848 21:24:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:51.849 21:24:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:51.849 21:24:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:51.849 1+0 records in 00:10:51.849 1+0 records out 00:10:51.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367551 s, 11.1 MB/s 00:10:51.849 21:24:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.849 21:24:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:51.849 21:24:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.849 21:24:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:51.849 21:24:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:51.849 21:24:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.849 21:24:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.849 21:24:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:51.849 21:24:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.849 21:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:52.107 21:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:52.107 { 00:10:52.107 "nbd_device": "/dev/nbd0", 00:10:52.107 "bdev_name": "Malloc0" 00:10:52.107 }, 00:10:52.107 { 00:10:52.107 "nbd_device": "/dev/nbd1", 00:10:52.107 "bdev_name": "Malloc1" 00:10:52.107 } 00:10:52.107 ]' 00:10:52.107 21:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
jq -r '.[] | .nbd_device' 00:10:52.107 21:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:52.108 { 00:10:52.108 "nbd_device": "/dev/nbd0", 00:10:52.108 "bdev_name": "Malloc0" 00:10:52.108 }, 00:10:52.108 { 00:10:52.108 "nbd_device": "/dev/nbd1", 00:10:52.108 "bdev_name": "Malloc1" 00:10:52.108 } 00:10:52.108 ]' 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:52.108 /dev/nbd1' 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:52.108 /dev/nbd1' 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:52.108 256+0 records in 00:10:52.108 256+0 records out 00:10:52.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141858 s, 73.9 MB/s 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:52.108 256+0 records in 00:10:52.108 256+0 records out 00:10:52.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264394 s, 39.7 MB/s 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:52.108 256+0 records in 00:10:52.108 256+0 records out 00:10:52.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321811 s, 32.6 MB/s 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.108 21:24:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.367 21:24:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.627 21:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:52.886 21:24:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:52.886 21:24:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:52.886 21:24:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:53.145 21:24:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:53.145 21:24:26 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '' 00:10:53.145 21:24:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:53.145 21:24:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:53.145 21:24:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:53.145 21:24:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:53.145 21:24:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:53.145 21:24:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:53.145 21:24:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:53.145 21:24:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:53.405 21:24:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:54.786 [2024-07-15 21:24:27.932506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:55.044 [2024-07-15 21:24:28.176433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.044 [2024-07-15 21:24:28.176437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.044 [2024-07-15 21:24:28.362075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:55.044 [2024-07-15 21:24:28.362246] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:56.420 21:24:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 113433 /var/tmp/spdk-nbd.sock 00:10:56.420 21:24:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 113433 ']' 00:10:56.420 21:24:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:56.420 21:24:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:56.420 21:24:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
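
The "Waiting for process to start up and listen on UNIX domain socket ..." message traced here (waitforlisten 113433 against /var/tmp/spdk-nbd.sock for Round 3) comes from a waitforlisten-style helper. A sketch of the idea: the pid check, the 100-retry default, and the message are taken from the trace; the probe RPC (rpc_get_methods), the retry delay, and the failure handling are assumptions:

    #!/usr/bin/env bash
    set -euo pipefail

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path from the trace

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo "process $pid exited before listening on $rpc_addr" >&2
                return 1
            fi
            if "$rpc_py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
                return 0            # the app answered an RPC, so it is ready
            fi
            sleep 0.1
        done
        return 1
    }

    # usage as in the trace: waitforlisten 113433 /var/tmp/spdk-nbd.sock
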
00:10:56.420 21:24:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.420 21:24:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:56.678 21:24:29 event.app_repeat -- event/event.sh@39 -- # killprocess 113433 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 113433 ']' 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 113433 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113433 00:10:56.678 killing process with pid 113433 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113433' 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@967 -- # kill 113433 00:10:56.678 21:24:29 event.app_repeat -- common/autotest_common.sh@972 -- # wait 113433 00:10:58.081 spdk_app_start is called in Round 0. 00:10:58.081 Shutdown signal received, stop current app iteration 00:10:58.081 Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 reinitialization... 00:10:58.081 spdk_app_start is called in Round 1. 00:10:58.081 Shutdown signal received, stop current app iteration 00:10:58.081 Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 reinitialization... 00:10:58.081 spdk_app_start is called in Round 2. 00:10:58.081 Shutdown signal received, stop current app iteration 00:10:58.081 Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 reinitialization... 00:10:58.081 spdk_app_start is called in Round 3. 
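
The killprocess 113433 sequence traced above follows a fixed pattern: check the pid argument, confirm the process is still alive, look up its command name, refuse to signal anything that reports itself as sudo, then SIGTERM it and wait so the next round starts clean. A sketch of that helper; the uname gate, the ps --no-headers -o comm= lookup, the sudo comparison, and the echo/kill/wait sequence come from the trace, while the non-Linux fallback and the refusal message are assumptions:

    #!/usr/bin/env bash
    set -euo pipefail

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ]
        kill -0 "$pid"                               # aborts if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        else
            process_name=$(ps -o comm= -p "$pid")    # portable fallback, assumed
        fi
        if [ "$process_name" = sudo ]; then
            echo "refusing to signal a sudo wrapper (pid $pid)" >&2
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # wait only succeeds for our own children
    }
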
00:10:58.081 Shutdown signal received, stop current app iteration 00:10:58.081 ************************************ 00:10:58.081 END TEST app_repeat 00:10:58.081 ************************************ 00:10:58.082 21:24:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:58.082 21:24:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:58.082 00:10:58.082 real 0m19.678s 00:10:58.082 user 0m41.597s 00:10:58.082 sys 0m2.657s 00:10:58.082 21:24:31 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.082 21:24:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:58.082 21:24:31 event -- common/autotest_common.sh@1142 -- # return 0 00:10:58.082 21:24:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:58.082 21:24:31 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:58.082 21:24:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:58.082 21:24:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.082 21:24:31 event -- common/autotest_common.sh@10 -- # set +x 00:10:58.082 ************************************ 00:10:58.082 START TEST cpu_locks 00:10:58.082 ************************************ 00:10:58.082 21:24:31 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:58.082 * Looking for test storage... 00:10:58.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:58.082 21:24:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:58.082 21:24:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:58.082 21:24:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:58.082 21:24:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:58.082 21:24:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:58.082 21:24:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.082 21:24:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.082 ************************************ 00:10:58.082 START TEST default_locks 00:10:58.082 ************************************ 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=113982 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 113982 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 113982 ']' 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
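
The END TEST / START TEST banners and the real/user/sys lines above come from the run_test wrapper that drives every sub-test in this log. A simplified sketch of what it appears to do (the argument-count guard matches the traced '[' 2 -le 1 ']'; the exact banner width, the xtrace toggling, and the trailing usage line are illustrative only):

    #!/usr/bin/env bash
    set -euo pipefail

    run_test() {
        if [ "$#" -le 1 ]; then                      # same guard as the traced check
            echo "usage: run_test <name> <command...>" >&2
            return 1
        fi
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                                    # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # illustrative call only; the trace runs e.g. "run_test default_locks default_locks"
    run_test demo_sleep sleep 1
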
00:10:58.082 21:24:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.082 21:24:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.082 [2024-07-15 21:24:31.338755] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:10:58.082 [2024-07-15 21:24:31.339066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113982 ] 00:10:58.340 [2024-07-15 21:24:31.503157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.598 [2024-07-15 21:24:31.788213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.971 21:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.971 21:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:10:59.971 21:24:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 113982 00:10:59.971 21:24:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 113982 00:10:59.971 21:24:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 113982 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 113982 ']' 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 113982 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113982 00:10:59.971 killing process with pid 113982 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113982' 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 113982 00:10:59.971 21:24:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 113982 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 113982 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 113982 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 113982 00:11:03.252 21:24:36 
event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 113982 ']' 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:03.252 ERROR: process (pid: 113982) is no longer running 00:11:03.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (113982) - No such process 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:03.252 00:11:03.252 real 0m5.058s 00:11:03.252 user 0m4.882s 00:11:03.252 sys 0m0.644s 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.252 21:24:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:03.252 ************************************ 00:11:03.252 END TEST default_locks 00:11:03.252 ************************************ 00:11:03.252 21:24:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:03.252 21:24:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:03.252 21:24:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:03.252 21:24:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.252 21:24:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:03.252 ************************************ 00:11:03.252 START TEST default_locks_via_rpc 00:11:03.252 ************************************ 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=114079 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 114079 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114079 ']' 00:11:03.252 21:24:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.252 21:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:03.252 [2024-07-15 21:24:36.442294] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:11:03.252 [2024-07-15 21:24:36.442525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114079 ] 00:11:03.510 [2024-07-15 21:24:36.631425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.767 [2024-07-15 21:24:36.925000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.701 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.701 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:04.701 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:04.701 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.701 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.959 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 114079 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 114079 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 114079 00:11:04.960 21:24:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 114079 ']' 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 114079 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114079 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:04.960 killing process with pid 114079 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114079' 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 114079 00:11:04.960 21:24:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 114079 00:11:08.239 00:11:08.239 real 0m5.085s 00:11:08.239 user 0m4.869s 00:11:08.239 sys 0m0.672s 00:11:08.240 21:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.240 21:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.240 ************************************ 00:11:08.240 END TEST default_locks_via_rpc 00:11:08.240 ************************************ 00:11:08.240 21:24:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:08.240 21:24:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:08.240 21:24:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:08.240 21:24:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.240 21:24:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:08.240 ************************************ 00:11:08.240 START TEST non_locking_app_on_locked_coremask 00:11:08.240 ************************************ 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=114194 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 114194 /var/tmp/spdk.sock 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114194 ']' 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
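
The default_locks_via_rpc run that just finished starts the target normally, turns the CPU core locks off and back on over RPC (framework_disable_cpumask_locks, then framework_enable_cpumask_locks), and verifies the lock state after each step with two small checks that recur throughout the cpu_locks trace. A sketch of those checks; the compgen form of no_locks is an equivalent rewrite of the traced lock_files=(/var/tmp/spdk_cpu_lock*) array test:

    #!/usr/bin/env bash
    set -euo pipefail

    # The target with the given pid holds an flock that lslocks reports
    # with a spdk_cpu_lock path, exactly as in the traced pipeline.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # No /var/tmp/spdk_cpu_lock* files exist at all.
    no_locks() {
        ! compgen -G '/var/tmp/spdk_cpu_lock*' > /dev/null
    }

    # In default_locks_via_rpc the checks bracket the runtime toggle:
    #   rpc.py framework_disable_cpumask_locks   && no_locks
    #   rpc.py framework_enable_cpumask_locks    && locks_exist "$spdk_tgt_pid"
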
00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.240 21:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:08.240 [2024-07-15 21:24:41.590243] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:11:08.240 [2024-07-15 21:24:41.590430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114194 ] 00:11:08.499 [2024-07-15 21:24:41.754754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.757 [2024-07-15 21:24:42.030924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=114222 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 114222 /var/tmp/spdk2.sock 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114222 ']' 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.188 21:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.188 [2024-07-15 21:24:43.220010] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:11:10.188 [2024-07-15 21:24:43.220181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114222 ] 00:11:10.188 [2024-07-15 21:24:43.364231] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
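
The non_locking_app_on_locked_coremask scenario being set up here runs two targets on the same core mask: the first claims the core 0 lock, and the second is still allowed on because it is started with --disable-cpumask-locks and its own RPC socket (hence the "CPU core locks deactivated." notice). A compact sketch of that flow, assuming the waitforlisten and locks_exist helpers sketched earlier are available in the same shell:

    #!/usr/bin/env bash
    set -euo pipefail

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path from the trace

    "$spdk_tgt" -m 0x1 &                                       # claims the core 0 lock
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock           # helper from the earlier sketch

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!                                           # logs "CPU core locks deactivated."
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

    locks_exist "$spdk_tgt_pid"                                # only the first instance holds the lock
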
00:11:10.188 [2024-07-15 21:24:43.364313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.751 [2024-07-15 21:24:43.940967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 114194 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114194 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 114194 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114194 ']' 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 114194 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114194 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114194' 00:11:13.294 killing process with pid 114194 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 114194 00:11:13.294 21:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 114194 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 114222 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114222 ']' 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 114222 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114222 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114222' 00:11:19.895 killing process with pid 114222 00:11:19.895 
21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 114222 00:11:19.895 21:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 114222 00:11:22.424 00:11:22.424 real 0m14.196s 00:11:22.424 user 0m13.912s 00:11:22.424 sys 0m1.625s 00:11:22.424 21:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.424 ************************************ 00:11:22.424 END TEST non_locking_app_on_locked_coremask 00:11:22.424 21:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:22.424 ************************************ 00:11:22.424 21:24:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:22.424 21:24:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:22.424 21:24:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:22.424 21:24:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.424 21:24:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:22.424 ************************************ 00:11:22.424 START TEST locking_app_on_unlocked_coremask 00:11:22.424 ************************************ 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=114420 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 114420 /var/tmp/spdk.sock 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114420 ']' 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.424 21:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:22.684 [2024-07-15 21:24:55.857151] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:11:22.684 [2024-07-15 21:24:55.857381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114420 ] 00:11:22.684 [2024-07-15 21:24:56.028147] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:22.684 [2024-07-15 21:24:56.028276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.250 [2024-07-15 21:24:56.315374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.187 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.187 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:24.187 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=114469 00:11:24.187 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 114469 /var/tmp/spdk2.sock 00:11:24.187 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:24.187 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114469 ']' 00:11:24.187 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:24.187 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:24.188 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:24.188 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.188 21:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:24.188 [2024-07-15 21:24:57.520802] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:11:24.188 [2024-07-15 21:24:57.520961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114469 ] 00:11:24.446 [2024-07-15 21:24:57.676631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.011 [2024-07-15 21:24:58.230214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.540 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.540 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:27.540 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 114469 00:11:27.540 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114469 00:11:27.540 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:27.540 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 114420 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114420 ']' 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 114420 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114420 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114420' 00:11:27.541 killing process with pid 114420 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 114420 00:11:27.541 21:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 114420 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 114469 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114469 ']' 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 114469 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114469 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:34.091 killing process with pid 114469 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114469' 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 114469 00:11:34.091 21:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 114469 00:11:37.375 00:11:37.375 real 0m14.289s 00:11:37.375 user 0m14.015s 00:11:37.375 sys 0m1.643s 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.375 ************************************ 00:11:37.375 END TEST locking_app_on_unlocked_coremask 00:11:37.375 ************************************ 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:37.375 21:25:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:37.375 21:25:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:37.375 21:25:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:37.375 21:25:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.375 21:25:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:37.375 ************************************ 00:11:37.375 START TEST locking_app_on_locked_coremask 00:11:37.375 ************************************ 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=114667 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 114667 /var/tmp/spdk.sock 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114667 ']' 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.375 21:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:37.376 [2024-07-15 21:25:10.198101] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
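
The locking_app_on_unlocked_coremask run that just printed its END banner is the mirror of the previous case: the first target opts out with --disable-cpumask-locks, so the second, normally started target on the same mask is the one that ends up owning the lock, which the test confirms with locks_exist against the second pid. A sketch under the same assumptions as before (helpers from the earlier sketches, paths from the trace):

    #!/usr/bin/env bash
    set -euo pipefail

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &        # no lock taken by this one
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &         # this instance claims core 0
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

    locks_exist "$spdk_tgt_pid2"                        # the lock belongs to the second pid
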
00:11:37.376 [2024-07-15 21:25:10.198376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114667 ] 00:11:37.376 [2024-07-15 21:25:10.362674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.376 [2024-07-15 21:25:10.645370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=114700 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 114700 /var/tmp/spdk2.sock 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 114700 /var/tmp/spdk2.sock 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 114700 /var/tmp/spdk2.sock 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114700 ']' 00:11:38.752 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:38.753 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:38.753 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:38.753 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.753 21:25:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:38.753 [2024-07-15 21:25:11.841770] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:11:38.753 [2024-07-15 21:25:11.842046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114700 ] 00:11:38.753 [2024-07-15 21:25:11.990149] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 114667 has claimed it. 00:11:38.753 [2024-07-15 21:25:11.990254] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:39.321 ERROR: process (pid: 114700) is no longer running 00:11:39.321 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (114700) - No such process 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 114667 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114667 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 114667 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114667 ']' 00:11:39.321 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 114667 00:11:39.580 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:39.580 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:39.580 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114667 00:11:39.580 killing process with pid 114667 00:11:39.580 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:39.580 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:39.580 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114667' 00:11:39.580 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 114667 00:11:39.580 21:25:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 114667 00:11:42.868 00:11:42.868 real 0m5.730s 00:11:42.868 user 0m5.680s 00:11:42.868 sys 0m0.834s 00:11:42.868 21:25:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.868 21:25:15 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:42.868 ************************************ 00:11:42.868 END TEST locking_app_on_locked_coremask 00:11:42.868 ************************************ 00:11:42.868 21:25:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:42.868 21:25:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:42.868 21:25:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:42.868 21:25:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.868 21:25:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:42.868 ************************************ 00:11:42.868 START TEST locking_overlapped_coremask 00:11:42.868 ************************************ 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=114776 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 114776 /var/tmp/spdk.sock 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 114776 ']' 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.868 21:25:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:42.868 [2024-07-15 21:25:15.987190] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
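[editor's note] The two single-core locking tests above (locking_app_on_unlocked_coremask and locking_app_on_locked_coremask) both hinge on spdk_tgt taking a per-core file lock that lslocks can report. A minimal sketch of that check by hand, using only the binary, mask, and grep pattern that appear in the trace; the backgrounding and sleep are my own scaffolding, the harness uses waitforlisten instead:
# start a target on core 0 and confirm it holds a spdk_cpu_lock file
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid=$!
sleep 2                                  # assumed settle time; not how the test waits
lslocks -p "$pid" | grep spdk_cpu_lock   # shows the /var/tmp/spdk_cpu_lock_* entry held by the target
kill "$pid"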
00:11:42.868 [2024-07-15 21:25:15.987354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114776 ] 00:11:42.868 [2024-07-15 21:25:16.161035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.126 [2024-07-15 21:25:16.448557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.126 [2024-07-15 21:25:16.448513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.126 [2024-07-15 21:25:16.448557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=114815 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 114815 /var/tmp/spdk2.sock 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 114815 /var/tmp/spdk2.sock 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 114815 /var/tmp/spdk2.sock 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 114815 ']' 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.507 21:25:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.507 [2024-07-15 21:25:17.703645] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:11:44.507 [2024-07-15 21:25:17.703783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114815 ] 00:11:44.507 [2024-07-15 21:25:17.869301] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114776 has claimed it. 00:11:44.507 [2024-07-15 21:25:17.869390] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:45.076 ERROR: process (pid: 114815) is no longer running 00:11:45.076 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (114815) - No such process 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 114776 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 114776 ']' 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 114776 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114776 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:45.076 killing process with pid 114776 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114776' 00:11:45.076 21:25:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 114776 00:11:45.076 21:25:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 114776 00:11:48.367 00:11:48.367 real 0m5.701s 00:11:48.367 user 0m14.874s 00:11:48.367 sys 0m0.694s 00:11:48.367 21:25:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.367 21:25:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.367 ************************************ 00:11:48.367 END TEST locking_overlapped_coremask 00:11:48.367 ************************************ 00:11:48.367 21:25:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:48.367 21:25:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:48.367 21:25:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:48.367 21:25:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.367 21:25:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:48.367 ************************************ 00:11:48.367 START TEST locking_overlapped_coremask_via_rpc 00:11:48.367 ************************************ 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=114905 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 114905 /var/tmp/spdk.sock 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114905 ']' 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.368 21:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.627 [2024-07-15 21:25:21.747211] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:11:48.627 [2024-07-15 21:25:21.747517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114905 ] 00:11:48.627 [2024-07-15 21:25:21.946532] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
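[editor's note] The locking_overlapped_coremask failure above comes from the two masks sharing core 2: 0x7 covers cores 0-2 while 0x1c covers cores 2-4, so the second target cannot claim core 2 and exits with the "Cannot create lock on core 2" error logged above. A rough reproduction of just that collision, reusing the binary, masks, and sockets from the trace (the sleep is an assumption; the harness waits on the RPC socket):
# first target claims cores 0-2
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
sleep 2
# second target wants cores 2-4; core 2 is already locked, so it logs the claim error and stops
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
ls /var/tmp/spdk_cpu_lock_*              # only _000.._002 should remain, as check_remaining_locks asserts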
00:11:48.627 [2024-07-15 21:25:21.946606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:48.886 [2024-07-15 21:25:22.233512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.886 [2024-07-15 21:25:22.233392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.886 [2024-07-15 21:25:22.233522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=114934 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 114934 /var/tmp/spdk2.sock 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114934 ']' 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.264 21:25:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.264 [2024-07-15 21:25:23.473009] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:11:50.264 [2024-07-15 21:25:23.473205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114934 ] 00:11:50.521 [2024-07-15 21:25:23.646699] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:50.521 [2024-07-15 21:25:23.646787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:51.088 [2024-07-15 21:25:24.214536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.088 [2024-07-15 21:25:24.229490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.088 [2024-07-15 21:25:24.229501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:53.625 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.625 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:53.625 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:53.625 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.626 [2024-07-15 21:25:26.529459] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114905 has claimed it. 
00:11:53.626 request: 00:11:53.626 { 00:11:53.626 "method": "framework_enable_cpumask_locks", 00:11:53.626 "req_id": 1 00:11:53.626 } 00:11:53.626 Got JSON-RPC error response 00:11:53.626 response: 00:11:53.626 { 00:11:53.626 "code": -32603, 00:11:53.626 "message": "Failed to claim CPU core: 2" 00:11:53.626 } 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 114905 /var/tmp/spdk.sock 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114905 ']' 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 114934 /var/tmp/spdk2.sock 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114934 ']' 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:53.626 00:11:53.626 real 0m5.295s 00:11:53.626 user 0m1.283s 00:11:53.626 sys 0m0.184s 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.626 ************************************ 00:11:53.626 END TEST locking_overlapped_coremask_via_rpc 00:11:53.626 ************************************ 00:11:53.626 21:25:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:53.885 21:25:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:53.885 21:25:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 114905 ]] 00:11:53.885 21:25:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 114905 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114905 ']' 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114905 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114905 00:11:53.885 killing process with pid 114905 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114905' 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 114905 00:11:53.885 21:25:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 114905 00:11:58.063 21:25:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 114934 ]] 00:11:58.063 21:25:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 114934 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114934 ']' 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114934 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
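[editor's note] In the via_rpc variant above, both targets start with --disable-cpumask-locks (hence the "CPU core locks deactivated" notices) and the per-core locks are only taken when framework_enable_cpumask_locks is called over JSON-RPC; the first target acquires them, and the same call against the second target then fails with the "Failed to claim CPU core: 2" response shown above. A hedged sketch of that exchange, assuming $rpc_py resolves to SPDK's usual scripts/rpc.py client as the -s socket option in the trace suggests:
# enable the locks on the first target (default socket /var/tmp/spdk.sock) - succeeds
$rpc_py framework_enable_cpumask_locks
# the same call against the second target collides on core 2 and returns
# {"code": -32603, "message": "Failed to claim CPU core: 2"}
$rpc_py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks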
00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114934 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114934' 00:11:58.063 killing process with pid 114934 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 114934 00:11:58.063 21:25:30 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 114934 00:12:00.585 21:25:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:00.585 21:25:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:00.585 21:25:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 114905 ]] 00:12:00.585 21:25:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 114905 00:12:00.585 21:25:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114905 ']' 00:12:00.585 21:25:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114905 00:12:00.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (114905) - No such process 00:12:00.586 Process with pid 114905 is not found 00:12:00.586 21:25:33 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 114905 is not found' 00:12:00.586 21:25:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 114934 ]] 00:12:00.586 21:25:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 114934 00:12:00.586 Process with pid 114934 is not found 00:12:00.586 21:25:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114934 ']' 00:12:00.586 21:25:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114934 00:12:00.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (114934) - No such process 00:12:00.586 21:25:33 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 114934 is not found' 00:12:00.586 21:25:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:00.586 00:12:00.586 real 1m2.462s 00:12:00.586 user 1m44.565s 00:12:00.586 sys 0m7.688s 00:12:00.586 21:25:33 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.586 21:25:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:00.586 ************************************ 00:12:00.586 END TEST cpu_locks 00:12:00.586 ************************************ 00:12:00.586 21:25:33 event -- common/autotest_common.sh@1142 -- # return 0 00:12:00.586 ************************************ 00:12:00.586 END TEST event 00:12:00.586 ************************************ 00:12:00.586 00:12:00.586 real 1m33.763s 00:12:00.586 user 2m43.562s 00:12:00.586 sys 0m11.441s 00:12:00.586 21:25:33 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.586 21:25:33 event -- common/autotest_common.sh@10 -- # set +x 00:12:00.586 21:25:33 -- common/autotest_common.sh@1142 -- # return 0 00:12:00.586 21:25:33 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:00.586 21:25:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:00.586 21:25:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.586 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.586 
************************************ 00:12:00.586 START TEST thread 00:12:00.586 ************************************ 00:12:00.586 21:25:33 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:00.586 * Looking for test storage... 00:12:00.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:00.586 21:25:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:00.586 21:25:33 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:00.586 21:25:33 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.586 21:25:33 thread -- common/autotest_common.sh@10 -- # set +x 00:12:00.586 ************************************ 00:12:00.586 START TEST thread_poller_perf 00:12:00.586 ************************************ 00:12:00.586 21:25:33 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:00.586 [2024-07-15 21:25:33.914425] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:00.586 [2024-07-15 21:25:33.914667] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115183 ] 00:12:00.843 [2024-07-15 21:25:34.086832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.121 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:01.121 [2024-07-15 21:25:34.398853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.017 ====================================== 00:12:03.017 busy:2302225236 (cyc) 00:12:03.017 total_run_count: 324000 00:12:03.017 tsc_hz: 2290000000 (cyc) 00:12:03.017 ====================================== 00:12:03.017 poller_cost: 7105 (cyc), 3102 (nsec) 00:12:03.017 00:12:03.017 real 0m2.093s 00:12:03.017 user 0m1.853s 00:12:03.017 sys 0m0.140s 00:12:03.017 21:25:35 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.017 21:25:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:03.017 ************************************ 00:12:03.017 END TEST thread_poller_perf 00:12:03.017 ************************************ 00:12:03.017 21:25:35 thread -- common/autotest_common.sh@1142 -- # return 0 00:12:03.017 21:25:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:03.017 21:25:35 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:03.017 21:25:35 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.017 21:25:35 thread -- common/autotest_common.sh@10 -- # set +x 00:12:03.017 ************************************ 00:12:03.017 START TEST thread_poller_perf 00:12:03.017 ************************************ 00:12:03.017 21:25:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:03.017 [2024-07-15 21:25:36.056829] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:12:03.017 [2024-07-15 21:25:36.057028] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115233 ] 00:12:03.017 [2024-07-15 21:25:36.223365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.275 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:03.275 [2024-07-15 21:25:36.521213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.649 ====================================== 00:12:04.649 busy:2293889596 (cyc) 00:12:04.649 total_run_count: 4514000 00:12:04.649 tsc_hz: 2290000000 (cyc) 00:12:04.649 ====================================== 00:12:04.649 poller_cost: 508 (cyc), 221 (nsec) 00:12:04.649 ************************************ 00:12:04.649 END TEST thread_poller_perf 00:12:04.649 ************************************ 00:12:04.649 00:12:04.649 real 0m2.009s 00:12:04.649 user 0m1.749s 00:12:04.649 sys 0m0.153s 00:12:04.649 21:25:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.649 21:25:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:04.908 21:25:38 thread -- common/autotest_common.sh@1142 -- # return 0 00:12:04.908 21:25:38 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:12:04.908 21:25:38 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:04.908 21:25:38 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:04.908 21:25:38 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.908 21:25:38 thread -- common/autotest_common.sh@10 -- # set +x 00:12:04.908 ************************************ 00:12:04.908 START TEST thread_spdk_lock 00:12:04.908 ************************************ 00:12:04.908 21:25:38 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:04.908 [2024-07-15 21:25:38.123660] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
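[editor's note] The two poller_perf summaries above reduce to simple arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure rescales that by tsc_hz. A quick check with the counters from the first run (324000 iterations at 2.29 GHz):
# reproduce the reported poller_cost from the first run's counters
awk 'BEGIN {
    busy = 2302225236; runs = 324000; tsc_hz = 2290000000
    cyc  = busy / runs               # ~7105 cycles per poller invocation
    nsec = cyc * 1e9 / tsc_hz        # ~3102 ns at 2.29 GHz
    printf "poller_cost: %d (cyc), %d (nsec)\n", int(cyc), int(nsec)
}'
# the second run works out the same way: 2293889596 / 4514000 ~ 508 cyc ~ 221 ns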
00:12:04.908 [2024-07-15 21:25:38.123854] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115299 ] 00:12:05.167 [2024-07-15 21:25:38.301828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:05.426 [2024-07-15 21:25:38.566042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.426 [2024-07-15 21:25:38.566052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.993 [2024-07-15 21:25:39.107589] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:05.993 [2024-07-15 21:25:39.107752] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:12:05.993 [2024-07-15 21:25:39.107789] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x5571d5286340 00:12:05.993 [2024-07-15 21:25:39.117732] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:05.993 [2024-07-15 21:25:39.117833] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:05.993 [2024-07-15 21:25:39.117863] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:06.251 Starting test contend 00:12:06.251 Worker Delay Wait us Hold us Total us 00:12:06.251 0 3 150818 193650 344468 00:12:06.251 1 5 73035 296680 369715 00:12:06.251 PASS test contend 00:12:06.251 Starting test hold_by_poller 00:12:06.251 PASS test hold_by_poller 00:12:06.251 Starting test hold_by_message 00:12:06.251 PASS test hold_by_message 00:12:06.251 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:12:06.251 100014 assertions passed 00:12:06.251 0 assertions failed 00:12:06.251 ************************************ 00:12:06.251 END TEST thread_spdk_lock 00:12:06.251 ************************************ 00:12:06.251 00:12:06.251 real 0m1.488s 00:12:06.251 user 0m1.789s 00:12:06.251 sys 0m0.145s 00:12:06.251 21:25:39 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.251 21:25:39 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:12:06.251 21:25:39 thread -- common/autotest_common.sh@1142 -- # return 0 00:12:06.251 00:12:06.251 real 0m5.871s 00:12:06.251 user 0m5.536s 00:12:06.251 sys 0m0.592s 00:12:06.251 21:25:39 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.251 21:25:39 thread -- common/autotest_common.sh@10 -- # set +x 00:12:06.251 ************************************ 00:12:06.251 END TEST thread 00:12:06.251 ************************************ 00:12:06.509 21:25:39 -- common/autotest_common.sh@1142 -- # return 0 00:12:06.509 21:25:39 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:06.509 21:25:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:12:06.509 21:25:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.509 21:25:39 -- common/autotest_common.sh@10 -- # set +x 00:12:06.509 ************************************ 00:12:06.509 START TEST accel 00:12:06.509 ************************************ 00:12:06.509 21:25:39 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:06.509 * Looking for test storage... 00:12:06.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:06.509 21:25:39 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:06.509 21:25:39 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:12:06.509 21:25:39 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:06.509 21:25:39 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=115397 00:12:06.509 21:25:39 accel -- accel/accel.sh@63 -- # waitforlisten 115397 00:12:06.509 21:25:39 accel -- common/autotest_common.sh@829 -- # '[' -z 115397 ']' 00:12:06.509 21:25:39 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.509 21:25:39 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.509 21:25:39 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:06.509 21:25:39 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.509 21:25:39 accel -- accel/accel.sh@61 -- # build_accel_config 00:12:06.509 21:25:39 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.509 21:25:39 accel -- common/autotest_common.sh@10 -- # set +x 00:12:06.509 21:25:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:06.509 21:25:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:06.509 21:25:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:06.509 21:25:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:06.509 21:25:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:06.509 21:25:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:06.509 21:25:39 accel -- accel/accel.sh@41 -- # jq -r . 00:12:06.509 [2024-07-15 21:25:39.838286] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:06.509 [2024-07-15 21:25:39.838470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115397 ] 00:12:06.768 [2024-07-15 21:25:39.986784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.026 [2024-07-15 21:25:40.266135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.400 21:25:41 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.400 21:25:41 accel -- common/autotest_common.sh@862 -- # return 0 00:12:08.400 21:25:41 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:08.400 21:25:41 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:08.400 21:25:41 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:08.400 21:25:41 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:08.400 21:25:41 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:08.400 21:25:41 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:08.400 21:25:41 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:08.400 21:25:41 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.400 21:25:41 accel -- common/autotest_common.sh@10 -- # set +x 00:12:08.400 21:25:41 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.400 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.400 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.400 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.400 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.400 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.400 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.400 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.400 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.400 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.400 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.400 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.400 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # IFS== 00:12:08.401 21:25:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:08.401 21:25:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:08.401 21:25:41 accel -- accel/accel.sh@75 -- # killprocess 115397 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@948 -- # '[' -z 115397 ']' 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@952 -- # kill -0 115397 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@953 -- # uname 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115397 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115397' 00:12:08.401 killing process with pid 115397 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@967 -- # kill 115397 00:12:08.401 21:25:41 accel -- common/autotest_common.sh@972 -- # wait 115397 00:12:11.682 21:25:44 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:11.682 21:25:44 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:11.682 21:25:44 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:11.682 21:25:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.682 21:25:44 accel -- common/autotest_common.sh@10 -- # set +x 00:12:11.682 21:25:44 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@36 
-- # [[ -n '' ]] 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:11.682 21:25:44 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:12:11.682 21:25:44 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.682 21:25:44 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:11.682 21:25:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:11.682 21:25:44 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:11.682 21:25:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:11.682 21:25:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.682 21:25:44 accel -- common/autotest_common.sh@10 -- # set +x 00:12:11.682 ************************************ 00:12:11.682 START TEST accel_missing_filename 00:12:11.682 ************************************ 00:12:11.682 21:25:44 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:12:11.682 21:25:44 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:12:11.682 21:25:44 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:11.682 21:25:44 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:11.682 21:25:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.682 21:25:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:11.682 21:25:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.682 21:25:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:12:11.682 21:25:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:11.682 21:25:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:11.682 21:25:44 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:11.682 21:25:44 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:11.683 21:25:44 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:11.683 21:25:44 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:11.683 21:25:44 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:11.683 21:25:44 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:11.683 21:25:44 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:12:11.683 [2024-07-15 21:25:44.811229] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
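[editor's note] The opcode loop above (accel.sh@70-73) builds expected_opcs from the accel_get_opc_assignments RPC; every opcode maps to the software module here because no hardware accel config was supplied. The equivalent one-liner, reusing the exact jq filter from the trace and again assuming $rpc_py is scripts/rpc.py:
# dump the opcode -> module table the test iterates over; expect "...=software" for every opcode
$rpc_py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'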
00:12:11.683 [2024-07-15 21:25:44.811401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115496 ] 00:12:11.683 [2024-07-15 21:25:44.975640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.940 [2024-07-15 21:25:45.228587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.198 [2024-07-15 21:25:45.500814] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:12.782 [2024-07-15 21:25:46.075158] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:12:13.349 A filename is required. 00:12:13.349 21:25:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:12:13.349 21:25:46 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:13.349 21:25:46 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:12:13.349 21:25:46 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:12:13.349 21:25:46 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:12:13.349 21:25:46 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:13.349 00:12:13.349 real 0m1.740s 00:12:13.349 user 0m1.456s 00:12:13.349 sys 0m0.236s 00:12:13.349 21:25:46 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.349 21:25:46 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:13.349 ************************************ 00:12:13.349 END TEST accel_missing_filename 00:12:13.349 ************************************ 00:12:13.349 21:25:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:13.349 21:25:46 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:13.349 21:25:46 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:12:13.349 21:25:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.349 21:25:46 accel -- common/autotest_common.sh@10 -- # set +x 00:12:13.349 ************************************ 00:12:13.349 START TEST accel_compress_verify 00:12:13.349 ************************************ 00:12:13.349 21:25:46 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:13.349 21:25:46 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:12:13.349 21:25:46 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:13.349 21:25:46 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:13.349 21:25:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.349 21:25:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:13.349 21:25:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.349 21:25:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:13.349 21:25:46 accel.accel_compress_verify -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:13.349 21:25:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:13.349 21:25:46 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:13.349 21:25:46 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:13.349 21:25:46 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:13.349 21:25:46 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:13.349 21:25:46 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:13.349 21:25:46 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:13.349 21:25:46 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:12:13.349 [2024-07-15 21:25:46.612715] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:13.349 [2024-07-15 21:25:46.612889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115547 ] 00:12:13.608 [2024-07-15 21:25:46.777431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.866 [2024-07-15 21:25:47.035198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.123 [2024-07-15 21:25:47.299765] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:14.689 [2024-07-15 21:25:47.888866] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:12:14.947 00:12:14.947 Compression does not support the verify option, aborting. 
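Both compress checks above, accel_missing_filename and accel_compress_verify, run accel_perf under the harness's NOT wrapper and pass only when the tool exits non-zero, which is exactly what "A filename is required." and "Compression does not support the verify option, aborting." confirm. A rough standalone equivalent is sketched below; expect_failure is an illustrative stand-in for the harness's NOT helper, and the binary and input paths are the ones shown in the trace:

# Illustrative expect-failure wrapper (assumption: not the harness's NOT helper).
accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
expect_failure() {
    if "$@"; then
        echo "unexpected success: $*" >&2
        return 1
    fi
}
expect_failure "$accel_perf" -t 1 -w compress               # no -l input file: must fail
expect_failure "$accel_perf" -t 1 -w compress -l "$bib" -y  # -y (verify) unsupported for compress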
00:12:14.947 21:25:48 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:12:14.947 21:25:48 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:14.947 21:25:48 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:12:14.947 21:25:48 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:12:14.947 21:25:48 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:12:14.947 21:25:48 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:14.947 00:12:14.947 real 0m1.757s 00:12:14.947 user 0m1.488s 00:12:14.947 sys 0m0.222s 00:12:14.947 21:25:48 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.947 21:25:48 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:14.947 ************************************ 00:12:14.947 END TEST accel_compress_verify 00:12:14.947 ************************************ 00:12:15.206 21:25:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:15.206 21:25:48 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:15.206 21:25:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:15.206 21:25:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.206 21:25:48 accel -- common/autotest_common.sh@10 -- # set +x 00:12:15.206 ************************************ 00:12:15.206 START TEST accel_wrong_workload 00:12:15.206 ************************************ 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:15.206 21:25:48 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:12:15.206 Unsupported workload type: foobar 00:12:15.206 [2024-07-15 21:25:48.426562] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:15.206 accel_perf options: 00:12:15.206 [-h help message] 00:12:15.206 [-q queue depth per core] 00:12:15.206 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:15.206 [-T number of threads per core 00:12:15.206 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:15.206 [-t time in seconds] 00:12:15.206 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:15.206 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:15.206 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:15.206 [-l for compress/decompress workloads, name of uncompressed input file 00:12:15.206 [-S for crc32c workload, use this seed value (default 0) 00:12:15.206 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:15.206 [-f for fill workload, use this BYTE value (default 255) 00:12:15.206 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:15.206 [-y verify result if this switch is on] 00:12:15.206 [-a tasks to allocate per core (default: same value as -q)] 00:12:15.206 Can be used to spread operations across a wider range of memory. 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:15.206 00:12:15.206 real 0m0.076s 00:12:15.206 user 0m0.091s 00:12:15.206 sys 0m0.036s 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.206 21:25:48 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:15.206 ************************************ 00:12:15.206 END TEST accel_wrong_workload 00:12:15.206 ************************************ 00:12:15.206 21:25:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:15.206 21:25:48 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:15.206 21:25:48 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:12:15.206 21:25:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.206 21:25:48 accel -- common/autotest_common.sh@10 -- # set +x 00:12:15.206 ************************************ 00:12:15.206 START TEST accel_negative_buffers 00:12:15.206 ************************************ 00:12:15.206 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:15.206 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:12:15.206 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:15.206 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:15.206 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:15.206 21:25:48 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:15.206 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:15.206 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:15.207 21:25:48 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:15.207 -x option must be non-negative. 00:12:15.207 [2024-07-15 21:25:48.558532] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:15.467 accel_perf options: 00:12:15.467 [-h help message] 00:12:15.467 [-q queue depth per core] 00:12:15.467 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:15.467 [-T number of threads per core 00:12:15.467 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:15.467 [-t time in seconds] 00:12:15.467 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:15.467 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:15.467 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:15.467 [-l for compress/decompress workloads, name of uncompressed input file 00:12:15.467 [-S for crc32c workload, use this seed value (default 0) 00:12:15.467 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:15.467 [-f for fill workload, use this BYTE value (default 255) 00:12:15.467 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:15.467 [-y verify result if this switch is on] 00:12:15.467 [-a tasks to allocate per core (default: same value as -q)] 00:12:15.467 Can be used to spread operations across a wider range of memory. 
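The usage text printed twice above, once for the unsupported foobar workload and once for the negative -x value, is the option list the remaining tests draw from. For reference, a well-formed invocation assembled from those options looks like the sketch below; the workload, seed and verify flags mirror the crc32c run that follows, the queue depth and 4096-byte transfer size match values visible in the trace, and the real runs additionally receive a JSON accel config via -c /dev/fd/62, omitted here:

# Illustrative accel_perf run built from the options listed above:
# -q queue depth per core, -o transfer size in bytes, -t run time in seconds,
# -w workload type, -S crc32c seed, -y verify the result.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y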
00:12:15.467 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:12:15.467 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:15.467 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:15.467 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:15.467 00:12:15.467 real 0m0.079s 00:12:15.467 user 0m0.099s 00:12:15.467 sys 0m0.040s 00:12:15.467 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.467 21:25:48 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 ************************************ 00:12:15.467 END TEST accel_negative_buffers 00:12:15.467 ************************************ 00:12:15.467 21:25:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:15.467 21:25:48 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:15.467 21:25:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:15.467 21:25:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.467 21:25:48 accel -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 ************************************ 00:12:15.467 START TEST accel_crc32c 00:12:15.467 ************************************ 00:12:15.467 21:25:48 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:15.467 21:25:48 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:15.467 [2024-07-15 21:25:48.695161] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
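Every accel_perf command line in this log takes its configuration as -c /dev/fd/62: build_accel_config assembles the JSON (empty in these runs, hence all the [[ 0 -gt 0 ]] checks evaluating false), jq -r . renders it, and the harness points the app at file descriptor 62 instead of writing a file on disk. A hedged sketch of the same /dev/fd trick using bash process substitution, with a placeholder payload that is not what build_accel_config would actually emit:

# Feeding a config through /dev/fd via process substitution
# (assumption: the {"subsystems": []} payload is only a placeholder).
cfg='{"subsystems": []}'
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y \
    -c <(printf '%s\n' "$cfg")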
00:12:15.467 [2024-07-15 21:25:48.695341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115657 ] 00:12:15.724 [2024-07-15 21:25:48.859759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.981 [2024-07-15 21:25:49.155350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:16.239 21:25:49 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.239 21:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" 
in 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:18.138 21:25:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:18.138 00:12:18.138 real 0m2.803s 00:12:18.138 user 0m2.525s 00:12:18.138 sys 0m0.203s 00:12:18.138 21:25:51 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.138 21:25:51 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:18.138 ************************************ 00:12:18.138 END TEST accel_crc32c 00:12:18.138 ************************************ 00:12:18.138 21:25:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:18.138 21:25:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:18.138 21:25:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:18.138 21:25:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.138 21:25:51 accel -- common/autotest_common.sh@10 -- # set +x 00:12:18.396 ************************************ 00:12:18.396 START TEST accel_crc32c_C2 00:12:18.396 ************************************ 00:12:18.396 21:25:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:18.396 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:18.397 21:25:51 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:18.397 [2024-07-15 21:25:51.570461] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
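The accel/accel.sh@19-23 lines that dominate the accel_crc32c block above (IFS=:, read -r var val, case "$var" in, then accel_opc=crc32c and accel_module=software) are bash xtrace for a split-on-colon parse loop, and the closing [[ -n software ]] / [[ -n crc32c ]] checks are what actually pass the test. The snippet below reproduces only the shape of that pattern on made-up input; it is neither accel.sh's real parser nor accel_perf's real output format:

# Shape of the parse loop seen in the xtrace above (made-up input for illustration).
accel_opc='' accel_module=''
while IFS=: read -r var val; do
    case "$var" in
        opcode) accel_opc=${val// /} ;;   # hypothetical key names
        module) accel_module=${val// /} ;;
    esac
done <<'EOF'
opcode: crc32c
module: software
EOF
[[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]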
00:12:18.397 [2024-07-15 21:25:51.570667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115711 ] 00:12:18.397 [2024-07-15 21:25:51.738494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.654 [2024-07-15 21:25:52.010627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.219 21:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:21.114 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:21.115 00:12:21.115 real 0m2.769s 00:12:21.115 user 0m2.463s 00:12:21.115 sys 0m0.249s 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.115 21:25:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:21.115 ************************************ 00:12:21.115 END TEST accel_crc32c_C2 00:12:21.115 ************************************ 00:12:21.115 21:25:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:21.115 21:25:54 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:21.115 21:25:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:21.115 21:25:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.115 21:25:54 accel -- common/autotest_common.sh@10 -- # set +x 00:12:21.115 ************************************ 00:12:21.115 START TEST accel_copy 00:12:21.115 ************************************ 00:12:21.115 21:25:54 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:21.115 21:25:54 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:21.115 21:25:54 
accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:21.115 [2024-07-15 21:25:54.392184] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:21.115 [2024-07-15 21:25:54.392379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115774 ] 00:12:21.375 [2024-07-15 21:25:54.557836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.634 [2024-07-15 21:25:54.848643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.894 21:25:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:23.813 21:25:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:23.813 00:12:23.813 real 0m2.823s 00:12:23.813 user 0m2.539s 00:12:23.813 sys 0m0.221s 00:12:23.813 21:25:57 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.813 21:25:57 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:23.813 ************************************ 00:12:23.813 END TEST accel_copy 00:12:23.813 ************************************ 00:12:24.073 21:25:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:24.073 21:25:57 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:24.073 21:25:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:24.073 21:25:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.073 21:25:57 accel -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 ************************************ 00:12:24.073 START TEST accel_fill 00:12:24.073 ************************************ 00:12:24.073 21:25:57 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:24.073 21:25:57 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:12:24.073 [2024-07-15 21:25:57.282174] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
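The accel_fill test that starts here is invoked as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y, exercising the fill-byte, queue-depth and per-core task allocation options from the usage text shown earlier (the 0x80 value in the trace below is the -f 128 fill byte). Stripped of the harness plumbing, a comparable direct run would look roughly like this, illustrative only since the real run also receives the -c /dev/fd/62 config:

# Rough standalone equivalent of the accel_fill run traced below
# (-f fill byte value, -q queue depth, -a tasks allocated per core, -y verify).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y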
00:12:24.073 [2024-07-15 21:25:57.282339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115831 ] 00:12:24.333 [2024-07-15 21:25:57.450231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.592 [2024-07-15 21:25:57.736017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:24.853 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- 
accel/accel.sh@22 -- # accel_module=software 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:24.854 21:25:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:26.756 
21:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:26.756 21:26:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:26.756 00:12:26.756 real 0m2.862s 00:12:26.756 user 0m2.535s 00:12:26.756 sys 0m0.253s 00:12:26.756 21:26:00 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.756 21:26:00 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:26.756 ************************************ 00:12:26.756 END TEST accel_fill 00:12:26.756 ************************************ 00:12:27.015 21:26:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:27.015 21:26:00 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:27.015 21:26:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:27.015 21:26:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.015 21:26:00 accel -- common/autotest_common.sh@10 -- # set +x 00:12:27.015 ************************************ 00:12:27.015 START TEST accel_copy_crc32c 00:12:27.015 ************************************ 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:27.015 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:27.015 [2024-07-15 21:26:00.211893] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:12:27.015 [2024-07-15 21:26:00.212474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115913 ] 00:12:27.015 [2024-07-15 21:26:00.373688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.583 [2024-07-15 21:26:00.667787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 
21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.841 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:27.842 21:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:29.746 21:26:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:29.746 21:26:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:29.746 21:26:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:29.746 21:26:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:29.746 00:12:29.746 real 0m2.853s 00:12:29.746 user 0m2.546s 00:12:29.746 sys 0m0.238s 00:12:29.746 21:26:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.746 21:26:03 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:29.746 ************************************ 00:12:29.746 END TEST accel_copy_crc32c 00:12:29.746 ************************************ 00:12:29.746 21:26:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:29.746 21:26:03 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:29.746 21:26:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:29.746 21:26:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.746 21:26:03 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.746 ************************************ 00:12:29.746 START TEST accel_copy_crc32c_C2 00:12:29.746 ************************************ 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:29.746 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:30.005 [2024-07-15 21:26:03.132253] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:30.005 [2024-07-15 21:26:03.132405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115971 ] 00:12:30.005 [2024-07-15 21:26:03.294182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.264 [2024-07-15 21:26:03.590176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:30.524 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:30.524 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.524 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:30.524 21:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:33.056 00:12:33.056 real 0m2.847s 00:12:33.056 user 0m2.536s 00:12:33.056 sys 0m0.245s 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.056 21:26:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:33.056 ************************************ 00:12:33.056 END TEST accel_copy_crc32c_C2 00:12:33.056 
************************************ 00:12:33.056 21:26:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:33.056 21:26:05 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:33.056 21:26:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:33.056 21:26:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.056 21:26:05 accel -- common/autotest_common.sh@10 -- # set +x 00:12:33.056 ************************************ 00:12:33.056 START TEST accel_dualcast 00:12:33.056 ************************************ 00:12:33.056 21:26:05 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:33.056 21:26:05 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:33.056 [2024-07-15 21:26:06.041971] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
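The dualcast case beginning here is driven by the same wrapper; with the fd-based JSON config stripped away, the recorded command amounts to the following sketch, under the same assumptions as above:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y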
00:12:33.056 [2024-07-15 21:26:06.042150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116034 ] 00:12:33.056 [2024-07-15 21:26:06.206154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.314 [2024-07-15 21:26:06.508015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.571 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:33.571 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.571 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:33.572 21:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:36.102 21:26:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:36.102 00:12:36.102 real 0m2.916s 00:12:36.102 user 0m2.601s 00:12:36.102 sys 0m0.243s 00:12:36.102 21:26:08 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.102 21:26:08 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:36.102 ************************************ 00:12:36.102 END TEST accel_dualcast 00:12:36.102 ************************************ 00:12:36.102 21:26:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:36.102 21:26:08 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:36.102 21:26:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:36.102 21:26:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.102 21:26:08 accel -- common/autotest_common.sh@10 -- # set +x 00:12:36.102 ************************************ 00:12:36.102 START TEST accel_compare 00:12:36.102 ************************************ 00:12:36.102 21:26:08 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:36.102 21:26:08 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:36.102 [2024-07-15 21:26:09.027140] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
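Likewise for the compare case started above; a hand-run sketch only swaps the workload name:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y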
00:12:36.102 [2024-07-15 21:26:09.027824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116112 ] 00:12:36.102 [2024-07-15 21:26:09.196619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.361 [2024-07-15 21:26:09.472964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.620 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:36.621 21:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:38.529 21:26:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:38.529 00:12:38.529 real 0m2.897s 00:12:38.529 user 0m2.565s 00:12:38.529 sys 0m0.266s 00:12:38.529 21:26:11 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.529 21:26:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:38.529 ************************************ 00:12:38.529 END TEST accel_compare 00:12:38.529 ************************************ 00:12:38.788 21:26:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:38.788 21:26:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:38.788 21:26:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:38.788 21:26:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.788 21:26:11 accel -- common/autotest_common.sh@10 -- # set +x 00:12:38.788 ************************************ 00:12:38.788 START TEST accel_xor 00:12:38.788 ************************************ 00:12:38.788 21:26:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:38.788 21:26:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:38.788 [2024-07-15 21:26:11.987127] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
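The first xor case launched above again differs only in the -w argument (sketch under the same assumptions as the earlier one):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y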
00:12:38.788 [2024-07-15 21:26:11.987314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116169 ] 00:12:38.788 [2024-07-15 21:26:12.156774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.355 [2024-07-15 21:26:12.484317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.615 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.616 21:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.535 21:26:14 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.535 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:41.794 00:12:41.794 real 0m2.979s 00:12:41.794 user 0m2.663s 00:12:41.794 sys 0m0.231s 00:12:41.794 21:26:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.794 21:26:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 ************************************ 00:12:41.794 END TEST accel_xor 00:12:41.794 ************************************ 00:12:41.794 21:26:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:41.794 21:26:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:41.794 21:26:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:41.794 21:26:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.794 21:26:14 accel -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 ************************************ 00:12:41.794 START TEST accel_xor 00:12:41.794 ************************************ 00:12:41.794 21:26:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:41.794 21:26:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:41.794 [2024-07-15 21:26:15.031266] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
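The second xor case started above is the only one in this stretch that adds an extra option: its run_test line passes -x 3, so the equivalent hand-run sketch would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3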
00:12:41.794 [2024-07-15 21:26:15.031463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116232 ] 00:12:42.053 [2024-07-15 21:26:15.202312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.312 [2024-07-15 21:26:15.510495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.572 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.573 21:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:45.106 21:26:17 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:45.106 21:26:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:45.106 00:12:45.106 real 0m2.974s 00:12:45.106 user 0m2.658s 00:12:45.106 sys 0m0.254s 00:12:45.106 21:26:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.106 21:26:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:45.106 ************************************ 00:12:45.106 END TEST accel_xor 00:12:45.106 ************************************ 00:12:45.106 21:26:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:45.106 21:26:17 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:45.106 21:26:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:45.106 21:26:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.106 21:26:17 accel -- common/autotest_common.sh@10 -- # set +x 00:12:45.106 ************************************ 00:12:45.106 START TEST accel_dif_verify 00:12:45.106 ************************************ 00:12:45.106 21:26:18 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:45.106 21:26:18 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:45.106 [2024-07-15 21:26:18.065673] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
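The accel_dif_verify case starting here drives the same binary with -w dif_verify. DIF refers to T10 protection information, an 8-byte tuple per block (2-byte guard CRC, 2-byte application tag, 4-byte reference tag), which is consistent with the '8 bytes' value that appears in the trace below; dif_verify is the operation that checks those fields against the data. A minimal standalone equivalent, again assuming the defaults can stand in for the config fed through /dev/fd/62:

#!/usr/bin/env bash
# Sketch: run only the dif_verify workload for one second on the software path.
# Path copied from the trace; omitting -c /dev/fd/62 is an assumption.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify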
00:12:45.106 [2024-07-15 21:26:18.065845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116294 ] 00:12:45.106 [2024-07-15 21:26:18.231494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.365 [2024-07-15 21:26:18.541090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.625 21:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:47.528 21:26:20 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:47.528 21:26:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:47.528 00:12:47.528 real 0m2.872s 00:12:47.528 user 0m2.569s 00:12:47.528 sys 0m0.239s 00:12:47.528 21:26:20 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:47.528 21:26:20 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:47.528 ************************************ 00:12:47.528 END TEST accel_dif_verify 00:12:47.528 ************************************ 00:12:47.787 21:26:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:47.787 21:26:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:47.787 21:26:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:47.787 21:26:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.787 21:26:20 accel -- common/autotest_common.sh@10 -- # set +x 00:12:47.787 ************************************ 00:12:47.787 START TEST accel_dif_generate 00:12:47.787 ************************************ 00:12:47.787 21:26:20 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:47.787 
21:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:47.787 21:26:20 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:47.787 [2024-07-15 21:26:20.996772] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:47.787 [2024-07-15 21:26:20.996941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116372 ] 00:12:48.047 [2024-07-15 21:26:21.165823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.306 [2024-07-15 21:26:21.449786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate 
-- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
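By this point the trace has shown the buffer geometry the DIF jobs use: 4096-byte data buffers plus a 512-byte and an 8-byte value. The 8 bytes matches the size of one T10 DIF tuple, and the 512-byte value is plausibly the protection-information interval, though that is an inference rather than something the log states. Since the DIF workloads only differ in the -w argument passed to accel_perf, the two traced so far can be driven back to back with a short loop; a sketch under the same assumptions as above:

#!/usr/bin/env bash
# Sketch: run the two DIF workloads traced so far, one after the other.
# Binary path copied from the log; default config and sizes are assumed.
perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
for w in dif_verify dif_generate; do
    echo "== $w =="
    "$perf" -t 1 -w "$w"
done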
00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:48.567 21:26:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:50.475 21:26:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:50.475 00:12:50.475 real 0m2.862s 00:12:50.475 user 0m2.535s 00:12:50.475 sys 
0m0.255s 00:12:50.475 21:26:23 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:50.475 ************************************ 00:12:50.475 END TEST accel_dif_generate 00:12:50.475 21:26:23 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:50.475 ************************************ 00:12:50.734 21:26:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:50.734 21:26:23 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:50.734 21:26:23 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:50.734 21:26:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.734 21:26:23 accel -- common/autotest_common.sh@10 -- # set +x 00:12:50.734 ************************************ 00:12:50.734 START TEST accel_dif_generate_copy 00:12:50.734 ************************************ 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:50.734 21:26:23 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:50.734 [2024-07-15 21:26:23.913053] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:12:50.734 [2024-07-15 21:26:23.913238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116435 ] 00:12:50.734 [2024-07-15 21:26:24.081528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.302 [2024-07-15 21:26:24.377894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.569 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.570 21:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
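accel_dif_generate_copy, set up above, is the variant that produces protection information while copying the data into a separate destination buffer rather than updating it in place — a reading inferred from the operation name, not stated in the log. The harness treats a nonzero exit from accel_perf as a failed step (the traced 'return 0' after each END TEST banner comes from its common wrapper); a standalone check in the same spirit:

#!/usr/bin/env bash
# Sketch: run the copy variant on its own and surface pass/fail explicitly.
# Binary path copied from the trace; omitting the JSON config is an assumption.
perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
if "$perf" -t 1 -w dif_generate_copy; then
    echo "accel_dif_generate_copy: OK"
else
    echo "accel_dif_generate_copy: FAILED (exit $?)" >&2
    exit 1
fi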
00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:53.482 00:12:53.482 real 0m2.924s 00:12:53.482 user 0m2.593s 00:12:53.482 sys 0m0.252s 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.482 21:26:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:53.482 ************************************ 00:12:53.482 END TEST accel_dif_generate_copy 00:12:53.482 ************************************ 00:12:53.482 21:26:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:53.482 21:26:26 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:53.482 21:26:26 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.482 21:26:26 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:53.482 21:26:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.482 21:26:26 accel -- common/autotest_common.sh@10 -- # set +x 00:12:53.482 ************************************ 00:12:53.482 START TEST accel_comp 00:12:53.482 ************************************ 00:12:53.482 21:26:26 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:53.482 21:26:26 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:53.482 21:26:26 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:53.741 [2024-07-15 21:26:26.902174] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:53.741 [2024-07-15 21:26:26.902371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116493 ] 00:12:53.741 [2024-07-15 21:26:27.071238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.000 [2024-07-15 21:26:27.361649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.567 21:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:56.481 21:26:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:56.481 00:12:56.481 real 0m2.845s 00:12:56.481 user 0m2.531s 00:12:56.481 sys 0m0.245s 00:12:56.481 21:26:29 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.481 21:26:29 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:56.481 ************************************ 00:12:56.481 END TEST accel_comp 00:12:56.481 ************************************ 00:12:56.481 21:26:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:56.481 21:26:29 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:56.481 21:26:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:56.481 21:26:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.481 21:26:29 accel -- common/autotest_common.sh@10 -- # set +x 00:12:56.481 ************************************ 00:12:56.481 START TEST accel_decomp 00:12:56.481 ************************************ 00:12:56.481 21:26:29 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:56.481 21:26:29 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:56.482 21:26:29 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:56.482 [2024-07-15 21:26:29.803946] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:56.482 [2024-07-15 21:26:29.804091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116574 ] 00:12:56.739 [2024-07-15 21:26:29.966015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.998 [2024-07-15 21:26:30.257220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.256 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
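The accel_comp and accel_decomp runs traced above are the only cases here that read a real input file: both point accel_perf at /home/vagrant/spdk_repo/spdk/test/accel/bib via -l, and the decompress side adds -y, presumably to verify the round-tripped data (an inference from how -y is used elsewhere in this log). A standalone sketch of the same pair, with the fd-based config again replaced by defaults:

#!/usr/bin/env bash
# Sketch: compress and then decompress the harness's test file.
# Paths copied from the traced command lines; -y on decompress mirrors the log.
perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
"$perf" -t 1 -w compress   -l "$bib"
"$perf" -t 1 -w decompress -l "$bib" -y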
00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.257 21:26:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:59.789 21:26:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:59.789 00:12:59.789 real 0m2.845s 00:12:59.789 user 0m2.573s 00:12:59.789 sys 0m0.209s 00:12:59.789 21:26:32 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.789 21:26:32 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:59.789 ************************************ 00:12:59.790 END TEST accel_decomp 00:12:59.790 ************************************ 00:12:59.790 21:26:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:59.790 21:26:32 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:59.790 21:26:32 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:59.790 21:26:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.790 21:26:32 accel -- common/autotest_common.sh@10 -- # set +x 00:12:59.790 ************************************ 00:12:59.790 START TEST accel_decomp_full 00:12:59.790 ************************************ 00:12:59.790 21:26:32 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:12:59.790 21:26:32 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:12:59.790 [2024-07-15 21:26:32.715777] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
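accel_decomp_full repeats the same workload with -o 0 appended; judging from the echoed config ('111250 bytes' here versus '4096 bytes' in the previous test), that switches accel_perf to operating on the whole bib payload per operation instead of 4096-byte blocks. Standalone sketch under the same assumptions as above:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0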
00:12:59.790 [2024-07-15 21:26:32.715970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116632 ] 00:12:59.790 [2024-07-15 21:26:32.881877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.790 [2024-07-15 21:26:33.133135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.047 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.047 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.047 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.047 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.047 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.048 21:26:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:02.574 21:26:35 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:02.574 21:26:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:02.574 00:13:02.574 real 0m2.696s 00:13:02.574 user 0m2.458s 00:13:02.574 sys 0m0.177s 00:13:02.574 21:26:35 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.574 21:26:35 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:13:02.574 ************************************ 00:13:02.574 END TEST accel_decomp_full 00:13:02.574 ************************************ 00:13:02.574 21:26:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:02.574 21:26:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:02.574 21:26:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:02.574 21:26:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.574 21:26:35 accel -- common/autotest_common.sh@10 -- # set +x 00:13:02.574 ************************************ 00:13:02.574 START TEST accel_decomp_mcore 00:13:02.574 ************************************ 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:02.574 21:26:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:02.574 [2024-07-15 21:26:35.471908] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:02.574 [2024-07-15 21:26:35.472148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116688 ] 00:13:02.574 [2024-07-15 21:26:35.659214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.574 [2024-07-15 21:26:35.914749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.574 [2024-07-15 21:26:35.915085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.574 [2024-07-15 21:26:35.914957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.575 [2024-07-15 21:26:35.915099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.141 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.142 21:26:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:05.042 00:13:05.042 real 0m2.804s 00:13:05.042 user 0m8.179s 00:13:05.042 sys 0m0.234s 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.042 21:26:38 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:05.042 ************************************ 00:13:05.042 END TEST accel_decomp_mcore 00:13:05.042 ************************************ 00:13:05.042 21:26:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:05.042 21:26:38 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:05.042 21:26:38 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:05.042 21:26:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.042 21:26:38 accel -- common/autotest_common.sh@10 -- # set +x 00:13:05.042 ************************************ 00:13:05.042 START TEST accel_decomp_full_mcore 00:13:05.042 ************************************ 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:05.042 21:26:38 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:05.042 21:26:38 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:05.042 [2024-07-15 21:26:38.343175] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:05.042 [2024-07-15 21:26:38.343326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116749 ] 00:13:05.303 [2024-07-15 21:26:38.517899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.563 [2024-07-15 21:26:38.788024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.563 [2024-07-15 21:26:38.788100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.563 [2024-07-15 21:26:38.788254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.563 [2024-07-15 21:26:38.788268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:05.821 21:26:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.821 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.822 21:26:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.822 21:26:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:08.354 00:13:08.354 real 0m2.847s 00:13:08.354 user 0m8.363s 00:13:08.354 sys 0m0.219s 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.354 21:26:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:08.354 ************************************ 00:13:08.354 END TEST accel_decomp_full_mcore 00:13:08.354 ************************************ 00:13:08.354 21:26:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:08.354 21:26:41 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:08.354 21:26:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:08.354 21:26:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.354 21:26:41 accel -- common/autotest_common.sh@10 -- # set +x 00:13:08.354 ************************************ 00:13:08.354 START TEST accel_decomp_mthread 00:13:08.354 ************************************ 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:08.355 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:08.355 [2024-07-15 21:26:41.259962] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:08.355 [2024-07-15 21:26:41.260154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116829 ] 00:13:08.355 [2024-07-15 21:26:41.429574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.355 [2024-07-15 21:26:41.667556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
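The two mcore tests above add -m 0xf, SPDK's core-mask option: their EAL lines report four available cores, four reactors start on cores 0-3, and the roughly 8 seconds of user time against ~2.8 seconds of wall time is consistent with the decompress work running on all four cores in parallel. accel_decomp_mthread instead stays on a single core and passes -T 2, which the echoed config reflects as two worker threads. Sketch, same assumptions as earlier:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2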
00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:08.614 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.615 21:26:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.161 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:11.162 00:13:11.162 real 0m2.788s 00:13:11.162 user 0m2.522s 00:13:11.162 sys 0m0.198s 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.162 21:26:43 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:11.162 ************************************ 00:13:11.162 END TEST accel_decomp_mthread 00:13:11.162 ************************************ 00:13:11.162 21:26:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:11.162 21:26:44 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:11.162 21:26:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:11.162 21:26:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.162 21:26:44 accel -- common/autotest_common.sh@10 -- # set +x 00:13:11.162 ************************************ 00:13:11.162 START 
TEST accel_decomp_full_mthread 00:13:11.162 ************************************ 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:11.162 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:11.162 [2024-07-15 21:26:44.102074] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:11.162 [2024-07-15 21:26:44.102260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116887 ] 00:13:11.162 [2024-07-15 21:26:44.272214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.421 [2024-07-15 21:26:44.567726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:11.713 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:11.714 21:26:44 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:11.714 21:26:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:13.616 00:13:13.616 real 0m2.929s 00:13:13.616 user 0m2.575s 00:13:13.616 sys 0m0.287s 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.616 21:26:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:13.616 ************************************ 00:13:13.616 END TEST accel_decomp_full_mthread 00:13:13.616 ************************************ 
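Both mthread cases above exercise the same accel_perf example binary shown in the xtrace, handing it an accel JSON config on /dev/fd/62 and then confirming that the software module actually executed the decompress opcode (the '[[ -n software ]]' / '[[ -n decompress ]]' checks) before printing the real/user/sys timings. A rough way to reproduce that run outside the harness is sketched below; flag meanings are inferred from the run_test invocation in the log rather than from accel_perf's help text, and the empty JSON config is a placeholder assumption (the harness builds its own with build_accel_config).

    # Sketch: re-run the decompress workload from accel_decomp_full_mthread by hand.
    SPDK=/home/vagrant/spdk_repo/spdk
    cfg=$(mktemp)
    printf '%s\n' '{"subsystems": []}' > "$cfg"   # stand-in for what build_accel_config emits (assumption)
    "$SPDK/build/examples/accel_perf" -c "$cfg" \
        -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2
    rm -f "$cfg"

The -T 2 argument is what the _mthread suffix refers to (two worker threads, matching the 'val=2' entry in the config dump); the remaining flags are copied verbatim from the run_test line above, and -t 1 corresponds to the '1 seconds' duration setting.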
00:13:13.876 21:26:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:13.876 21:26:47 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:13.876 21:26:47 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:13.876 21:26:47 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:13.876 21:26:47 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:13.876 21:26:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.876 21:26:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.876 21:26:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.876 21:26:47 accel -- common/autotest_common.sh@10 -- # set +x 00:13:13.876 21:26:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.876 21:26:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.876 21:26:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.876 21:26:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:13.876 21:26:47 accel -- accel/accel.sh@41 -- # jq -r . 00:13:13.876 ************************************ 00:13:13.876 START TEST accel_dif_functional_tests 00:13:13.876 ************************************ 00:13:13.876 21:26:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:13.876 [2024-07-15 21:26:47.130242] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:13.876 [2024-07-15 21:26:47.130569] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116950 ] 00:13:14.136 [2024-07-15 21:26:47.330627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:14.394 [2024-07-15 21:26:47.599516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.394 [2024-07-15 21:26:47.599652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.394 [2024-07-15 21:26:47.599673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.652 00:13:14.652 00:13:14.652 CUnit - A unit testing framework for C - Version 2.1-3 00:13:14.652 http://cunit.sourceforge.net/ 00:13:14.652 00:13:14.652 00:13:14.652 Suite: accel_dif 00:13:14.652 Test: verify: DIF generated, GUARD check ...passed 00:13:14.652 Test: verify: DIF generated, APPTAG check ...passed 00:13:14.652 Test: verify: DIF generated, REFTAG check ...passed 00:13:14.652 Test: verify: DIF not generated, GUARD check ...[2024-07-15 21:26:48.013862] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:14.652 passed 00:13:14.652 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 21:26:48.014009] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:14.652 passed 00:13:14.652 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 21:26:48.014082] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:14.652 passed 00:13:14.652 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:14.652 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 21:26:48.014209] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:14.652 passed 00:13:14.652 Test: verify: APPTAG incorrect, 
no APPTAG check ...passed 00:13:14.652 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:14.652 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:14.652 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 21:26:48.014447] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:14.652 passed 00:13:14.652 Test: verify copy: DIF generated, GUARD check ...passed 00:13:14.652 Test: verify copy: DIF generated, APPTAG check ...passed 00:13:14.652 Test: verify copy: DIF generated, REFTAG check ...passed 00:13:14.652 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 21:26:48.014708] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:14.652 passed 00:13:14.652 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 21:26:48.014822] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:14.652 passed 00:13:14.652 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 21:26:48.014903] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:14.652 passed 00:13:14.652 Test: generate copy: DIF generated, GUARD check ...passed 00:13:14.652 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:14.652 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:14.652 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:14.652 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:14.652 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:14.652 Test: generate copy: iovecs-len validate ...[2024-07-15 21:26:48.015344] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:13:14.652 passed 00:13:14.652 Test: generate copy: buffer alignment validate ...passed 00:13:14.652 00:13:14.652 Run Summary: Type Total Ran Passed Failed Inactive 00:13:14.652 suites 1 1 n/a 0 0 00:13:14.652 tests 26 26 26 0 0 00:13:14.652 asserts 115 115 115 0 n/a 00:13:14.652 00:13:14.652 Elapsed time = 0.001 seconds 00:13:16.603 00:13:16.603 real 0m2.439s 00:13:16.603 user 0m4.803s 00:13:16.603 sys 0m0.360s 00:13:16.603 21:26:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.603 21:26:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:16.603 ************************************ 00:13:16.603 END TEST accel_dif_functional_tests 00:13:16.603 ************************************ 00:13:16.603 21:26:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:16.603 00:13:16.603 real 1m9.878s 00:13:16.603 user 1m15.999s 00:13:16.603 sys 0m7.233s 00:13:16.603 21:26:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.603 21:26:49 accel -- common/autotest_common.sh@10 -- # set +x 00:13:16.603 ************************************ 00:13:16.603 END TEST accel 00:13:16.603 ************************************ 00:13:16.603 21:26:49 -- common/autotest_common.sh@1142 -- # return 0 00:13:16.604 21:26:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:16.604 21:26:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:16.604 21:26:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.604 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:13:16.604 ************************************ 00:13:16.604 START TEST accel_rpc 00:13:16.604 ************************************ 00:13:16.604 21:26:49 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:16.604 * Looking for test storage... 00:13:16.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:16.604 21:26:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:16.604 21:26:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=117053 00:13:16.604 21:26:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 117053 00:13:16.604 21:26:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:16.604 21:26:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 117053 ']' 00:13:16.604 21:26:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.604 21:26:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.604 21:26:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.604 21:26:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.604 21:26:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.604 [2024-07-15 21:26:49.815333] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
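The accel_dif_functional_tests suite that just completed exercises T10 DIF protection information: each protected block carries an 8-byte PI field holding a 16-bit Guard (a CRC over the block data), a 16-bit Application Tag, and a 32-bit Reference Tag, and the 'Failed to compare ...: Expected=..., Actual=...' messages above are the deliberate negative cases proving that a corrupted field is caught by the verify and verify-copy paths. The binary can be run on its own the same way the harness does; the JSON config content below is a placeholder assumption, since the harness generates its own with build_accel_config and passes it over /dev/fd/62.

    # Sketch: run the DIF functional test binary directly, as the harness does above.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/accel/dif/dif" -c <(printf '%s\n' '{"subsystems": []}')

run_test keys off the binary's exit status, which the CUnit Run Summary above reflects (26 of 26 tests passed, 0 failed).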
00:13:16.604 [2024-07-15 21:26:49.815584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117053 ] 00:13:16.863 [2024-07-15 21:26:50.003723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.122 [2024-07-15 21:26:50.305361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.382 21:26:50 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.382 21:26:50 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:17.382 21:26:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:17.382 21:26:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:17.382 21:26:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:17.382 21:26:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:17.382 21:26:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:17.382 21:26:50 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:17.382 21:26:50 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.382 21:26:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.382 ************************************ 00:13:17.382 START TEST accel_assign_opcode 00:13:17.382 ************************************ 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:17.382 [2024-07-15 21:26:50.705492] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:17.382 [2024-07-15 21:26:50.717381] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.382 21:26:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.761 software 00:13:18.761 00:13:18.761 real 0m1.231s 00:13:18.761 user 0m0.062s 00:13:18.761 sys 0m0.010s 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.761 21:26:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:18.761 ************************************ 00:13:18.761 END TEST accel_assign_opcode 00:13:18.761 ************************************ 00:13:18.761 21:26:51 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:18.761 21:26:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 117053 00:13:18.761 21:26:51 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 117053 ']' 00:13:18.761 21:26:51 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 117053 00:13:18.761 21:26:51 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:13:18.761 21:26:51 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.761 21:26:51 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117053 00:13:18.761 21:26:52 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:18.761 21:26:52 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:18.761 21:26:52 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117053' 00:13:18.761 killing process with pid 117053 00:13:18.761 21:26:52 accel_rpc -- common/autotest_common.sh@967 -- # kill 117053 00:13:18.761 21:26:52 accel_rpc -- common/autotest_common.sh@972 -- # wait 117053 00:13:22.114 00:13:22.114 real 0m5.640s 00:13:22.114 user 0m5.433s 00:13:22.114 sys 0m0.724s 00:13:22.114 21:26:55 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.114 21:26:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.114 ************************************ 00:13:22.114 END TEST accel_rpc 00:13:22.114 ************************************ 00:13:22.114 21:26:55 -- common/autotest_common.sh@1142 -- # return 0 00:13:22.114 21:26:55 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:22.114 21:26:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:22.114 21:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.114 21:26:55 -- common/autotest_common.sh@10 -- # set +x 00:13:22.114 ************************************ 00:13:22.114 START TEST app_cmdline 00:13:22.114 ************************************ 00:13:22.114 21:26:55 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:22.114 * Looking for test storage... 
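The accel_assign_opcode test above makes its assignments while the target is still waiting for initialization: the spdk_tgt was started with --wait-for-rpc (pid 117053 in this run), the script assigns the 'copy' opcode first to the bogus module 'incorrect' and then to 'software', calls framework_start_init, and only afterwards reads the assignments back. rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same flow can be reproduced by hand; this is a sketch of that sequence, not the test script itself.

    # Same flow as the test, issued by hand (rpc_cmd in the log wraps scripts/rpc.py).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software     # issued while the target is still in --wait-for-rpc
    $RPC framework_start_init                     # finish subsystem initialization
    $RPC accel_get_opc_assignments | jq -r .copy  # prints "software", which the grep above checks for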
00:13:22.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:22.114 21:26:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:22.114 21:26:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=117213 00:13:22.114 21:26:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:22.114 21:26:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 117213 00:13:22.114 21:26:55 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 117213 ']' 00:13:22.114 21:26:55 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.114 21:26:55 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.114 21:26:55 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.114 21:26:55 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.114 21:26:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:22.373 [2024-07-15 21:26:55.501890] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:22.374 [2024-07-15 21:26:55.502197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117213 ] 00:13:22.374 [2024-07-15 21:26:55.671962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.634 [2024-07-15 21:26:55.953119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.013 21:26:57 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.013 21:26:57 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:13:24.013 21:26:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:24.013 { 00:13:24.013 "version": "SPDK v24.09-pre git sha1 0663932f5", 00:13:24.013 "fields": { 00:13:24.013 "major": 24, 00:13:24.013 "minor": 9, 00:13:24.013 "patch": 0, 00:13:24.013 "suffix": "-pre", 00:13:24.013 "commit": "0663932f5" 00:13:24.013 } 00:13:24.013 } 00:13:24.013 21:26:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:24.013 21:26:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:24.013 21:26:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:24.013 21:26:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:24.013 21:26:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:24.013 21:26:57 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.013 21:26:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:24.013 21:26:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:24.013 21:26:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:24.013 21:26:57 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.289 21:26:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:24.289 21:26:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:24.289 21:26:57 app_cmdline 
-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:24.289 request: 00:13:24.289 { 00:13:24.289 "method": "env_dpdk_get_mem_stats", 00:13:24.289 "req_id": 1 00:13:24.289 } 00:13:24.289 Got JSON-RPC error response 00:13:24.289 response: 00:13:24.289 { 00:13:24.289 "code": -32601, 00:13:24.289 "message": "Method not found" 00:13:24.289 } 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:24.289 21:26:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 117213 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 117213 ']' 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 117213 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.289 21:26:57 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117213 00:13:24.557 21:26:57 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:24.557 21:26:57 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:24.557 21:26:57 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117213' 00:13:24.557 killing process with pid 117213 00:13:24.557 21:26:57 app_cmdline -- common/autotest_common.sh@967 -- # kill 117213 00:13:24.557 21:26:57 app_cmdline -- common/autotest_common.sh@972 -- # wait 117213 00:13:27.844 00:13:27.844 real 0m5.610s 00:13:27.844 user 0m5.779s 00:13:27.844 sys 0m0.707s 00:13:27.844 21:27:00 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.844 21:27:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:27.844 ************************************ 00:13:27.844 END TEST app_cmdline 00:13:27.844 ************************************ 00:13:27.844 21:27:00 -- common/autotest_common.sh@1142 -- # return 0 00:13:27.844 21:27:00 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:27.844 21:27:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:27.844 21:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.844 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:13:27.844 ************************************ 00:13:27.844 START TEST version 00:13:27.844 ************************************ 00:13:27.844 21:27:00 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:27.844 * Looking for test storage... 00:13:27.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:27.844 21:27:01 version -- app/version.sh@17 -- # get_header_version major 00:13:27.844 21:27:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:27.844 21:27:01 version -- app/version.sh@14 -- # cut -f2 00:13:27.844 21:27:01 version -- app/version.sh@14 -- # tr -d '"' 00:13:27.844 21:27:01 version -- app/version.sh@17 -- # major=24 00:13:27.844 21:27:01 version -- app/version.sh@18 -- # get_header_version minor 00:13:27.844 21:27:01 version -- app/version.sh@14 -- # cut -f2 00:13:27.844 21:27:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:27.844 21:27:01 version -- app/version.sh@14 -- # tr -d '"' 00:13:27.844 21:27:01 version -- app/version.sh@18 -- # minor=9 00:13:27.844 21:27:01 version -- app/version.sh@19 -- # get_header_version patch 00:13:27.844 21:27:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:27.844 21:27:01 version -- app/version.sh@14 -- # cut -f2 00:13:27.844 21:27:01 version -- app/version.sh@14 -- # tr -d '"' 00:13:27.844 21:27:01 version -- app/version.sh@19 -- # patch=0 00:13:27.844 21:27:01 version -- app/version.sh@20 -- # get_header_version suffix 00:13:27.844 21:27:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:27.844 21:27:01 version -- app/version.sh@14 -- # cut -f2 00:13:27.844 21:27:01 version -- app/version.sh@14 -- # tr -d '"' 00:13:27.844 21:27:01 version -- app/version.sh@20 -- # suffix=-pre 00:13:27.844 21:27:01 version -- app/version.sh@22 -- # version=24.9 00:13:27.844 21:27:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:27.844 21:27:01 version -- app/version.sh@28 -- # version=24.9rc0 00:13:27.844 21:27:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:27.844 21:27:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:27.844 21:27:01 version -- app/version.sh@30 -- # py_version=24.9rc0 00:13:27.844 21:27:01 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:13:27.844 00:13:27.844 real 0m0.183s 00:13:27.844 user 0m0.105s 00:13:27.844 sys 0m0.124s 00:13:27.844 21:27:01 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.844 21:27:01 version -- common/autotest_common.sh@10 -- # set +x 00:13:27.844 ************************************ 00:13:27.844 END TEST version 00:13:27.844 ************************************ 
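The version test that just finished is a pure text check: get_header_version greps the #define lines out of include/spdk/version.h (major 24, minor 9, patch 0, suffix -pre, hence 24.9rc0) and requires the result to match what the bundled Python package reports. The same pipeline, lifted directly from the commands in the log, can be run standalone; the expected values in the comments are the ones printed in this run.

    # Extract the version the same way app/version.sh does above.
    VH=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$VH" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$VH" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$VH" | cut -f2 | tr -d '"')
    echo "header: $major.$minor$suffix"            # 24.9-pre in this run
    # Cross-check against the Python package, as the script does:
    PYTHONPATH=/home/vagrant/spdk_repo/spdk/python \
        python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0 in this run

blockdev_general, which starts next, goes back to driving a full spdk_tgt over RPC, which is why the --wait-for-rpc and waitforlisten pattern reappears immediately below.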
00:13:27.844 21:27:01 -- common/autotest_common.sh@1142 -- # return 0 00:13:27.844 21:27:01 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:13:27.844 21:27:01 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:27.844 21:27:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:27.844 21:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.844 21:27:01 -- common/autotest_common.sh@10 -- # set +x 00:13:28.103 ************************************ 00:13:28.103 START TEST blockdev_general 00:13:28.103 ************************************ 00:13:28.103 21:27:01 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:28.103 * Looking for test storage... 00:13:28.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:28.103 21:27:01 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=117434 00:13:28.103 21:27:01 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:13:28.104 21:27:01 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:28.104 21:27:01 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 117434 00:13:28.104 21:27:01 blockdev_general -- 
common/autotest_common.sh@829 -- # '[' -z 117434 ']' 00:13:28.104 21:27:01 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.104 21:27:01 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.104 21:27:01 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.104 21:27:01 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.104 21:27:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:28.104 [2024-07-15 21:27:01.406554] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:28.104 [2024-07-15 21:27:01.406723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117434 ] 00:13:28.379 [2024-07-15 21:27:01.573607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.638 [2024-07-15 21:27:01.862426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.206 21:27:02 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.206 21:27:02 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:13:29.206 21:27:02 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:29.206 21:27:02 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:13:29.206 21:27:02 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:13:29.206 21:27:02 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.206 21:27:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:30.141 [2024-07-15 21:27:03.456124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:30.141 [2024-07-15 21:27:03.456231] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:30.141 00:13:30.141 [2024-07-15 21:27:03.464032] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:30.141 [2024-07-15 21:27:03.464073] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:30.141 00:13:30.400 Malloc0 00:13:30.400 Malloc1 00:13:30.400 Malloc2 00:13:30.400 Malloc3 00:13:30.658 Malloc4 00:13:30.658 Malloc5 00:13:30.658 Malloc6 00:13:30.658 Malloc7 00:13:30.658 Malloc8 00:13:30.916 Malloc9 00:13:30.916 [2024-07-15 21:27:04.080731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:30.916 [2024-07-15 21:27:04.080815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.917 [2024-07-15 21:27:04.080847] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:30.917 [2024-07-15 21:27:04.080895] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.917 [2024-07-15 21:27:04.083268] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.917 [2024-07-15 21:27:04.083314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:30.917 TestPT 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.917 
21:27:04 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:13:30.917 5000+0 records in 00:13:30.917 5000+0 records out 00:13:30.917 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0291475 s, 351 MB/s 00:13:30.917 21:27:04 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:30.917 AIO0 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.917 21:27:04 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.917 21:27:04 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:13:30.917 21:27:04 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.917 21:27:04 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.917 21:27:04 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:30.917 21:27:04 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.176 21:27:04 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:31.176 21:27:04 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:31.176 21:27:04 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.176 21:27:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:31.176 21:27:04 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:31.176 21:27:04 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.176 21:27:04 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:31.176 21:27:04 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:31.177 21:27:04 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "1a85a85e-b973-4413-9489-5c6bb8cea844"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1a85a85e-b973-4413-9489-5c6bb8cea844",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "00d4d53f-b61d-5ae3-a783-ee3443a7eb21"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "00d4d53f-b61d-5ae3-a783-ee3443a7eb21",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "838274b4-3477-5595-8f4d-232dbffbd68a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "838274b4-3477-5595-8f4d-232dbffbd68a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "35c728ff-e43f-56be-b003-b7dedc319cf5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "35c728ff-e43f-56be-b003-b7dedc319cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' 
"a0ff9f61-61aa-5629-ad2a-183d19be71f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a0ff9f61-61aa-5629-ad2a-183d19be71f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "453eb0af-afc0-59ef-8141-2a1228a15fcc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "453eb0af-afc0-59ef-8141-2a1228a15fcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "68c1d849-0049-54ad-b73f-f4b73c109c68"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "68c1d849-0049-54ad-b73f-f4b73c109c68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b62cc09b-483a-5459-9c29-d6a7289a334d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b62cc09b-483a-5459-9c29-d6a7289a334d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "119ccd64-d582-5a4e-ae61-4863139e6eca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "119ccd64-d582-5a4e-ae61-4863139e6eca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ae8e436e-ed15-51ee-9da8-6438dcd8b4e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae8e436e-ed15-51ee-9da8-6438dcd8b4e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bac72eaa-f055-5643-8782-580195ec0fe9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bac72eaa-f055-5643-8782-580195ec0fe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1cecede1-d609-53c4-98b0-8d6900b431c3"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1cecede1-d609-53c4-98b0-8d6900b431c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "02aff188-d268-4ba9-87dc-8625ed7f28e3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "02aff188-d268-4ba9-87dc-8625ed7f28e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "02aff188-d268-4ba9-87dc-8625ed7f28e3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "31a43bb2-9524-4e48-b51d-e8d907cc7d5c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9e6b4f39-4239-40ae-a760-96619c22928c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "78822723-9e18-4538-ba9c-b92e3996a337"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "78822723-9e18-4538-ba9c-b92e3996a337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "78822723-9e18-4538-ba9c-b92e3996a337",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c583a477-3670-4e24-aaad-3b4097d44576",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "8977590d-8302-401a-b3ff-d3834a30ddde",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d3211ba3-1e62-4695-af05-6acc2f72d583"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d3211ba3-1e62-4695-af05-6acc2f72d583",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d3211ba3-1e62-4695-af05-6acc2f72d583",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6efaea0e-e9b2-461f-9752-ea59aac8cd14",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e73afa97-d610-4b54-b595-f875d3e9055c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "54c32968-56c8-4c92-b07d-bb5478e88751"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "54c32968-56c8-4c92-b07d-bb5478e88751",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' 
"copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:31.177 21:27:04 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:31.177 21:27:04 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:13:31.177 21:27:04 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:31.177 21:27:04 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 117434 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 117434 ']' 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 117434 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117434 00:13:31.177 killing process with pid 117434 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117434' 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@967 -- # kill 117434 00:13:31.177 21:27:04 blockdev_general -- common/autotest_common.sh@972 -- # wait 117434 00:13:36.474 21:27:08 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:36.474 21:27:08 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:36.474 21:27:08 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:36.474 21:27:08 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.474 21:27:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.474 ************************************ 00:13:36.474 START TEST bdev_hello_world 00:13:36.474 ************************************ 00:13:36.474 21:27:08 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:36.474 [2024-07-15 21:27:09.041833] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:36.474 [2024-07-15 21:27:09.042564] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117549 ] 00:13:36.474 [2024-07-15 21:27:09.208162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.474 [2024-07-15 21:27:09.466988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.733 [2024-07-15 21:27:09.963970] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:36.733 [2024-07-15 21:27:09.964152] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:36.733 [2024-07-15 21:27:09.971917] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:36.733 [2024-07-15 21:27:09.972016] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:36.733 [2024-07-15 21:27:09.979932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:36.733 [2024-07-15 21:27:09.980037] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:36.733 [2024-07-15 21:27:09.980097] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:36.993 [2024-07-15 21:27:10.216770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:36.993 [2024-07-15 21:27:10.216953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.993 [2024-07-15 21:27:10.216999] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:36.993 [2024-07-15 21:27:10.217045] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.993 [2024-07-15 21:27:10.219849] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.993 [2024-07-15 21:27:10.219944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:37.253 [2024-07-15 21:27:10.614889] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:37.253 [2024-07-15 21:27:10.615187] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:13:37.253 [2024-07-15 21:27:10.615398] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:37.253 [2024-07-15 21:27:10.615682] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:37.253 [2024-07-15 21:27:10.616001] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:37.253 [2024-07-15 21:27:10.616161] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:37.253 [2024-07-15 21:27:10.616386] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
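The hello_bdev run above opens Malloc0 from the JSON config, writes a buffer, reads it back, and prints the "Hello World!" string. A minimal stand-alone sketch of the same invocation, assuming a default SPDK build tree instead of the /home/vagrant paths used on the CI host:

# Hypothetical reproduction of the step above, run from the repository root;
# adjust the paths to your own checkout and build directory.
./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Malloc0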
00:13:37.253 00:13:37.253 [2024-07-15 21:27:10.616548] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:39.817 ************************************ 00:13:39.817 END TEST bdev_hello_world 00:13:39.817 ************************************ 00:13:39.817 00:13:39.817 real 0m4.200s 00:13:39.817 user 0m3.503s 00:13:39.817 sys 0m0.529s 00:13:39.817 21:27:13 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.817 21:27:13 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:40.075 21:27:13 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:40.075 21:27:13 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:40.075 21:27:13 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.075 21:27:13 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.075 21:27:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:40.075 ************************************ 00:13:40.075 START TEST bdev_bounds 00:13:40.075 ************************************ 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=117640 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 117640' 00:13:40.075 Process bdevio pid: 117640 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 117640 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 117640 ']' 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.075 21:27:13 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:40.075 [2024-07-15 21:27:13.312994] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
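The bdev_bounds test launched here starts the bdevio application in wait mode (-w) against the same JSON config and then drives the per-bdev suites through its RPC socket. A rough sketch of that two-step invocation, with relative paths assumed for illustration:

# Hypothetical reproduction of the bounds test: bdevio waits for RPC commands,
# tests.py then asks it to run all registered suites.
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
./test/bdev/bdevio/tests.py perform_tests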
00:13:40.075 [2024-07-15 21:27:13.313379] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117640 ] 00:13:40.333 [2024-07-15 21:27:13.486235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.592 [2024-07-15 21:27:13.770097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.592 [2024-07-15 21:27:13.770265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.592 [2024-07-15 21:27:13.770274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.159 [2024-07-15 21:27:14.315214] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:41.159 [2024-07-15 21:27:14.315458] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:41.159 [2024-07-15 21:27:14.323116] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:41.159 [2024-07-15 21:27:14.323307] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:41.159 [2024-07-15 21:27:14.331108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:41.159 [2024-07-15 21:27:14.331315] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:41.159 [2024-07-15 21:27:14.331368] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:41.417 [2024-07-15 21:27:14.617686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:41.417 [2024-07-15 21:27:14.617965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.417 [2024-07-15 21:27:14.618080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:41.417 [2024-07-15 21:27:14.618137] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.417 [2024-07-15 21:27:14.620940] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.417 [2024-07-15 21:27:14.621081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:41.985 21:27:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:41.985 21:27:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:13:41.985 21:27:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:41.985 I/O targets: 00:13:41.985 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:13:41.985 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:13:41.985 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:13:41.985 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:13:41.985 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:13:41.985 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:13:41.985 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:13:41.985 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:13:41.985 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:13:41.985 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:13:41.985 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:13:41.985 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:13:41.985 raid0: 131072 blocks of 512 bytes (64 MiB) 00:13:41.985 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:13:41.985 raid1: 65536 blocks of 512 bytes (32 MiB) 00:13:41.985 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:13:41.985 00:13:41.985 00:13:41.985 CUnit - A unit testing framework for C - Version 2.1-3 00:13:41.985 http://cunit.sourceforge.net/ 00:13:41.985 00:13:41.985 00:13:41.986 Suite: bdevio tests on: AIO0 00:13:41.986 Test: blockdev write read block ...passed 00:13:41.986 Test: blockdev write zeroes read block ...passed 00:13:41.986 Test: blockdev write zeroes read no split ...passed 00:13:41.986 Test: blockdev write zeroes read split ...passed 00:13:41.986 Test: blockdev write zeroes read split partial ...passed 00:13:41.986 Test: blockdev reset ...passed 00:13:41.986 Test: blockdev write read 8 blocks ...passed 00:13:41.986 Test: blockdev write read size > 128k ...passed 00:13:41.986 Test: blockdev write read invalid size ...passed 00:13:41.986 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:41.986 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:41.986 Test: blockdev write read max offset ...passed 00:13:41.986 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:41.986 Test: blockdev writev readv 8 blocks ...passed 00:13:41.986 Test: blockdev writev readv 30 x 1block ...passed 00:13:41.986 Test: blockdev writev readv block ...passed 00:13:41.986 Test: blockdev writev readv size > 128k ...passed 00:13:41.986 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:41.986 Test: blockdev comparev and writev ...passed 00:13:41.986 Test: blockdev nvme passthru rw ...passed 00:13:41.986 Test: blockdev nvme passthru vendor specific ...passed 00:13:41.986 Test: blockdev nvme admin passthru ...passed 00:13:41.986 Test: blockdev copy ...passed 00:13:41.986 Suite: bdevio tests on: raid1 00:13:41.986 Test: blockdev write read block ...passed 00:13:41.986 Test: blockdev write zeroes read block ...passed 00:13:41.986 Test: blockdev write zeroes read no split ...passed 00:13:41.986 Test: blockdev write zeroes read split ...passed 00:13:42.244 Test: blockdev write zeroes read split partial ...passed 00:13:42.244 Test: blockdev reset ...passed 00:13:42.244 Test: blockdev write read 8 blocks ...passed 00:13:42.244 Test: blockdev write read size > 128k ...passed 00:13:42.244 Test: blockdev write read invalid size ...passed 00:13:42.245 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.245 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.245 Test: blockdev write read max offset ...passed 00:13:42.245 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.245 Test: blockdev writev readv 8 blocks ...passed 00:13:42.245 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.245 Test: blockdev writev readv block ...passed 00:13:42.245 Test: blockdev writev readv size > 128k ...passed 00:13:42.245 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.245 Test: blockdev comparev and writev ...passed 00:13:42.245 Test: blockdev nvme passthru rw ...passed 00:13:42.245 Test: blockdev nvme passthru vendor specific ...passed 00:13:42.245 Test: blockdev nvme admin passthru ...passed 00:13:42.245 Test: blockdev copy ...passed 00:13:42.245 Suite: bdevio tests on: concat0 00:13:42.245 Test: blockdev write read block ...passed 00:13:42.245 Test: blockdev write zeroes read block ...passed 00:13:42.245 Test: blockdev write zeroes read no split ...passed 00:13:42.245 Test: blockdev write zeroes read split 
...passed 00:13:42.245 Test: blockdev write zeroes read split partial ...passed 00:13:42.245 Test: blockdev reset ...passed 00:13:42.245 Test: blockdev write read 8 blocks ...passed 00:13:42.245 Test: blockdev write read size > 128k ...passed 00:13:42.245 Test: blockdev write read invalid size ...passed 00:13:42.245 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.245 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.245 Test: blockdev write read max offset ...passed 00:13:42.245 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.245 Test: blockdev writev readv 8 blocks ...passed 00:13:42.245 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.245 Test: blockdev writev readv block ...passed 00:13:42.245 Test: blockdev writev readv size > 128k ...passed 00:13:42.245 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.245 Test: blockdev comparev and writev ...passed 00:13:42.245 Test: blockdev nvme passthru rw ...passed 00:13:42.245 Test: blockdev nvme passthru vendor specific ...passed 00:13:42.245 Test: blockdev nvme admin passthru ...passed 00:13:42.245 Test: blockdev copy ...passed 00:13:42.245 Suite: bdevio tests on: raid0 00:13:42.245 Test: blockdev write read block ...passed 00:13:42.245 Test: blockdev write zeroes read block ...passed 00:13:42.245 Test: blockdev write zeroes read no split ...passed 00:13:42.245 Test: blockdev write zeroes read split ...passed 00:13:42.245 Test: blockdev write zeroes read split partial ...passed 00:13:42.245 Test: blockdev reset ...passed 00:13:42.245 Test: blockdev write read 8 blocks ...passed 00:13:42.245 Test: blockdev write read size > 128k ...passed 00:13:42.245 Test: blockdev write read invalid size ...passed 00:13:42.245 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.245 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.245 Test: blockdev write read max offset ...passed 00:13:42.245 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.245 Test: blockdev writev readv 8 blocks ...passed 00:13:42.245 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.245 Test: blockdev writev readv block ...passed 00:13:42.245 Test: blockdev writev readv size > 128k ...passed 00:13:42.245 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.245 Test: blockdev comparev and writev ...passed 00:13:42.245 Test: blockdev nvme passthru rw ...passed 00:13:42.245 Test: blockdev nvme passthru vendor specific ...passed 00:13:42.245 Test: blockdev nvme admin passthru ...passed 00:13:42.245 Test: blockdev copy ...passed 00:13:42.245 Suite: bdevio tests on: TestPT 00:13:42.245 Test: blockdev write read block ...passed 00:13:42.245 Test: blockdev write zeroes read block ...passed 00:13:42.505 Test: blockdev write zeroes read no split ...passed 00:13:42.505 Test: blockdev write zeroes read split ...passed 00:13:42.505 Test: blockdev write zeroes read split partial ...passed 00:13:42.505 Test: blockdev reset ...passed 00:13:42.505 Test: blockdev write read 8 blocks ...passed 00:13:42.505 Test: blockdev write read size > 128k ...passed 00:13:42.505 Test: blockdev write read invalid size ...passed 00:13:42.505 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.505 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.505 Test: blockdev write read max offset ...passed 00:13:42.505 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.505 Test: blockdev writev readv 8 blocks ...passed 00:13:42.505 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.505 Test: blockdev writev readv block ...passed 00:13:42.505 Test: blockdev writev readv size > 128k ...passed 00:13:42.505 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.505 Test: blockdev comparev and writev ...passed 00:13:42.505 Test: blockdev nvme passthru rw ...passed 00:13:42.505 Test: blockdev nvme passthru vendor specific ...passed 00:13:42.505 Test: blockdev nvme admin passthru ...passed 00:13:42.505 Test: blockdev copy ...passed 00:13:42.505 Suite: bdevio tests on: Malloc2p7 00:13:42.505 Test: blockdev write read block ...passed 00:13:42.505 Test: blockdev write zeroes read block ...passed 00:13:42.505 Test: blockdev write zeroes read no split ...passed 00:13:42.505 Test: blockdev write zeroes read split ...passed 00:13:42.505 Test: blockdev write zeroes read split partial ...passed 00:13:42.505 Test: blockdev reset ...passed 00:13:42.505 Test: blockdev write read 8 blocks ...passed 00:13:42.505 Test: blockdev write read size > 128k ...passed 00:13:42.505 Test: blockdev write read invalid size ...passed 00:13:42.505 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.505 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.505 Test: blockdev write read max offset ...passed 00:13:42.505 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.505 Test: blockdev writev readv 8 blocks ...passed 00:13:42.505 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.505 Test: blockdev writev readv block ...passed 00:13:42.505 Test: blockdev writev readv size > 128k ...passed 00:13:42.505 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.505 Test: blockdev comparev and writev ...passed 00:13:42.505 Test: blockdev nvme passthru rw ...passed 00:13:42.505 Test: blockdev nvme passthru vendor specific ...passed 00:13:42.505 Test: blockdev nvme admin passthru ...passed 00:13:42.505 Test: blockdev copy ...passed 00:13:42.505 Suite: bdevio tests on: Malloc2p6 00:13:42.505 Test: blockdev write read block ...passed 00:13:42.505 Test: blockdev write zeroes read block ...passed 00:13:42.505 Test: blockdev write zeroes read no split ...passed 00:13:42.505 Test: blockdev write zeroes read split ...passed 00:13:42.766 Test: blockdev write zeroes read split partial ...passed 00:13:42.766 Test: blockdev reset ...passed 00:13:42.766 Test: blockdev write read 8 blocks ...passed 00:13:42.766 Test: blockdev write read size > 128k ...passed 00:13:42.766 Test: blockdev write read invalid size ...passed 00:13:42.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.766 Test: blockdev write read max offset ...passed 00:13:42.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.766 Test: blockdev writev readv 8 blocks ...passed 00:13:42.766 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.766 Test: blockdev writev readv block ...passed 00:13:42.766 Test: blockdev writev readv size > 128k ...passed 00:13:42.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.766 Test: blockdev comparev and writev ...passed 00:13:42.766 Test: blockdev nvme passthru rw ...passed 00:13:42.766 Test: blockdev nvme passthru vendor 
specific ...passed 00:13:42.766 Test: blockdev nvme admin passthru ...passed 00:13:42.766 Test: blockdev copy ...passed 00:13:42.766 Suite: bdevio tests on: Malloc2p5 00:13:42.766 Test: blockdev write read block ...passed 00:13:42.766 Test: blockdev write zeroes read block ...passed 00:13:42.766 Test: blockdev write zeroes read no split ...passed 00:13:42.766 Test: blockdev write zeroes read split ...passed 00:13:42.766 Test: blockdev write zeroes read split partial ...passed 00:13:42.766 Test: blockdev reset ...passed 00:13:42.766 Test: blockdev write read 8 blocks ...passed 00:13:42.766 Test: blockdev write read size > 128k ...passed 00:13:42.766 Test: blockdev write read invalid size ...passed 00:13:42.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.766 Test: blockdev write read max offset ...passed 00:13:42.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.766 Test: blockdev writev readv 8 blocks ...passed 00:13:42.766 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.766 Test: blockdev writev readv block ...passed 00:13:42.766 Test: blockdev writev readv size > 128k ...passed 00:13:42.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.766 Test: blockdev comparev and writev ...passed 00:13:42.766 Test: blockdev nvme passthru rw ...passed 00:13:42.766 Test: blockdev nvme passthru vendor specific ...passed 00:13:42.766 Test: blockdev nvme admin passthru ...passed 00:13:42.766 Test: blockdev copy ...passed 00:13:42.766 Suite: bdevio tests on: Malloc2p4 00:13:42.766 Test: blockdev write read block ...passed 00:13:42.766 Test: blockdev write zeroes read block ...passed 00:13:42.766 Test: blockdev write zeroes read no split ...passed 00:13:42.766 Test: blockdev write zeroes read split ...passed 00:13:42.766 Test: blockdev write zeroes read split partial ...passed 00:13:42.766 Test: blockdev reset ...passed 00:13:42.766 Test: blockdev write read 8 blocks ...passed 00:13:42.766 Test: blockdev write read size > 128k ...passed 00:13:42.766 Test: blockdev write read invalid size ...passed 00:13:42.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.766 Test: blockdev write read max offset ...passed 00:13:42.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.766 Test: blockdev writev readv 8 blocks ...passed 00:13:42.766 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.766 Test: blockdev writev readv block ...passed 00:13:42.766 Test: blockdev writev readv size > 128k ...passed 00:13:42.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.766 Test: blockdev comparev and writev ...passed 00:13:42.766 Test: blockdev nvme passthru rw ...passed 00:13:42.766 Test: blockdev nvme passthru vendor specific ...passed 00:13:42.766 Test: blockdev nvme admin passthru ...passed 00:13:42.766 Test: blockdev copy ...passed 00:13:42.766 Suite: bdevio tests on: Malloc2p3 00:13:42.766 Test: blockdev write read block ...passed 00:13:42.766 Test: blockdev write zeroes read block ...passed 00:13:42.766 Test: blockdev write zeroes read no split ...passed 00:13:42.766 Test: blockdev write zeroes read split ...passed 00:13:43.026 Test: blockdev write zeroes read split partial ...passed 00:13:43.026 Test: blockdev reset ...passed 00:13:43.026 Test: 
blockdev write read 8 blocks ...passed 00:13:43.026 Test: blockdev write read size > 128k ...passed 00:13:43.026 Test: blockdev write read invalid size ...passed 00:13:43.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.026 Test: blockdev write read max offset ...passed 00:13:43.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.026 Test: blockdev writev readv 8 blocks ...passed 00:13:43.026 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.026 Test: blockdev writev readv block ...passed 00:13:43.026 Test: blockdev writev readv size > 128k ...passed 00:13:43.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.026 Test: blockdev comparev and writev ...passed 00:13:43.026 Test: blockdev nvme passthru rw ...passed 00:13:43.026 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.026 Test: blockdev nvme admin passthru ...passed 00:13:43.026 Test: blockdev copy ...passed 00:13:43.026 Suite: bdevio tests on: Malloc2p2 00:13:43.026 Test: blockdev write read block ...passed 00:13:43.026 Test: blockdev write zeroes read block ...passed 00:13:43.026 Test: blockdev write zeroes read no split ...passed 00:13:43.026 Test: blockdev write zeroes read split ...passed 00:13:43.026 Test: blockdev write zeroes read split partial ...passed 00:13:43.026 Test: blockdev reset ...passed 00:13:43.026 Test: blockdev write read 8 blocks ...passed 00:13:43.026 Test: blockdev write read size > 128k ...passed 00:13:43.026 Test: blockdev write read invalid size ...passed 00:13:43.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.026 Test: blockdev write read max offset ...passed 00:13:43.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.026 Test: blockdev writev readv 8 blocks ...passed 00:13:43.026 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.026 Test: blockdev writev readv block ...passed 00:13:43.026 Test: blockdev writev readv size > 128k ...passed 00:13:43.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.026 Test: blockdev comparev and writev ...passed 00:13:43.026 Test: blockdev nvme passthru rw ...passed 00:13:43.026 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.026 Test: blockdev nvme admin passthru ...passed 00:13:43.026 Test: blockdev copy ...passed 00:13:43.026 Suite: bdevio tests on: Malloc2p1 00:13:43.026 Test: blockdev write read block ...passed 00:13:43.026 Test: blockdev write zeroes read block ...passed 00:13:43.026 Test: blockdev write zeroes read no split ...passed 00:13:43.026 Test: blockdev write zeroes read split ...passed 00:13:43.026 Test: blockdev write zeroes read split partial ...passed 00:13:43.026 Test: blockdev reset ...passed 00:13:43.026 Test: blockdev write read 8 blocks ...passed 00:13:43.026 Test: blockdev write read size > 128k ...passed 00:13:43.026 Test: blockdev write read invalid size ...passed 00:13:43.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.026 Test: blockdev write read max offset ...passed 00:13:43.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.026 Test: blockdev writev readv 8 blocks ...passed 00:13:43.026 
Test: blockdev writev readv 30 x 1block ...passed 00:13:43.026 Test: blockdev writev readv block ...passed 00:13:43.026 Test: blockdev writev readv size > 128k ...passed 00:13:43.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.026 Test: blockdev comparev and writev ...passed 00:13:43.026 Test: blockdev nvme passthru rw ...passed 00:13:43.026 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.026 Test: blockdev nvme admin passthru ...passed 00:13:43.026 Test: blockdev copy ...passed 00:13:43.026 Suite: bdevio tests on: Malloc2p0 00:13:43.026 Test: blockdev write read block ...passed 00:13:43.026 Test: blockdev write zeroes read block ...passed 00:13:43.026 Test: blockdev write zeroes read no split ...passed 00:13:43.026 Test: blockdev write zeroes read split ...passed 00:13:43.285 Test: blockdev write zeroes read split partial ...passed 00:13:43.285 Test: blockdev reset ...passed 00:13:43.285 Test: blockdev write read 8 blocks ...passed 00:13:43.285 Test: blockdev write read size > 128k ...passed 00:13:43.285 Test: blockdev write read invalid size ...passed 00:13:43.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.285 Test: blockdev write read max offset ...passed 00:13:43.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.285 Test: blockdev writev readv 8 blocks ...passed 00:13:43.285 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.285 Test: blockdev writev readv block ...passed 00:13:43.285 Test: blockdev writev readv size > 128k ...passed 00:13:43.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.285 Test: blockdev comparev and writev ...passed 00:13:43.285 Test: blockdev nvme passthru rw ...passed 00:13:43.285 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.285 Test: blockdev nvme admin passthru ...passed 00:13:43.285 Test: blockdev copy ...passed 00:13:43.285 Suite: bdevio tests on: Malloc1p1 00:13:43.286 Test: blockdev write read block ...passed 00:13:43.286 Test: blockdev write zeroes read block ...passed 00:13:43.286 Test: blockdev write zeroes read no split ...passed 00:13:43.286 Test: blockdev write zeroes read split ...passed 00:13:43.286 Test: blockdev write zeroes read split partial ...passed 00:13:43.286 Test: blockdev reset ...passed 00:13:43.286 Test: blockdev write read 8 blocks ...passed 00:13:43.286 Test: blockdev write read size > 128k ...passed 00:13:43.286 Test: blockdev write read invalid size ...passed 00:13:43.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.286 Test: blockdev write read max offset ...passed 00:13:43.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.286 Test: blockdev writev readv 8 blocks ...passed 00:13:43.286 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.286 Test: blockdev writev readv block ...passed 00:13:43.286 Test: blockdev writev readv size > 128k ...passed 00:13:43.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.286 Test: blockdev comparev and writev ...passed 00:13:43.286 Test: blockdev nvme passthru rw ...passed 00:13:43.286 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.286 Test: blockdev nvme admin passthru ...passed 00:13:43.286 Test: blockdev copy ...passed 00:13:43.286 Suite: 
bdevio tests on: Malloc1p0 00:13:43.286 Test: blockdev write read block ...passed 00:13:43.286 Test: blockdev write zeroes read block ...passed 00:13:43.286 Test: blockdev write zeroes read no split ...passed 00:13:43.286 Test: blockdev write zeroes read split ...passed 00:13:43.286 Test: blockdev write zeroes read split partial ...passed 00:13:43.286 Test: blockdev reset ...passed 00:13:43.286 Test: blockdev write read 8 blocks ...passed 00:13:43.286 Test: blockdev write read size > 128k ...passed 00:13:43.286 Test: blockdev write read invalid size ...passed 00:13:43.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.286 Test: blockdev write read max offset ...passed 00:13:43.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.286 Test: blockdev writev readv 8 blocks ...passed 00:13:43.286 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.286 Test: blockdev writev readv block ...passed 00:13:43.286 Test: blockdev writev readv size > 128k ...passed 00:13:43.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.286 Test: blockdev comparev and writev ...passed 00:13:43.286 Test: blockdev nvme passthru rw ...passed 00:13:43.286 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.286 Test: blockdev nvme admin passthru ...passed 00:13:43.286 Test: blockdev copy ...passed 00:13:43.286 Suite: bdevio tests on: Malloc0 00:13:43.286 Test: blockdev write read block ...passed 00:13:43.286 Test: blockdev write zeroes read block ...passed 00:13:43.286 Test: blockdev write zeroes read no split ...passed 00:13:43.286 Test: blockdev write zeroes read split ...passed 00:13:43.545 Test: blockdev write zeroes read split partial ...passed 00:13:43.545 Test: blockdev reset ...passed 00:13:43.545 Test: blockdev write read 8 blocks ...passed 00:13:43.545 Test: blockdev write read size > 128k ...passed 00:13:43.545 Test: blockdev write read invalid size ...passed 00:13:43.545 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.545 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.545 Test: blockdev write read max offset ...passed 00:13:43.545 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.545 Test: blockdev writev readv 8 blocks ...passed 00:13:43.545 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.545 Test: blockdev writev readv block ...passed 00:13:43.545 Test: blockdev writev readv size > 128k ...passed 00:13:43.545 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.545 Test: blockdev comparev and writev ...passed 00:13:43.545 Test: blockdev nvme passthru rw ...passed 00:13:43.545 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.545 Test: blockdev nvme admin passthru ...passed 00:13:43.545 Test: blockdev copy ...passed 00:13:43.545 00:13:43.545 Run Summary: Type Total Ran Passed Failed Inactive 00:13:43.545 suites 16 16 n/a 0 0 00:13:43.545 tests 368 368 368 0 0 00:13:43.545 asserts 2224 2224 2224 0 n/a 00:13:43.545 00:13:43.545 Elapsed time = 4.383 seconds 00:13:43.545 0 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 117640 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 117640 ']' 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 117640 
00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117640 00:13:43.545 killing process with pid 117640 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117640' 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 117640 00:13:43.545 21:27:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 117640 00:13:46.857 ************************************ 00:13:46.857 END TEST bdev_bounds 00:13:46.857 ************************************ 00:13:46.857 21:27:19 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:46.857 00:13:46.857 real 0m6.335s 00:13:46.857 user 0m16.326s 00:13:46.857 sys 0m0.693s 00:13:46.857 21:27:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.857 21:27:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:46.857 21:27:19 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:46.857 21:27:19 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:46.857 21:27:19 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:46.857 21:27:19 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.857 21:27:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.857 ************************************ 00:13:46.857 START TEST bdev_nbd 00:13:46.857 ************************************ 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local 
nbd_all 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=117751 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 117751 /var/tmp/spdk-nbd.sock 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 117751 ']' 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:46.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.857 21:27:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:46.857 [2024-07-15 21:27:19.729107] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
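The bdev_nbd test starting here brings up bdev_svc with the same JSON config and then maps each of the 16 bdevs onto a kernel /dev/nbdN node, verifying every one with a 4 KiB direct read. A condensed, single-bdev sketch of that mapping; the output path and the explicit nbd device argument are assumptions for illustration:

# Hypothetical single-bdev version of the loop traced below.
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # output path chosen arbitrarily
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0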
00:13:46.857 [2024-07-15 21:27:19.729396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.857 [2024-07-15 21:27:19.897340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.857 [2024-07-15 21:27:20.155682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.426 [2024-07-15 21:27:20.631383] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.426 [2024-07-15 21:27:20.631572] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.427 [2024-07-15 21:27:20.639303] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.427 [2024-07-15 21:27:20.639408] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.427 [2024-07-15 21:27:20.647324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.427 [2024-07-15 21:27:20.647418] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:47.427 [2024-07-15 21:27:20.647468] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:47.687 [2024-07-15 21:27:20.882024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.687 [2024-07-15 21:27:20.882243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.687 [2024-07-15 21:27:20.882330] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:47.687 [2024-07-15 21:27:20.882405] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.687 [2024-07-15 21:27:20.884826] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.687 [2024-07-15 21:27:20.884921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:47.946 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.946 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:13:47.946 21:27:21 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:47.946 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:47.946 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:13:47.946 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@24 -- # local i 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:47.947 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:48.206 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.207 1+0 records in 00:13:48.207 1+0 records out 00:13:48.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328175 s, 12.5 MB/s 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:48.207 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:48.466 21:27:21 
blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.466 1+0 records in 00:13:48.466 1+0 records out 00:13:48.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055895 s, 7.3 MB/s 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:48.466 21:27:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.726 1+0 records in 00:13:48.726 1+0 records out 00:13:48.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439757 s, 9.3 MB/s 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:48.726 21:27:22 
blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:48.726 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.986 1+0 records in 00:13:48.986 1+0 records out 00:13:48.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032848 s, 12.5 MB/s 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:48.986 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w 
nbd4 /proc/partitions 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.245 1+0 records in 00:13:49.245 1+0 records out 00:13:49.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631416 s, 6.5 MB/s 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:49.245 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.504 1+0 records in 00:13:49.504 1+0 records out 00:13:49.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377518 s, 10.8 MB/s 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 
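[editor's note] The trace above repeats one pattern per bdev: start it on an NBD device over the rpc.py socket, poll /proc/partitions until the kernel registers the device, then confirm a single 4 KiB direct read succeeds. The sketch below is a minimal standalone condensation of that pattern, not the actual nbd_common.sh/autotest_common.sh helpers: the rpc.py path, socket, bdev name, grep and dd invocations are taken from the log, while the helper name, the retry sleep, and writing to /dev/null instead of the nbdtest file are illustrative assumptions.

#!/usr/bin/env bash
# Minimal sketch of the start-and-verify loop seen in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

start_and_verify() {    # illustrative name; the real helpers live in nbd_common.sh / autotest_common.sh
    local bdev=$1 nbd name
    nbd=$("$rpc" -s "$sock" nbd_start_disk "$bdev")    # prints the device it picked, e.g. /dev/nbd0
    name=${nbd#/dev/}
    for ((i = 1; i <= 20; i++)); do                    # same 20-try budget as waitfornbd in the trace
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1                                      # retry delay is an assumption, not shown in the log
    done
    # One 4 KiB direct read proves the exported bdev actually serves I/O.
    dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct
}

start_and_verify Malloc0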
00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:49.504 21:27:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.763 1+0 records in 00:13:49.763 1+0 records out 00:13:49.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420012 s, 9.8 MB/s 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:49.763 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.021 1+0 records in 00:13:50.021 1+0 records out 00:13:50.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577686 s, 7.1 MB/s 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:50.021 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.280 1+0 records in 00:13:50.280 1+0 records out 00:13:50.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527745 s, 7.8 MB/s 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( 
i++ )) 00:13:50.280 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:50.281 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.539 1+0 records in 00:13:50.539 1+0 records out 00:13:50.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058119 s, 7.0 MB/s 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:50.539 21:27:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:50.798 21:27:24 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.798 1+0 records in 00:13:50.798 1+0 records out 00:13:50.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526828 s, 7.8 MB/s 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:50.798 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:13:51.056 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.057 1+0 records in 00:13:51.057 1+0 records out 00:13:51.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636717 s, 6.4 MB/s 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:51.057 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:13:51.315 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:13:51.315 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:13:51.315 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:13:51.315 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:13:51.315 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:51.315 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.316 1+0 records in 00:13:51.316 1+0 records out 00:13:51.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698272 s, 5.9 MB/s 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:51.316 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 
)) 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.575 1+0 records in 00:13:51.575 1+0 records out 00:13:51.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00350998 s, 1.2 MB/s 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:51.575 21:27:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.834 1+0 records in 00:13:51.834 1+0 records out 00:13:51.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729143 s, 5.6 MB/s 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.834 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:51.835 21:27:25 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.094 1+0 records in 00:13:52.094 1+0 records out 00:13:52.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00225438 s, 1.8 MB/s 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:52.094 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd0", 00:13:52.662 "bdev_name": "Malloc0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd1", 00:13:52.662 "bdev_name": "Malloc1p0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd2", 00:13:52.662 "bdev_name": "Malloc1p1" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd3", 00:13:52.662 "bdev_name": "Malloc2p0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd4", 00:13:52.662 "bdev_name": "Malloc2p1" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd5", 00:13:52.662 "bdev_name": "Malloc2p2" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd6", 00:13:52.662 "bdev_name": "Malloc2p3" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd7", 00:13:52.662 "bdev_name": "Malloc2p4" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd8", 00:13:52.662 "bdev_name": "Malloc2p5" 00:13:52.662 
}, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd9", 00:13:52.662 "bdev_name": "Malloc2p6" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd10", 00:13:52.662 "bdev_name": "Malloc2p7" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd11", 00:13:52.662 "bdev_name": "TestPT" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd12", 00:13:52.662 "bdev_name": "raid0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd13", 00:13:52.662 "bdev_name": "concat0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd14", 00:13:52.662 "bdev_name": "raid1" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd15", 00:13:52.662 "bdev_name": "AIO0" 00:13:52.662 } 00:13:52.662 ]' 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd0", 00:13:52.662 "bdev_name": "Malloc0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd1", 00:13:52.662 "bdev_name": "Malloc1p0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd2", 00:13:52.662 "bdev_name": "Malloc1p1" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd3", 00:13:52.662 "bdev_name": "Malloc2p0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd4", 00:13:52.662 "bdev_name": "Malloc2p1" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd5", 00:13:52.662 "bdev_name": "Malloc2p2" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd6", 00:13:52.662 "bdev_name": "Malloc2p3" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd7", 00:13:52.662 "bdev_name": "Malloc2p4" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd8", 00:13:52.662 "bdev_name": "Malloc2p5" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd9", 00:13:52.662 "bdev_name": "Malloc2p6" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd10", 00:13:52.662 "bdev_name": "Malloc2p7" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd11", 00:13:52.662 "bdev_name": "TestPT" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd12", 00:13:52.662 "bdev_name": "raid0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd13", 00:13:52.662 "bdev_name": "concat0" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd14", 00:13:52.662 "bdev_name": "raid1" 00:13:52.662 }, 00:13:52.662 { 00:13:52.662 "nbd_device": "/dev/nbd15", 00:13:52.662 "bdev_name": "AIO0" 00:13:52.662 } 00:13:52.662 ]' 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:52.662 21:27:25 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:52.662 21:27:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.662 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.922 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.180 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 
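[editor's note] The nbd_get_disks call traced above returns a JSON array of {nbd_device, bdev_name} pairs, which the helper flattens into a plain device list with jq before iterating the teardown. A minimal standalone equivalent, using the same RPC, socket, and jq filter as the log (variable names here are illustrative):

# Standalone equivalent of the nbd_get_disks / jq step traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

disks_json=$("$rpc" -s "$sock" nbd_get_disks)   # JSON array of {nbd_device, bdev_name}
echo "$disks_json" | jq -r '.[] | .nbd_device'  # one /dev/nbdN per line
# Later in the trace the same pipeline is piped into 'grep -c /dev/nbd' and
# expected to report 0, which is how nbd_get_count verifies full teardown.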
00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.438 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.698 21:27:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.957 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.216 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.476 21:27:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.735 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.995 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.255 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.514 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
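[editor's note] The teardown half of the test mirrors the start-up pattern: nbd_stop_disk for each device, then poll /proc/partitions until the name disappears (what waitfornbd_exit does in the trace). A condensed sketch under the same assumptions as above; the helper name, loop over /dev/nbd0../dev/nbd15, and retry sleep are illustrative, while the rpc.py invocation and grep check come from the log.

# Condensed form of the per-device teardown loop in this trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

stop_and_wait() {    # illustrative name; the log's helper is waitfornbd_exit
    local nbd=$1 name=${1#/dev/}
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"
    for ((i = 1; i <= 20; i++)); do                    # wait for the kernel to drop the partition entry
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1                                      # retry delay assumed, not shown in the log
    done
}

for dev in /dev/nbd{0..15}; do
    stop_and_wait "$dev"
done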
00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.782 21:27:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:55.782 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.052 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@66 -- # echo 0 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:56.311 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:56.569 /dev/nbd0 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:56.569 21:27:29 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:56.569 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.569 1+0 records in 00:13:56.569 1+0 records out 00:13:56.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535234 s, 7.7 MB/s 00:13:56.570 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.570 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:56.570 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.570 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:56.570 21:27:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:56.570 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.570 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:56.570 21:27:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:13:56.828 /dev/nbd1 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.828 1+0 records in 00:13:56.828 1+0 records out 00:13:56.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279804 s, 14.6 MB/s 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:56.828 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:13:57.085 /dev/nbd10 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:57.085 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.085 1+0 records in 00:13:57.085 1+0 records out 00:13:57.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535279 s, 7.7 MB/s 00:13:57.086 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.086 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:57.086 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.086 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:57.086 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:57.086 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.086 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:57.086 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:13:57.343 /dev/nbd11 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
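
Each started device is then polled until the kernel actually registers it; the repeated waitfornbd traces above boil down to a bounded retry against /proc/partitions. A sketch of just the polling half of that wait, with the 20-iteration bound taken from the (( i <= 20 )) checks in the trace (the sleep interval is an assumption, the delay between polls is not visible in this log):

# Poll /proc/partitions until the kernel registers the new nbd device.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the whole name, so nbd1 does not also match nbd10
        if grep -q -w "$nbd_name" /proc/partitions; then
            return 0
        fi
        sleep 0.1   # assumed back-off; not shown in the trace
    done
    return 1
}

waitfornbd nbd0
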
00:13:57.343 1+0 records in 00:13:57.343 1+0 records out 00:13:57.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516772 s, 7.9 MB/s 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:57.343 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:13:57.601 /dev/nbd12 00:13:57.601 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:57.601 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:57.601 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:13:57.601 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:57.601 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:57.601 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:57.601 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:13:57.601 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.602 1+0 records in 00:13:57.602 1+0 records out 00:13:57.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556333 s, 7.4 MB/s 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:57.602 21:27:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:13:57.859 /dev/nbd13 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd13 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.859 1+0 records in 00:13:57.859 1+0 records out 00:13:57.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650729 s, 6.3 MB/s 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:57.859 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:13:58.116 /dev/nbd14 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:58.116 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.116 1+0 records in 00:13:58.116 1+0 records out 00:13:58.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504813 s, 8.1 MB/s 00:13:58.117 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.117 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:58.117 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.117 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:58.117 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:58.117 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:58.117 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:58.117 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:13:58.375 /dev/nbd15 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.375 1+0 records in 00:13:58.375 1+0 records out 00:13:58.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053476 s, 7.7 MB/s 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:58.375 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:13:58.634 /dev/nbd2 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:58.634 
21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.634 1+0 records in 00:13:58.634 1+0 records out 00:13:58.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678699 s, 6.0 MB/s 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:58.634 21:27:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:13:58.894 /dev/nbd3 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.894 1+0 records in 00:13:58.894 1+0 records out 00:13:58.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476127 s, 8.6 MB/s 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:58.894 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:13:59.155 /dev/nbd4 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.156 1+0 records in 00:13:59.156 1+0 records out 00:13:59.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692302 s, 5.9 MB/s 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:59.156 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:13:59.417 /dev/nbd5 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.417 1+0 records in 00:13:59.417 1+0 records out 00:13:59.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636495 s, 6.4 MB/s 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:59.417 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:13:59.676 /dev/nbd6 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:59.676 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.676 1+0 records in 00:13:59.676 1+0 records out 00:13:59.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760942 s, 5.4 MB/s 00:13:59.677 21:27:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.677 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:59.677 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.677 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:59.677 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:59.677 21:27:33 
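
Once the partition appears, the same helper does a one-block O_DIRECT read from the new device and checks that a non-empty file came back, as in the dd/stat/rm lines repeated above. A sketch of that read-back check, with the scratch-file path taken from the trace (the helper name nbd_readback_check is illustrative, not the actual function in autotest_common.sh):

# One-block O_DIRECT read from the freshly attached device, then a size check.
nbd_readback_check() {
    local dev=$1
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]    # the trace's check is '[' 4096 '!=' 0 ']'
}

nbd_readback_check /dev/nbd0
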
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.677 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:59.677 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:13:59.937 /dev/nbd7 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.937 1+0 records in 00:13:59.937 1+0 records out 00:13:59.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553412 s, 7.4 MB/s 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:59.937 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:14:00.196 /dev/nbd8 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:00.196 21:27:33 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.196 1+0 records in 00:14:00.196 1+0 records out 00:14:00.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063969 s, 6.4 MB/s 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:00.196 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:14:00.454 /dev/nbd9 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.454 1+0 records in 00:14:00.454 1+0 records out 00:14:00.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807234 s, 5.1 MB/s 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:00.454 21:27:33 
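
With all sixteen bdevs exported, the test asks the NBD target which devices it has attached and compares the count against the number it started. A sketch of that check, built from the nbd_get_disks RPC and the jq filter that appear just below in the trace (error handling here is illustrative):

# Ask the NBD target which devices it has attached and verify the count.
rpc_sock=/var/tmp/spdk-nbd.sock
expected=16

count=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd)

if [ "$count" -ne "$expected" ]; then
    echo "expected $expected nbd devices, target reports $count" >&2
    exit 1
fi
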
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.454 21:27:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:00.712 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd0", 00:14:00.712 "bdev_name": "Malloc0" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd1", 00:14:00.712 "bdev_name": "Malloc1p0" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd10", 00:14:00.712 "bdev_name": "Malloc1p1" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd11", 00:14:00.712 "bdev_name": "Malloc2p0" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd12", 00:14:00.712 "bdev_name": "Malloc2p1" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd13", 00:14:00.712 "bdev_name": "Malloc2p2" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd14", 00:14:00.712 "bdev_name": "Malloc2p3" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd15", 00:14:00.712 "bdev_name": "Malloc2p4" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd2", 00:14:00.712 "bdev_name": "Malloc2p5" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd3", 00:14:00.712 "bdev_name": "Malloc2p6" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd4", 00:14:00.712 "bdev_name": "Malloc2p7" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd5", 00:14:00.712 "bdev_name": "TestPT" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd6", 00:14:00.712 "bdev_name": "raid0" 00:14:00.712 }, 00:14:00.712 { 00:14:00.712 "nbd_device": "/dev/nbd7", 00:14:00.713 "bdev_name": "concat0" 00:14:00.713 }, 00:14:00.713 { 00:14:00.713 "nbd_device": "/dev/nbd8", 00:14:00.713 "bdev_name": "raid1" 00:14:00.713 }, 00:14:00.713 { 00:14:00.713 "nbd_device": "/dev/nbd9", 00:14:00.713 "bdev_name": "AIO0" 00:14:00.713 } 00:14:00.713 ]' 00:14:00.972 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd0", 00:14:00.972 "bdev_name": "Malloc0" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd1", 00:14:00.972 "bdev_name": "Malloc1p0" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd10", 00:14:00.972 "bdev_name": "Malloc1p1" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd11", 00:14:00.972 "bdev_name": "Malloc2p0" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd12", 00:14:00.972 "bdev_name": "Malloc2p1" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd13", 00:14:00.972 "bdev_name": "Malloc2p2" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd14", 00:14:00.972 "bdev_name": "Malloc2p3" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd15", 00:14:00.972 "bdev_name": "Malloc2p4" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd2", 00:14:00.972 "bdev_name": "Malloc2p5" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd3", 00:14:00.972 "bdev_name": "Malloc2p6" 00:14:00.972 }, 00:14:00.972 { 00:14:00.972 "nbd_device": "/dev/nbd4", 00:14:00.972 "bdev_name": "Malloc2p7" 00:14:00.972 }, 00:14:00.972 { 00:14:00.973 "nbd_device": "/dev/nbd5", 00:14:00.973 "bdev_name": "TestPT" 00:14:00.973 }, 00:14:00.973 { 00:14:00.973 "nbd_device": "/dev/nbd6", 
00:14:00.973 "bdev_name": "raid0" 00:14:00.973 }, 00:14:00.973 { 00:14:00.973 "nbd_device": "/dev/nbd7", 00:14:00.973 "bdev_name": "concat0" 00:14:00.973 }, 00:14:00.973 { 00:14:00.973 "nbd_device": "/dev/nbd8", 00:14:00.973 "bdev_name": "raid1" 00:14:00.973 }, 00:14:00.973 { 00:14:00.973 "nbd_device": "/dev/nbd9", 00:14:00.973 "bdev_name": "AIO0" 00:14:00.973 } 00:14:00.973 ]' 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:00.973 /dev/nbd1 00:14:00.973 /dev/nbd10 00:14:00.973 /dev/nbd11 00:14:00.973 /dev/nbd12 00:14:00.973 /dev/nbd13 00:14:00.973 /dev/nbd14 00:14:00.973 /dev/nbd15 00:14:00.973 /dev/nbd2 00:14:00.973 /dev/nbd3 00:14:00.973 /dev/nbd4 00:14:00.973 /dev/nbd5 00:14:00.973 /dev/nbd6 00:14:00.973 /dev/nbd7 00:14:00.973 /dev/nbd8 00:14:00.973 /dev/nbd9' 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:00.973 /dev/nbd1 00:14:00.973 /dev/nbd10 00:14:00.973 /dev/nbd11 00:14:00.973 /dev/nbd12 00:14:00.973 /dev/nbd13 00:14:00.973 /dev/nbd14 00:14:00.973 /dev/nbd15 00:14:00.973 /dev/nbd2 00:14:00.973 /dev/nbd3 00:14:00.973 /dev/nbd4 00:14:00.973 /dev/nbd5 00:14:00.973 /dev/nbd6 00:14:00.973 /dev/nbd7 00:14:00.973 /dev/nbd8 00:14:00.973 /dev/nbd9' 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:00.973 256+0 records in 00:14:00.973 256+0 records out 00:14:00.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120068 s, 87.3 MB/s 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:00.973 256+0 records in 00:14:00.973 256+0 records out 00:14:00.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0817916 s, 12.8 MB/s 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:00.973 256+0 records in 00:14:00.973 256+0 records out 00:14:00.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0861738 s, 12.2 MB/s 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:00.973 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:01.232 256+0 records in 00:14:01.232 256+0 records out 00:14:01.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0876629 s, 12.0 MB/s 00:14:01.232 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.232 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:01.232 256+0 records in 00:14:01.232 256+0 records out 00:14:01.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0852265 s, 12.3 MB/s 00:14:01.232 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.232 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:01.492 256+0 records in 00:14:01.492 256+0 records out 00:14:01.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0850078 s, 12.3 MB/s 00:14:01.492 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.492 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:01.492 256+0 records in 00:14:01.492 256+0 records out 00:14:01.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0870009 s, 12.1 MB/s 00:14:01.492 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.492 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:14:01.492 256+0 records in 00:14:01.492 256+0 records out 00:14:01.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0854269 s, 12.3 MB/s 00:14:01.492 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.492 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:14:01.751 256+0 records in 00:14:01.751 256+0 records out 00:14:01.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0859108 s, 12.2 MB/s 00:14:01.751 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.751 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:14:01.751 256+0 records in 00:14:01.751 256+0 records out 00:14:01.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0807226 s, 13.0 MB/s 00:14:01.751 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.751 21:27:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:14:01.751 256+0 records in 00:14:01.751 256+0 records out 00:14:01.751 1048576 bytes 
(1.0 MB, 1.0 MiB) copied, 0.0852064 s, 12.3 MB/s 00:14:01.751 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.751 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:14:02.010 256+0 records in 00:14:02.010 256+0 records out 00:14:02.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0973354 s, 10.8 MB/s 00:14:02.010 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:02.010 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:14:02.010 256+0 records in 00:14:02.010 256+0 records out 00:14:02.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0892109 s, 11.8 MB/s 00:14:02.010 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:02.010 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:14:02.010 256+0 records in 00:14:02.010 256+0 records out 00:14:02.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0871865 s, 12.0 MB/s 00:14:02.010 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:02.010 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:14:02.269 256+0 records in 00:14:02.269 256+0 records out 00:14:02.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0904432 s, 11.6 MB/s 00:14:02.269 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:02.269 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:14:02.269 256+0 records in 00:14:02.269 256+0 records out 00:14:02.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.09412 s, 11.1 MB/s 00:14:02.269 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:02.269 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:14:02.528 256+0 records in 00:14:02.528 256+0 records out 00:14:02.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133766 s, 7.8 MB/s 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:14:02.528 21:27:35 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.528 21:27:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.787 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.068 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.327 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.585 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.845 21:27:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:03.845 21:27:37 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.845 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:04.104 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:04.104 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:04.104 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:04.104 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.104 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.104 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.362 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 
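
Teardown mirrors setup: each device is detached with nbd_stop_disk and waitfornbd_exit then polls /proc/partitions until the node disappears. A sketch of that stop-and-wait loop, with the same 20-iteration bound as the setup path (the sleep is again an assumed back-off):

# Detach every nbd device and wait for the kernel to drop the node.
rpc_sock=/var/tmp/spdk-nbd.sock
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11)   # trimmed

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0   # gone: done
        sleep 0.1   # assumed delay; not visible in the log
    done
    return 1
}

for dev in "${nbd_list[@]}"; do
    scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
    waitfornbd_exit "$(basename "$dev")"
done
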
00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.621 21:27:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.880 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.138 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.396 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:05.653 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:05.653 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd6 00:14:05.653 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:14:05.653 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.653 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.654 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:05.654 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:05.654 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.654 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.654 21:27:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.911 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.912 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.171 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:14:06.429 21:27:39 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:06.429 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:06.687 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:06.687 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:06.688 21:27:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:06.946 malloc_lvol_verify 00:14:06.946 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:07.205 2d280564-34fa-41a0-a955-174b9f6933ad 00:14:07.205 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:07.205 a4a1e0d0-9b1a-43e0-b3c3-e4826a5b354a 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:07.465 /dev/nbd0 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:07.465 mke2fs 1.45.5 (07-Jan-2020) 00:14:07.465 00:14:07.465 Filesystem too small for a journal 00:14:07.465 Creating filesystem with 1024 4k blocks and 1024 inodes 00:14:07.465 00:14:07.465 Allocating group 
tables: 0/1 done 00:14:07.465 Writing inode tables: 0/1 done 00:14:07.465 Writing superblocks and filesystem accounting information: 0/1 done 00:14:07.465 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.465 21:27:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 117751 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 117751 ']' 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 117751 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117751 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117751' 00:14:07.725 killing process with pid 117751 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 117751 00:14:07.725 21:27:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 117751 00:14:11.016 21:27:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:14:11.016 00:14:11.016 real 0m24.106s 00:14:11.016 user 0m31.859s 00:14:11.016 sys 0m9.205s 00:14:11.016 21:27:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.016 21:27:43 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.016 ************************************ 00:14:11.016 END TEST bdev_nbd 00:14:11.016 ************************************ 00:14:11.016 21:27:43 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:11.016 21:27:43 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:11.016 21:27:43 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:14:11.016 21:27:43 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:14:11.016 21:27:43 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:11.016 21:27:43 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.016 21:27:43 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.016 21:27:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:11.016 ************************************ 00:14:11.016 START TEST bdev_fio 00:14:11.016 ************************************ 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:11.016 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- 
# /usr/src/fio/fio --version 00:14:11.016 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:14:11.017 21:27:43 
blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.017 21:27:43 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:11.017 ************************************ 00:14:11.017 START TEST bdev_fio_rw_verify 00:14:11.017 ************************************ 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 
-- # local fio_dir=/usr/src/fio 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:11.017 21:27:43 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:11.017 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, 
iodepth=8 00:14:11.017 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.017 fio-3.35 00:14:11.017 Starting 16 threads 00:14:23.225 00:14:23.225 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=118950: Mon Jul 15 21:27:56 2024 00:14:23.225 read: IOPS=67.6k, BW=264MiB/s (277MB/s)(2642MiB/10005msec) 00:14:23.225 slat (usec): min=2, max=28075, avg=43.90, stdev=451.88 00:14:23.225 clat (usec): min=10, max=44289, avg=335.30, stdev=1268.66 00:14:23.225 lat (usec): min=28, max=44308, avg=379.20, stdev=1346.68 00:14:23.225 clat percentiles (usec): 00:14:23.225 | 50.000th=[ 208], 99.000th=[ 1029], 99.900th=[16450], 99.990th=[24249], 00:14:23.225 | 99.999th=[44303] 00:14:23.225 write: IOPS=107k, BW=417MiB/s (437MB/s)(4125MiB/9894msec); 0 zone resets 00:14:23.225 slat (usec): min=11, max=42037, avg=74.81, stdev=653.98 00:14:23.225 clat (usec): min=10, max=42359, avg=438.97, stdev=1546.21 00:14:23.225 lat (usec): min=36, max=42408, avg=513.78, stdev=1678.41 00:14:23.225 clat percentiles (usec): 00:14:23.225 | 50.000th=[ 258], 99.000th=[ 8094], 99.900th=[16712], 99.990th=[28443], 00:14:23.225 | 99.999th=[36963] 00:14:23.225 bw ( KiB/s): min=257072, max=717320, per=99.12%, avg=423114.53, stdev=8199.96, samples=304 00:14:23.225 iops : min=64268, max=179330, avg=105778.58, stdev=2049.99, samples=304 00:14:23.225 lat (usec) : 20=0.01%, 50=0.25%, 100=7.71%, 250=47.05%, 500=39.08% 00:14:23.225 lat (usec) : 750=3.55%, 1000=0.74% 00:14:23.225 lat (msec) : 2=0.49%, 4=0.09%, 10=0.23%, 20=0.73%, 50=0.06% 00:14:23.225 cpu : usr=58.08%, sys=1.86%, ctx=267069, majf=0, minf=73293 00:14:23.225 IO depths : 1=11.2%, 2=24.0%, 4=51.8%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.225 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.225 issued rwts: total=676378,1055894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:23.225 00:14:23.225 Run status group 0 (all jobs): 00:14:23.225 READ: bw=264MiB/s (277MB/s), 264MiB/s-264MiB/s (277MB/s-277MB/s), io=2642MiB (2770MB), run=10005-10005msec 00:14:23.225 WRITE: bw=417MiB/s (437MB/s), 417MiB/s-417MiB/s (437MB/s-437MB/s), io=4125MiB (4325MB), run=9894-9894msec 00:14:25.766 ----------------------------------------------------- 00:14:25.766 Suppressions used: 00:14:25.766 count bytes template 00:14:25.766 16 140 /usr/src/fio/parse.c 00:14:25.766 12040 1155840 /usr/src/fio/iolog.c 00:14:25.766 2 596 libcrypto.so 00:14:25.766 ----------------------------------------------------- 00:14:25.766 00:14:25.766 ************************************ 00:14:25.766 END TEST bdev_fio_rw_verify 00:14:25.766 ************************************ 00:14:25.766 00:14:25.766 real 0m14.796s 00:14:25.766 user 
1m38.777s 00:14:25.766 sys 0m4.074s 00:14:25.766 21:27:58 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.766 21:27:58 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:25.766 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:25.768 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "1a85a85e-b973-4413-9489-5c6bb8cea844"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1a85a85e-b973-4413-9489-5c6bb8cea844",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' 
"aliases": [' ' "00d4d53f-b61d-5ae3-a783-ee3443a7eb21"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "00d4d53f-b61d-5ae3-a783-ee3443a7eb21",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "838274b4-3477-5595-8f4d-232dbffbd68a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "838274b4-3477-5595-8f4d-232dbffbd68a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "35c728ff-e43f-56be-b003-b7dedc319cf5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "35c728ff-e43f-56be-b003-b7dedc319cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a0ff9f61-61aa-5629-ad2a-183d19be71f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a0ff9f61-61aa-5629-ad2a-183d19be71f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "453eb0af-afc0-59ef-8141-2a1228a15fcc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "453eb0af-afc0-59ef-8141-2a1228a15fcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "68c1d849-0049-54ad-b73f-f4b73c109c68"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "68c1d849-0049-54ad-b73f-f4b73c109c68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b62cc09b-483a-5459-9c29-d6a7289a334d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b62cc09b-483a-5459-9c29-d6a7289a334d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "119ccd64-d582-5a4e-ae61-4863139e6eca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "119ccd64-d582-5a4e-ae61-4863139e6eca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ae8e436e-ed15-51ee-9da8-6438dcd8b4e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae8e436e-ed15-51ee-9da8-6438dcd8b4e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bac72eaa-f055-5643-8782-580195ec0fe9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bac72eaa-f055-5643-8782-580195ec0fe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1cecede1-d609-53c4-98b0-8d6900b431c3"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1cecede1-d609-53c4-98b0-8d6900b431c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' 
"dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "02aff188-d268-4ba9-87dc-8625ed7f28e3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "02aff188-d268-4ba9-87dc-8625ed7f28e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "02aff188-d268-4ba9-87dc-8625ed7f28e3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "31a43bb2-9524-4e48-b51d-e8d907cc7d5c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9e6b4f39-4239-40ae-a760-96619c22928c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "78822723-9e18-4538-ba9c-b92e3996a337"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "78822723-9e18-4538-ba9c-b92e3996a337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "78822723-9e18-4538-ba9c-b92e3996a337",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": 
"Malloc6",' ' "uuid": "c583a477-3670-4e24-aaad-3b4097d44576",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "8977590d-8302-401a-b3ff-d3834a30ddde",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d3211ba3-1e62-4695-af05-6acc2f72d583"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d3211ba3-1e62-4695-af05-6acc2f72d583",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d3211ba3-1e62-4695-af05-6acc2f72d583",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6efaea0e-e9b2-461f-9752-ea59aac8cd14",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e73afa97-d610-4b54-b595-f875d3e9055c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "54c32968-56c8-4c92-b07d-bb5478e88751"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "54c32968-56c8-4c92-b07d-bb5478e88751",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:25.768 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:14:25.768 Malloc1p0 00:14:25.768 Malloc1p1 00:14:25.768 Malloc2p0 00:14:25.768 Malloc2p1 00:14:25.768 Malloc2p2 00:14:25.768 Malloc2p3 00:14:25.768 Malloc2p4 00:14:25.768 Malloc2p5 00:14:25.768 Malloc2p6 00:14:25.768 Malloc2p7 00:14:25.768 TestPT 00:14:25.768 raid0 
00:14:25.768 concat0 ]] 00:14:25.768 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "1a85a85e-b973-4413-9489-5c6bb8cea844"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1a85a85e-b973-4413-9489-5c6bb8cea844",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "00d4d53f-b61d-5ae3-a783-ee3443a7eb21"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "00d4d53f-b61d-5ae3-a783-ee3443a7eb21",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "838274b4-3477-5595-8f4d-232dbffbd68a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "838274b4-3477-5595-8f4d-232dbffbd68a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "35c728ff-e43f-56be-b003-b7dedc319cf5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "35c728ff-e43f-56be-b003-b7dedc319cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a0ff9f61-61aa-5629-ad2a-183d19be71f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a0ff9f61-61aa-5629-ad2a-183d19be71f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "453eb0af-afc0-59ef-8141-2a1228a15fcc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "453eb0af-afc0-59ef-8141-2a1228a15fcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "68c1d849-0049-54ad-b73f-f4b73c109c68"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "68c1d849-0049-54ad-b73f-f4b73c109c68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' 
"base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b62cc09b-483a-5459-9c29-d6a7289a334d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b62cc09b-483a-5459-9c29-d6a7289a334d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "119ccd64-d582-5a4e-ae61-4863139e6eca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "119ccd64-d582-5a4e-ae61-4863139e6eca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ae8e436e-ed15-51ee-9da8-6438dcd8b4e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae8e436e-ed15-51ee-9da8-6438dcd8b4e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bac72eaa-f055-5643-8782-580195ec0fe9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bac72eaa-f055-5643-8782-580195ec0fe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1cecede1-d609-53c4-98b0-8d6900b431c3"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1cecede1-d609-53c4-98b0-8d6900b431c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "02aff188-d268-4ba9-87dc-8625ed7f28e3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "02aff188-d268-4ba9-87dc-8625ed7f28e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "02aff188-d268-4ba9-87dc-8625ed7f28e3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "31a43bb2-9524-4e48-b51d-e8d907cc7d5c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9e6b4f39-4239-40ae-a760-96619c22928c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "78822723-9e18-4538-ba9c-b92e3996a337"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' 
"uuid": "78822723-9e18-4538-ba9c-b92e3996a337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "78822723-9e18-4538-ba9c-b92e3996a337",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c583a477-3670-4e24-aaad-3b4097d44576",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "8977590d-8302-401a-b3ff-d3834a30ddde",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d3211ba3-1e62-4695-af05-6acc2f72d583"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d3211ba3-1e62-4695-af05-6acc2f72d583",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d3211ba3-1e62-4695-af05-6acc2f72d583",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6efaea0e-e9b2-461f-9752-ea59aac8cd14",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e73afa97-d610-4b54-b595-f875d3e9055c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "54c32968-56c8-4c92-b07d-bb5478e88751"' ' ],' ' 
"product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "54c32968-56c8-4c92-b07d-bb5478e88751",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:14:25.769 21:27:58 
blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.769 21:27:58 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:25.769 ************************************ 00:14:25.769 START TEST bdev_fio_trim 00:14:25.769 ************************************ 00:14:25.769 21:27:58 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:14:25.769 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:14:25.770 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:14:25.770 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:25.770 21:27:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:25.770 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:25.770 fio-3.35 00:14:25.770 Starting 14 threads 00:14:37.980 00:14:37.980 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=119191: Mon Jul 15 21:28:10 2024 00:14:37.980 write: IOPS=111k, BW=434MiB/s (455MB/s)(4345MiB/10006msec); 0 zone resets 00:14:37.980 slat (usec): min=2, max=28056, avg=45.67, stdev=455.04 00:14:37.980 clat (usec): min=21, max=32551, avg=304.63, stdev=1174.57 00:14:37.980 lat (usec): min=28, max=32618, avg=350.29, stdev=1259.36 00:14:37.980 clat percentiles (usec): 00:14:37.980 | 50.000th=[ 208], 99.000th=[ 478], 99.900th=[16319], 99.990th=[24249], 00:14:37.980 | 99.999th=[28181] 00:14:37.980 bw ( KiB/s): min=315168, max=599776, per=100.00%, avg=447037.05, stdev=6651.29, samples=266 00:14:37.980 iops : min=78792, max=149944, avg=111759.11, stdev=1662.81, samples=266 00:14:37.980 trim: IOPS=111k, BW=434MiB/s (455MB/s)(4345MiB/10006msec); 0 zone resets 00:14:37.980 slat (usec): min=4, max=29718, avg=32.11, stdev=389.24 00:14:37.980 clat (usec): min=4, max=32618, avg=346.30, stdev=1250.85 00:14:37.980 lat (usec): min=10, max=32648, avg=378.41, stdev=1309.75 00:14:37.980 clat percentiles (usec): 00:14:37.980 | 50.000th=[ 239], 99.000th=[ 529], 99.900th=[16319], 99.990th=[24249], 00:14:37.980 | 99.999th=[28443] 00:14:37.980 bw ( KiB/s): min=315168, max=599776, per=100.00%, avg=447037.05, stdev=6651.23, samples=266 00:14:37.981 iops : min=78792, max=149944, avg=111759.21, stdev=1662.80, samples=266 00:14:37.981 lat (usec) : 10=0.01%, 20=0.02%, 50=0.22%, 100=3.28%, 250=57.27% 00:14:37.981 lat (usec) : 500=38.11%, 750=0.44%, 1000=0.02% 00:14:37.981 lat (msec) : 2=0.02%, 4=0.01%, 10=0.04%, 20=0.54%, 50=0.03% 00:14:37.981 cpu : usr=69.29%, sys=0.35%, ctx=176851, majf=0, minf=677 00:14:37.981 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.981 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.981 issued rwts: total=0,1112300,1112303,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.981 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:37.981 00:14:37.981 Run status group 0 (all jobs): 
00:14:37.981 WRITE: bw=434MiB/s (455MB/s), 434MiB/s-434MiB/s (455MB/s-455MB/s), io=4345MiB (4556MB), run=10006-10006msec 00:14:37.981 TRIM: bw=434MiB/s (455MB/s), 434MiB/s-434MiB/s (455MB/s-455MB/s), io=4345MiB (4556MB), run=10006-10006msec 00:14:40.513 ----------------------------------------------------- 00:14:40.513 Suppressions used: 00:14:40.513 count bytes template 00:14:40.513 14 129 /usr/src/fio/parse.c 00:14:40.513 2 596 libcrypto.so 00:14:40.513 ----------------------------------------------------- 00:14:40.513 00:14:40.513 ************************************ 00:14:40.513 END TEST bdev_fio_trim 00:14:40.513 ************************************ 00:14:40.513 00:14:40.513 real 0m14.487s 00:14:40.513 user 1m42.853s 00:14:40.513 sys 0m1.303s 00:14:40.513 21:28:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.513 21:28:13 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:14:40.513 21:28:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:14:40.513 21:28:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:14:40.513 21:28:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:40.513 21:28:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:14:40.513 /home/vagrant/spdk_repo/spdk 00:14:40.513 21:28:13 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:14:40.513 00:14:40.513 real 0m29.646s 00:14:40.513 user 3m21.849s 00:14:40.513 sys 0m5.530s 00:14:40.513 21:28:13 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.513 21:28:13 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:40.513 ************************************ 00:14:40.513 END TEST bdev_fio 00:14:40.513 ************************************ 00:14:40.513 21:28:13 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:40.513 21:28:13 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:40.513 21:28:13 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:40.513 21:28:13 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:40.513 21:28:13 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.513 21:28:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:40.513 ************************************ 00:14:40.513 START TEST bdev_verify 00:14:40.513 ************************************ 00:14:40.513 21:28:13 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:40.513 [2024-07-15 21:28:13.600052] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:14:40.513 [2024-07-15 21:28:13.600335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119411 ] 00:14:40.513 [2024-07-15 21:28:13.775086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:40.772 [2024-07-15 21:28:14.040795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.772 [2024-07-15 21:28:14.040800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.340 [2024-07-15 21:28:14.481434] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:41.340 [2024-07-15 21:28:14.481607] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:41.340 [2024-07-15 21:28:14.489332] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:41.340 [2024-07-15 21:28:14.489404] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:41.340 [2024-07-15 21:28:14.497348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:41.340 [2024-07-15 21:28:14.497465] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:41.340 [2024-07-15 21:28:14.497506] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:41.599 [2024-07-15 21:28:14.734719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:41.599 [2024-07-15 21:28:14.734917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.599 [2024-07-15 21:28:14.734999] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:41.599 [2024-07-15 21:28:14.735093] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.599 [2024-07-15 21:28:14.737722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.599 [2024-07-15 21:28:14.737802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:42.189 Running I/O for 5 seconds... 
00:14:47.462 00:14:47.462 Latency(us) 00:14:47.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.462 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x0 length 0x1000 00:14:47.462 Malloc0 : 5.16 1315.27 5.14 0.00 0.00 97194.72 837.09 282062.37 00:14:47.462 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x1000 length 0x1000 00:14:47.462 Malloc0 : 5.20 886.08 3.46 0.00 0.00 144267.58 973.02 346167.45 00:14:47.462 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x0 length 0x800 00:14:47.462 Malloc1p0 : 5.21 687.64 2.69 0.00 0.00 185612.53 2847.52 148357.48 00:14:47.462 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x800 length 0x800 00:14:47.462 Malloc1p0 : 5.20 467.28 1.83 0.00 0.00 272939.81 3548.67 190483.68 00:14:47.462 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x0 length 0x800 00:14:47.462 Malloc1p1 : 5.22 687.16 2.68 0.00 0.00 185416.67 2947.69 145610.12 00:14:47.462 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x800 length 0x800 00:14:47.462 Malloc1p1 : 5.21 466.96 1.82 0.00 0.00 272488.69 6038.47 188652.10 00:14:47.462 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x0 length 0x200 00:14:47.462 Malloc2p0 : 5.22 686.90 2.68 0.00 0.00 185151.48 4607.55 141031.18 00:14:47.462 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x200 length 0x200 00:14:47.462 Malloc2p0 : 5.21 466.60 1.82 0.00 0.00 271821.74 3663.15 186820.53 00:14:47.462 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x0 length 0x200 00:14:47.462 Malloc2p1 : 5.22 686.54 2.68 0.00 0.00 184831.48 2947.69 138283.82 00:14:47.462 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x200 length 0x200 00:14:47.462 Malloc2p1 : 5.22 466.18 1.82 0.00 0.00 271439.77 3019.23 184073.17 00:14:47.462 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x0 length 0x200 00:14:47.462 Malloc2p2 : 5.22 685.94 2.68 0.00 0.00 184640.39 3720.38 136452.25 00:14:47.462 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x200 length 0x200 00:14:47.462 Malloc2p2 : 5.22 465.90 1.82 0.00 0.00 271019.45 2990.62 183157.38 00:14:47.462 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x0 length 0x200 00:14:47.462 Malloc2p3 : 5.23 685.68 2.68 0.00 0.00 184331.22 3834.86 134620.67 00:14:47.462 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.462 Verification LBA range: start 0x200 length 0x200 00:14:47.462 Malloc2p3 : 5.23 465.41 1.82 0.00 0.00 270802.07 3863.48 179494.23 00:14:47.462 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 length 0x200 00:14:47.463 Malloc2p4 : 5.23 685.38 2.68 0.00 0.00 184004.99 4407.22 
131873.31 00:14:47.463 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x200 length 0x200 00:14:47.463 Malloc2p4 : 5.23 465.11 1.82 0.00 0.00 270261.62 3334.04 171252.15 00:14:47.463 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 length 0x200 00:14:47.463 Malloc2p5 : 5.23 684.77 2.67 0.00 0.00 183730.31 3691.77 130041.74 00:14:47.463 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x200 length 0x200 00:14:47.463 Malloc2p5 : 5.23 464.61 1.81 0.00 0.00 269989.17 4435.84 167589.00 00:14:47.463 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 length 0x200 00:14:47.463 Malloc2p6 : 5.24 684.50 2.67 0.00 0.00 183450.60 2833.22 128210.17 00:14:47.463 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x200 length 0x200 00:14:47.463 Malloc2p6 : 5.24 464.34 1.81 0.00 0.00 269447.18 2260.85 168504.79 00:14:47.463 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 length 0x200 00:14:47.463 Malloc2p7 : 5.24 684.21 2.67 0.00 0.00 183194.20 3405.58 126378.59 00:14:47.463 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x200 length 0x200 00:14:47.463 Malloc2p7 : 5.24 464.03 1.81 0.00 0.00 269158.61 5523.34 171252.15 00:14:47.463 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 length 0x1000 00:14:47.463 TestPT : 5.24 664.43 2.60 0.00 0.00 186999.49 7669.72 126378.59 00:14:47.463 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x1000 length 0x1000 00:14:47.463 TestPT : 5.26 462.60 1.81 0.00 0.00 269259.40 11504.57 171252.15 00:14:47.463 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 length 0x2000 00:14:47.463 raid0 : 5.24 683.59 2.67 0.00 0.00 182659.82 4149.66 119052.30 00:14:47.463 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x2000 length 0x2000 00:14:47.463 raid0 : 5.25 463.35 1.81 0.00 0.00 268264.27 2504.10 172167.94 00:14:47.463 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 length 0x2000 00:14:47.463 concat0 : 5.25 683.29 2.67 0.00 0.00 182369.79 2146.38 123631.23 00:14:47.463 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x2000 length 0x2000 00:14:47.463 concat0 : 5.25 463.07 1.81 0.00 0.00 267983.51 4407.22 178578.45 00:14:47.463 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 length 0x1000 00:14:47.463 raid1 : 5.25 682.88 2.67 0.00 0.00 182182.31 3691.77 130041.74 00:14:47.463 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x1000 length 0x1000 00:14:47.463 raid1 : 5.25 462.82 1.81 0.00 0.00 267452.44 5551.96 184073.17 00:14:47.463 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x0 
length 0x4e2 00:14:47.463 AIO0 : 5.25 682.47 2.67 0.00 0.00 181929.70 1144.73 135536.46 00:14:47.463 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.463 Verification LBA range: start 0x4e2 length 0x4e2 00:14:47.463 AIO0 : 5.26 462.53 1.81 0.00 0.00 266739.91 2489.80 190483.68 00:14:47.463 =================================================================================================================== 00:14:47.463 Total : 19427.53 75.89 0.00 0.00 207289.10 837.09 346167.45 00:14:50.002 ************************************ 00:14:50.002 END TEST bdev_verify 00:14:50.002 ************************************ 00:14:50.002 00:14:50.002 real 0m9.796s 00:14:50.002 user 0m17.609s 00:14:50.002 sys 0m0.724s 00:14:50.002 21:28:23 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.002 21:28:23 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:50.002 21:28:23 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:50.002 21:28:23 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:50.002 21:28:23 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:50.002 21:28:23 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.002 21:28:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:50.002 ************************************ 00:14:50.002 START TEST bdev_verify_big_io 00:14:50.002 ************************************ 00:14:50.002 21:28:23 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:50.260 [2024-07-15 21:28:23.438133] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:14:50.260 [2024-07-15 21:28:23.438406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119570 ] 00:14:50.260 [2024-07-15 21:28:23.607894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:50.519 [2024-07-15 21:28:23.863230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.519 [2024-07-15 21:28:23.863237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.097 [2024-07-15 21:28:24.324783] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:51.097 [2024-07-15 21:28:24.324967] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:51.097 [2024-07-15 21:28:24.332746] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:51.097 [2024-07-15 21:28:24.332881] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:51.097 [2024-07-15 21:28:24.340733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:51.097 [2024-07-15 21:28:24.340886] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:51.097 [2024-07-15 21:28:24.340935] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:51.368 [2024-07-15 21:28:24.572166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:51.368 [2024-07-15 21:28:24.572374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.368 [2024-07-15 21:28:24.572434] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:51.368 [2024-07-15 21:28:24.572474] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.368 [2024-07-15 21:28:24.575078] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.368 [2024-07-15 21:28:24.575155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:51.935 [2024-07-15 21:28:25.004469] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.008386] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.012773] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.017491] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.021308] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.025739] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.029643] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.034035] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.037750] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.042158] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.046008] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.050626] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.054210] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.058635] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.063079] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.066497] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:14:51.935 [2024-07-15 21:28:25.170800] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:51.935 [2024-07-15 21:28:25.179640] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:51.935 Running I/O for 5 seconds... 00:15:00.070 00:15:00.070 Latency(us) 00:15:00.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.070 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x100 00:15:00.070 Malloc0 : 5.41 260.44 16.28 0.00 0.00 485874.27 693.99 2095320.43 00:15:00.070 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x100 length 0x100 00:15:00.070 Malloc0 : 5.78 154.99 9.69 0.00 0.00 803532.41 715.46 2417677.41 00:15:00.070 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x80 00:15:00.070 Malloc1p0 : 5.65 182.07 11.38 0.00 0.00 665092.91 2632.89 1208838.71 00:15:00.070 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x80 length 0x80 00:15:00.070 Malloc1p0 : 6.58 36.45 2.28 0.00 0.00 3114792.57 1531.08 5274932.54 00:15:00.070 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x80 00:15:00.070 Malloc1p1 : 5.85 62.95 3.93 0.00 0.00 1889347.41 1409.45 2637466.27 00:15:00.070 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x80 length 0x80 00:15:00.070 Malloc1p1 : 6.59 36.44 2.28 0.00 0.00 2978059.00 1516.77 5011185.91 00:15:00.070 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x20 00:15:00.070 Malloc2p0 : 5.65 50.99 3.19 0.00 0.00 587502.00 761.96 926776.34 00:15:00.070 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x20 length 0x20 00:15:00.070 Malloc2p0 : 6.20 28.37 1.77 0.00 0.00 967220.51 726.19 1860878.98 00:15:00.070 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x20 00:15:00.070 Malloc2p1 : 5.65 50.98 3.19 0.00 0.00 584458.97 815.62 912123.75 00:15:00.070 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x20 length 0x20 00:15:00.070 Malloc2p1 : 6.21 28.36 1.77 0.00 0.00 956442.91 2017.59 1824247.50 00:15:00.070 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x20 00:15:00.070 Malloc2p2 : 5.65 50.96 3.19 0.00 0.00 581944.47 790.58 901134.31 00:15:00.070 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x20 length 0x20 00:15:00.070 Malloc2p2 : 6.21 28.36 1.77 0.00 0.00 945236.16 779.85 1772963.44 00:15:00.070 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x20 00:15:00.070 Malloc2p3 : 5.65 50.95 3.18 0.00 0.00 579394.61 783.43 886481.72 00:15:00.070 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x20 length 0x20 00:15:00.070 Malloc2p3 : 6.21 28.35 1.77 0.00 0.00 934661.50 704.73 1743658.26 00:15:00.070 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x20 00:15:00.070 Malloc2p4 : 5.65 50.94 3.18 0.00 0.00 576399.92 740.50 875492.28 00:15:00.070 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x20 length 0x20 00:15:00.070 Malloc2p4 : 6.34 30.31 1.89 0.00 0.00 868398.48 729.77 1714353.08 00:15:00.070 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x20 00:15:00.070 Malloc2p5 : 5.65 50.93 3.18 0.00 0.00 573909.76 708.30 868165.98 00:15:00.070 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x20 length 0x20 00:15:00.070 Malloc2p5 : 6.34 30.30 1.89 0.00 0.00 859964.90 736.92 1685047.90 00:15:00.070 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x20 00:15:00.070 Malloc2p6 : 5.66 50.92 3.18 0.00 0.00 571418.69 719.04 857176.54 00:15:00.070 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x20 length 0x20 00:15:00.070 Malloc2p6 : 6.43 32.34 2.02 0.00 0.00 803410.13 740.50 1663069.01 00:15:00.070 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x20 00:15:00.070 Malloc2p7 : 5.66 50.91 3.18 0.00 0.00 568893.11 794.16 842523.95 00:15:00.070 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x20 length 0x20 00:15:00.070 Malloc2p7 : 6.43 32.33 2.02 0.00 0.00 795114.67 840.66 1641090.12 00:15:00.070 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x100 00:15:00.070 TestPT : 5.88 57.82 3.61 0.00 0.00 1955701.98 65020.87 2461635.19 00:15:00.070 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x100 length 0x100 00:15:00.070 TestPT : 6.53 68.64 4.29 0.00 0.00 1467158.07 81505.03 3707105.37 00:15:00.070 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x200 00:15:00.070 raid0 : 5.95 69.91 4.37 0.00 0.00 1591516.41 1709.95 2373719.64 00:15:00.070 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x200 length 0x200 00:15:00.070 raid0 : 6.59 85.59 5.35 0.00 0.00 1143183.78 2618.58 4395777.12 00:15:00.070 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x200 00:15:00.070 concat0 : 5.96 72.54 4.53 0.00 0.00 1511966.22 1681.33 2315109.28 00:15:00.070 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x200 length 0x200 00:15:00.070 concat0 : 6.76 106.58 6.66 0.00 0.00 896280.06 1674.17 
4190640.85 00:15:00.070 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x100 00:15:00.070 raid1 : 5.92 81.03 5.06 0.00 0.00 1341782.33 2976.31 2315109.28 00:15:00.070 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x100 length 0x100 00:15:00.070 raid1 : 6.86 135.37 8.46 0.00 0.00 685375.84 2017.59 4014809.77 00:15:00.070 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x0 length 0x4e 00:15:00.070 AIO0 : 5.96 90.65 5.67 0.00 0.00 724376.37 1416.61 1340712.02 00:15:00.070 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:15:00.070 Verification LBA range: start 0x4e length 0x4e 00:15:00.070 AIO0 : 7.00 151.92 9.49 0.00 0.00 363857.89 1209.12 2696076.63 00:15:00.070 =================================================================================================================== 00:15:00.070 Total : 2299.66 143.73 0.00 0.00 926490.38 693.99 5274932.54 00:15:02.616 ************************************ 00:15:02.616 END TEST bdev_verify_big_io 00:15:02.616 ************************************ 00:15:02.616 00:15:02.616 real 0m12.057s 00:15:02.616 user 0m22.159s 00:15:02.616 sys 0m0.661s 00:15:02.616 21:28:35 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.616 21:28:35 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.616 21:28:35 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:15:02.616 21:28:35 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:02.616 21:28:35 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:02.616 21:28:35 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.616 21:28:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:02.616 ************************************ 00:15:02.616 START TEST bdev_write_zeroes 00:15:02.616 ************************************ 00:15:02.616 21:28:35 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:02.616 [2024-07-15 21:28:35.563931] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:15:02.616 [2024-07-15 21:28:35.564189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119747 ] 00:15:02.616 [2024-07-15 21:28:35.726711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.904 [2024-07-15 21:28:35.996021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.161 [2024-07-15 21:28:36.489376] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:03.161 [2024-07-15 21:28:36.489568] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:03.161 [2024-07-15 21:28:36.497286] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:03.161 [2024-07-15 21:28:36.497408] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:03.161 [2024-07-15 21:28:36.505309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:03.161 [2024-07-15 21:28:36.505420] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:03.161 [2024-07-15 21:28:36.505488] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:03.419 [2024-07-15 21:28:36.750310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:03.419 [2024-07-15 21:28:36.750552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.419 [2024-07-15 21:28:36.750600] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:03.419 [2024-07-15 21:28:36.750647] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.419 [2024-07-15 21:28:36.753323] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.419 [2024-07-15 21:28:36.753441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:03.983 Running I/O for 1 seconds... 
00:15:05.360 00:15:05.360 Latency(us) 00:15:05.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.360 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.360 Malloc0 : 1.04 4939.67 19.30 0.00 0.00 25896.20 715.46 41897.25 00:15:05.360 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.360 Malloc1p0 : 1.04 4932.99 19.27 0.00 0.00 25886.85 980.18 40981.46 00:15:05.360 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.360 Malloc1p1 : 1.04 4926.22 19.24 0.00 0.00 25861.32 944.41 40065.68 00:15:05.360 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.360 Malloc2p0 : 1.04 4919.70 19.22 0.00 0.00 25838.25 973.02 39149.89 00:15:05.361 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 Malloc2p1 : 1.04 4913.13 19.19 0.00 0.00 25816.68 912.21 38234.10 00:15:05.361 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 Malloc2p2 : 1.04 4906.55 19.17 0.00 0.00 25796.37 937.25 37318.32 00:15:05.361 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 Malloc2p3 : 1.04 4899.99 19.14 0.00 0.00 25775.59 930.10 36402.53 00:15:05.361 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 Malloc2p4 : 1.05 4893.54 19.12 0.00 0.00 25747.55 922.94 35486.74 00:15:05.361 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 Malloc2p5 : 1.05 4887.31 19.09 0.00 0.00 25725.84 987.33 34570.96 00:15:05.361 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 Malloc2p6 : 1.05 4880.80 19.07 0.00 0.00 25702.49 908.63 33655.17 00:15:05.361 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 Malloc2p7 : 1.05 4874.62 19.04 0.00 0.00 25681.60 987.33 32739.38 00:15:05.361 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 TestPT : 1.05 4868.44 19.02 0.00 0.00 25659.04 937.25 31823.59 00:15:05.361 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 raid0 : 1.06 4936.51 19.28 0.00 0.00 25247.02 1502.46 30449.91 00:15:05.361 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 concat0 : 1.06 4929.52 19.26 0.00 0.00 25187.96 1545.39 28961.76 00:15:05.361 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 raid1 : 1.07 4920.73 19.22 0.00 0.00 25137.94 2518.41 26786.77 00:15:05.361 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:05.361 AIO0 : 1.07 4896.74 19.13 0.00 0.00 25135.09 1108.96 26901.24 00:15:05.361 =================================================================================================================== 00:15:05.361 Total : 78526.44 306.74 0.00 0.00 25628.29 715.46 41897.25 00:15:07.957 ************************************ 00:15:07.957 END TEST bdev_write_zeroes 00:15:07.957 ************************************ 00:15:07.957 00:15:07.957 real 0m5.758s 00:15:07.957 user 0m5.023s 00:15:07.957 sys 0m0.530s 00:15:07.957 21:28:41 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.957 21:28:41 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:07.957 21:28:41 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:15:07.957 21:28:41 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:07.957 21:28:41 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:07.957 21:28:41 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.957 21:28:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:07.957 ************************************ 00:15:07.957 START TEST bdev_json_nonenclosed 00:15:07.957 ************************************ 00:15:07.957 21:28:41 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:08.216 [2024-07-15 21:28:41.396685] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:08.217 [2024-07-15 21:28:41.396936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119833 ] 00:15:08.217 [2024-07-15 21:28:41.562048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.475 [2024-07-15 21:28:41.833669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.475 [2024-07-15 21:28:41.833896] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:08.475 [2024-07-15 21:28:41.833979] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:08.475 [2024-07-15 21:28:41.834045] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.043 ************************************ 00:15:09.043 END TEST bdev_json_nonenclosed 00:15:09.043 ************************************ 00:15:09.043 00:15:09.043 real 0m1.007s 00:15:09.043 user 0m0.766s 00:15:09.043 sys 0m0.141s 00:15:09.043 21:28:42 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:15:09.043 21:28:42 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.043 21:28:42 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:09.043 21:28:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:15:09.043 21:28:42 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:15:09.043 21:28:42 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:09.043 21:28:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:09.043 21:28:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.043 21:28:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:09.043 ************************************ 00:15:09.043 START TEST bdev_json_nonarray 00:15:09.043 ************************************ 00:15:09.043 21:28:42 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:09.302 [2024-07-15 21:28:42.469625] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:09.302 [2024-07-15 21:28:42.469843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119871 ] 00:15:09.302 [2024-07-15 21:28:42.633121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.561 [2024-07-15 21:28:42.910496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.561 [2024-07-15 21:28:42.910746] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:09.561 [2024-07-15 21:28:42.910831] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:09.561 [2024-07-15 21:28:42.910877] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:10.126 ************************************ 00:15:10.126 END TEST bdev_json_nonarray 00:15:10.126 ************************************ 00:15:10.126 00:15:10.126 real 0m1.021s 00:15:10.126 user 0m0.746s 00:15:10.126 sys 0m0.173s 00:15:10.126 21:28:43 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:15:10.126 21:28:43 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.126 21:28:43 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:10.126 21:28:43 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:15:10.126 21:28:43 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:15:10.126 21:28:43 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:15:10.126 21:28:43 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:15:10.126 21:28:43 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:10.126 21:28:43 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.126 21:28:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:10.126 ************************************ 00:15:10.126 START TEST bdev_qos 00:15:10.126 ************************************ 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=119929 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 119929' 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:15:10.126 Process qos testing pid: 119929 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 119929 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 119929 ']' 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.126 
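Note: the two negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf configs that are deliberately malformed, one not enclosed in {} and one whose 'subsystems' is not an array. For contrast, a well-formed config has the shape sketched below; the Malloc0 entry is illustrative and not part of this run.

# Sketch: a minimal well-formed bdevperf --json config, i.e. the shape the
# nonenclosed/nonarray tests deliberately violate. Malloc0 is illustrative only.
cat > /tmp/bdev_ok.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF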
21:28:43 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.126 21:28:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:10.382 [2024-07-15 21:28:43.554444] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:10.382 [2024-07-15 21:28:43.554687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119929 ] 00:15:10.382 [2024-07-15 21:28:43.722707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.639 [2024-07-15 21:28:43.947328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.202 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.202 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:15:11.202 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:15:11.202 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.202 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:11.459 Malloc_0 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:11.459 [ 00:15:11.459 { 00:15:11.459 "name": "Malloc_0", 00:15:11.459 "aliases": [ 00:15:11.459 "f8f28b45-e763-4a20-84f9-25c54ac36734" 00:15:11.459 ], 00:15:11.459 "product_name": "Malloc disk", 00:15:11.459 "block_size": 512, 00:15:11.459 "num_blocks": 262144, 00:15:11.459 "uuid": "f8f28b45-e763-4a20-84f9-25c54ac36734", 00:15:11.459 "assigned_rate_limits": { 00:15:11.459 "rw_ios_per_sec": 0, 00:15:11.459 "rw_mbytes_per_sec": 0, 00:15:11.459 "r_mbytes_per_sec": 0, 
00:15:11.459 "w_mbytes_per_sec": 0 00:15:11.459 }, 00:15:11.459 "claimed": false, 00:15:11.459 "zoned": false, 00:15:11.459 "supported_io_types": { 00:15:11.459 "read": true, 00:15:11.459 "write": true, 00:15:11.459 "unmap": true, 00:15:11.459 "flush": true, 00:15:11.459 "reset": true, 00:15:11.459 "nvme_admin": false, 00:15:11.459 "nvme_io": false, 00:15:11.459 "nvme_io_md": false, 00:15:11.459 "write_zeroes": true, 00:15:11.459 "zcopy": true, 00:15:11.459 "get_zone_info": false, 00:15:11.459 "zone_management": false, 00:15:11.459 "zone_append": false, 00:15:11.459 "compare": false, 00:15:11.459 "compare_and_write": false, 00:15:11.459 "abort": true, 00:15:11.459 "seek_hole": false, 00:15:11.459 "seek_data": false, 00:15:11.459 "copy": true, 00:15:11.459 "nvme_iov_md": false 00:15:11.459 }, 00:15:11.459 "memory_domains": [ 00:15:11.459 { 00:15:11.459 "dma_device_id": "system", 00:15:11.459 "dma_device_type": 1 00:15:11.459 }, 00:15:11.459 { 00:15:11.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.459 "dma_device_type": 2 00:15:11.459 } 00:15:11.459 ], 00:15:11.459 "driver_specific": {} 00:15:11.459 } 00:15:11.459 ] 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:15:11.459 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:11.460 Null_1 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:11.460 [ 00:15:11.460 { 00:15:11.460 "name": "Null_1", 00:15:11.460 "aliases": [ 00:15:11.460 "26687364-a368-423d-bede-bc33431665ce" 00:15:11.460 ], 00:15:11.460 "product_name": "Null disk", 00:15:11.460 "block_size": 512, 00:15:11.460 "num_blocks": 262144, 00:15:11.460 "uuid": "26687364-a368-423d-bede-bc33431665ce", 00:15:11.460 "assigned_rate_limits": { 00:15:11.460 "rw_ios_per_sec": 0, 00:15:11.460 "rw_mbytes_per_sec": 0, 00:15:11.460 
"r_mbytes_per_sec": 0, 00:15:11.460 "w_mbytes_per_sec": 0 00:15:11.460 }, 00:15:11.460 "claimed": false, 00:15:11.460 "zoned": false, 00:15:11.460 "supported_io_types": { 00:15:11.460 "read": true, 00:15:11.460 "write": true, 00:15:11.460 "unmap": false, 00:15:11.460 "flush": false, 00:15:11.460 "reset": true, 00:15:11.460 "nvme_admin": false, 00:15:11.460 "nvme_io": false, 00:15:11.460 "nvme_io_md": false, 00:15:11.460 "write_zeroes": true, 00:15:11.460 "zcopy": false, 00:15:11.460 "get_zone_info": false, 00:15:11.460 "zone_management": false, 00:15:11.460 "zone_append": false, 00:15:11.460 "compare": false, 00:15:11.460 "compare_and_write": false, 00:15:11.460 "abort": true, 00:15:11.460 "seek_hole": false, 00:15:11.460 "seek_data": false, 00:15:11.460 "copy": false, 00:15:11.460 "nvme_iov_md": false 00:15:11.460 }, 00:15:11.460 "driver_specific": {} 00:15:11.460 } 00:15:11.460 ] 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:15:11.460 21:28:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:11.460 Running I/O for 60 seconds... 
00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 68393.62 273574.50 0.00 0.00 278528.00 0.00 0.00 ' 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=68393.62 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 68393 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=68393 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=17000 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 17000 -gt 1000 ']' 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.728 21:28:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:16.728 ************************************ 00:15:16.728 START TEST bdev_qos_iops 00:15:16.728 ************************************ 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 17000 IOPS Malloc_0 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=17000 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:16.728 21:28:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 16999.53 67998.12 0.00 0.00 69156.00 0.00 0.00 ' 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=16999.53 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@385 -- # echo 16999 00:15:21.999 ************************************ 00:15:21.999 END TEST bdev_qos_iops 00:15:21.999 ************************************ 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=16999 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=15300 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=18700 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 16999 -lt 15300 ']' 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 16999 -gt 18700 ']' 00:15:21.999 00:15:21.999 real 0m5.212s 00:15:21.999 user 0m0.109s 00:15:21.999 sys 0m0.033s 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.999 21:28:55 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:15:21.999 21:28:55 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:15:21.999 21:28:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:15:21.999 21:28:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:21.999 21:28:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:15:21.999 21:28:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:21.999 21:28:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:21.999 21:28:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:15:21.999 21:28:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 23149.17 92596.67 0.00 0.00 94208.00 0.00 0.00 ' 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=94208.00 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 94208 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=94208 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=9 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 9 -lt 2 ']' 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 
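Note: run_qos_test, used above for the 17000-IOPS cap on Malloc_0 (and next for the bandwidth caps), samples the device with iostat.py and accepts the result only if it lands within +/-10% of the cap, which is where lower_limit=15300 and upper_limit=18700 come from. A condensed sketch of that check; the helper paths are the ones from this workspace.

# Sketch of the +/-10% acceptance check behind run_qos_test (condensed).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
IOSTAT=/home/vagrant/spdk_repo/spdk/scripts/iostat.py
limit=17000
$RPC bdev_set_qos_limit --rw_ios_per_sec "$limit" Malloc_0
# Column 2 of the per-device iostat line is IOPS; -d device stats, -i interval, -t count.
iops=$($IOSTAT -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print int($2)}')
lower=$((limit * 90 / 100)); upper=$((limit * 110 / 100))
if ((iops < lower || iops > upper)); then
    echo "QoS miss: $iops IOPS not within [$lower, $upper]" >&2
    exit 1
fi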
00:15:27.266 21:29:00 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.266 21:29:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 ************************************ 00:15:27.266 START TEST bdev_qos_bw 00:15:27.266 ************************************ 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 9 BANDWIDTH Null_1 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=9 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:15:27.266 21:29:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 2308.03 9232.12 0.00 0.00 9476.00 0.00 0.00 ' 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=9476.00 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 9476 00:15:32.540 ************************************ 00:15:32.540 END TEST bdev_qos_bw 00:15:32.540 ************************************ 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=9476 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=9216 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=8294 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=10137 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 9476 -lt 8294 ']' 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 9476 -gt 10137 ']' 00:15:32.540 00:15:32.540 real 0m5.261s 00:15:32.540 user 0m0.130s 00:15:32.540 sys 0m0.021s 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.540 21:29:05 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:32.540 ************************************ 00:15:32.540 START TEST bdev_qos_ro_bw 00:15:32.540 ************************************ 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:32.540 21:29:05 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:15:37.806 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.67 2050.67 0.00 0.00 2072.00 0.00 0.00 ' 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2072.00 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2072 00:15:37.807 ************************************ 00:15:37.807 END TEST bdev_qos_ro_bw 00:15:37.807 ************************************ 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2072 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 
00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2072 -lt 1843 ']' 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2072 -gt 2252 ']' 00:15:37.807 00:15:37.807 real 0m5.171s 00:15:37.807 user 0m0.122s 00:15:37.807 sys 0m0.020s 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:37.807 21:29:10 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:15:37.807 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:15:37.807 21:29:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:15:37.807 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.807 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:38.396 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.396 21:29:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:15:38.396 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.396 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:38.672 00:15:38.672 Latency(us) 00:15:38.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.672 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:38.672 Malloc_0 : 26.80 22999.06 89.84 0.00 0.00 11024.10 2089.14 509177.52 00:15:38.672 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:38.672 Null_1 : 27.04 22669.64 88.55 0.00 0.00 11262.90 726.19 245430.89 00:15:38.672 =================================================================================================================== 00:15:38.672 Total : 45668.70 178.39 0.00 0.00 11143.18 726.19 509177.52 00:15:38.672 0 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 119929 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 119929 ']' 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 119929 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119929 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119929' 00:15:38.672 killing process with pid 119929 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 119929 00:15:38.672 Received shutdown signal, test time was about 27.081208 seconds 00:15:38.672 00:15:38.672 Latency(us) 00:15:38.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.672 
=================================================================================================================== 00:15:38.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.672 21:29:11 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 119929 00:15:40.575 ************************************ 00:15:40.575 END TEST bdev_qos 00:15:40.575 ************************************ 00:15:40.575 21:29:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:15:40.575 00:15:40.575 real 0m29.952s 00:15:40.575 user 0m30.658s 00:15:40.575 sys 0m0.668s 00:15:40.575 21:29:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.575 21:29:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:40.575 21:29:13 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:15:40.575 21:29:13 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:15:40.575 21:29:13 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:40.575 21:29:13 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.575 21:29:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:40.575 ************************************ 00:15:40.575 START TEST bdev_qd_sampling 00:15:40.575 ************************************ 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=120439 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 120439' 00:15:40.575 Process bdev QD sampling period testing pid: 120439 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 120439 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 120439 ']' 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.575 21:29:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:40.575 [2024-07-15 21:29:13.572744] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
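Note: each sub-suite follows the same control flow visible here: start bdevperf with -z (wait for RPC) plus the workload flags, configure the bdevs over RPC, trigger the run with bdevperf.py perform_tests, then kill the process. A sketch for the queue-depth sampling case; the sleep stands in for the harness's waitforlisten helper.

# Sketch of the "-z + perform_tests" control flow used by these suites.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/examples/bdevperf" -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' &
BDEVPERF_PID=$!
sleep 2                                                              # stand-in for waitforlisten
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b Malloc_QD 128 512
"$SPDK_DIR/scripts/rpc.py" bdev_set_qd_sampling_period Malloc_QD 10  # sampling period as set in this run
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" perform_tests         # kick the configured workload
kill "$BDEVPERF_PID"
wait "$BDEVPERF_PID" 2>/dev/null || true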
00:15:40.575 [2024-07-15 21:29:13.573001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120439 ] 00:15:40.575 [2024-07-15 21:29:13.740414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:40.833 [2024-07-15 21:29:13.963934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.833 [2024-07-15 21:29:13.963939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:41.402 Malloc_QD 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.402 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:41.402 [ 00:15:41.402 { 00:15:41.402 "name": "Malloc_QD", 00:15:41.402 "aliases": [ 00:15:41.402 "a860663f-65ad-42e5-8279-9db8adc9922a" 00:15:41.402 ], 00:15:41.402 "product_name": "Malloc disk", 00:15:41.402 "block_size": 512, 00:15:41.403 "num_blocks": 262144, 00:15:41.403 "uuid": "a860663f-65ad-42e5-8279-9db8adc9922a", 00:15:41.403 "assigned_rate_limits": { 00:15:41.403 "rw_ios_per_sec": 0, 00:15:41.403 "rw_mbytes_per_sec": 0, 00:15:41.403 "r_mbytes_per_sec": 0, 00:15:41.403 "w_mbytes_per_sec": 0 00:15:41.403 }, 00:15:41.403 "claimed": false, 00:15:41.403 "zoned": false, 00:15:41.403 "supported_io_types": { 00:15:41.403 "read": true, 00:15:41.403 "write": true, 00:15:41.403 "unmap": true, 00:15:41.403 "flush": true, 00:15:41.403 "reset": true, 00:15:41.403 "nvme_admin": 
false, 00:15:41.403 "nvme_io": false, 00:15:41.403 "nvme_io_md": false, 00:15:41.403 "write_zeroes": true, 00:15:41.403 "zcopy": true, 00:15:41.403 "get_zone_info": false, 00:15:41.403 "zone_management": false, 00:15:41.403 "zone_append": false, 00:15:41.403 "compare": false, 00:15:41.403 "compare_and_write": false, 00:15:41.403 "abort": true, 00:15:41.403 "seek_hole": false, 00:15:41.403 "seek_data": false, 00:15:41.403 "copy": true, 00:15:41.403 "nvme_iov_md": false 00:15:41.403 }, 00:15:41.403 "memory_domains": [ 00:15:41.403 { 00:15:41.403 "dma_device_id": "system", 00:15:41.403 "dma_device_type": 1 00:15:41.403 }, 00:15:41.403 { 00:15:41.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.403 "dma_device_type": 2 00:15:41.403 } 00:15:41.403 ], 00:15:41.403 "driver_specific": {} 00:15:41.403 } 00:15:41.403 ] 00:15:41.403 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.403 21:29:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:15:41.403 21:29:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:15:41.403 21:29:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:41.669 Running I/O for 5 seconds... 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:15:43.579 "tick_rate": 2290000000, 00:15:43.579 "ticks": 2021785950868, 00:15:43.579 "bdevs": [ 00:15:43.579 { 00:15:43.579 "name": "Malloc_QD", 00:15:43.579 "bytes_read": 784372224, 00:15:43.579 "num_read_ops": 191491, 00:15:43.579 "bytes_written": 0, 00:15:43.579 "num_write_ops": 0, 00:15:43.579 "bytes_unmapped": 0, 00:15:43.579 "num_unmap_ops": 0, 00:15:43.579 "bytes_copied": 0, 00:15:43.579 "num_copy_ops": 0, 00:15:43.579 "read_latency_ticks": 2255631925300, 00:15:43.579 "max_read_latency_ticks": 13944692, 00:15:43.579 "min_read_latency_ticks": 370210, 00:15:43.579 "write_latency_ticks": 0, 00:15:43.579 "max_write_latency_ticks": 0, 00:15:43.579 "min_write_latency_ticks": 0, 00:15:43.579 "unmap_latency_ticks": 0, 00:15:43.579 "max_unmap_latency_ticks": 0, 00:15:43.579 
"min_unmap_latency_ticks": 0, 00:15:43.579 "copy_latency_ticks": 0, 00:15:43.579 "max_copy_latency_ticks": 0, 00:15:43.579 "min_copy_latency_ticks": 0, 00:15:43.579 "io_error": {}, 00:15:43.579 "queue_depth_polling_period": 10, 00:15:43.579 "queue_depth": 512, 00:15:43.579 "io_time": 30, 00:15:43.579 "weighted_io_time": 15360 00:15:43.579 } 00:15:43.579 ] 00:15:43.579 }' 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:43.579 00:15:43.579 Latency(us) 00:15:43.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.579 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:43.579 Malloc_QD : 2.01 49503.76 193.37 0.00 0.00 5158.06 1194.82 6610.84 00:15:43.579 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:43.579 Malloc_QD : 2.01 49863.58 194.78 0.00 0.00 5121.18 779.85 5752.29 00:15:43.579 =================================================================================================================== 00:15:43.579 Total : 99367.34 388.15 0.00 0.00 5139.55 779.85 6610.84 00:15:43.579 0 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 120439 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 120439 ']' 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 120439 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.579 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120439 00:15:43.838 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.838 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.838 killing process with pid 120439 00:15:43.838 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120439' 00:15:43.838 21:29:16 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 120439 00:15:43.838 Received shutdown signal, test time was about 2.183341 seconds 00:15:43.838 00:15:43.838 Latency(us) 00:15:43.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.838 =================================================================================================================== 00:15:43.838 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:43.838 21:29:16 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 120439 00:15:45.218 21:29:18 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:15:45.218 00:15:45.218 real 0m5.073s 00:15:45.218 user 0m9.466s 00:15:45.218 sys 0m0.349s 00:15:45.218 21:29:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.218 21:29:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:45.218 ************************************ 00:15:45.218 END TEST bdev_qd_sampling 00:15:45.218 ************************************ 00:15:45.477 21:29:18 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:15:45.477 21:29:18 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:15:45.477 21:29:18 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:45.477 21:29:18 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.477 21:29:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:45.477 ************************************ 00:15:45.477 START TEST bdev_error 00:15:45.477 ************************************ 00:15:45.477 21:29:18 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:15:45.477 21:29:18 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:15:45.477 21:29:18 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:15:45.477 21:29:18 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:15:45.477 21:29:18 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=120533 00:15:45.477 21:29:18 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 120533' 00:15:45.477 Process error testing pid: 120533 00:15:45.477 21:29:18 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 120533 00:15:45.477 21:29:18 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 120533 ']' 00:15:45.477 21:29:18 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:15:45.477 21:29:18 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.477 21:29:18 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.477 21:29:18 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.477 21:29:18 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.477 21:29:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:45.477 [2024-07-15 21:29:18.712834] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
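Note: the bdev_error suite that starts here wraps a malloc bdev in an error-injection vbdev (EE_Dev_1 on top of Dev_1) so failures can be injected mid-run, with a second malloc bdev (Dev_2) left healthy for comparison. The RPCs below mirror the ones issued later in this run.

# Sketch: the error-injection fixture used by TEST bdev_error (RPCs as issued in
# this run, against the bdevperf instance started above with -z).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_malloc_create -b Dev_1 128 512    # base bdev to be wrapped
$RPC bdev_error_create Dev_1                # creates the EE_Dev_1 error vbdev on top
$RPC bdev_malloc_create -b Dev_2 128 512    # second, healthy bdev
$RPC bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type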
00:15:45.477 [2024-07-15 21:29:18.713086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120533 ] 00:15:45.735 [2024-07-15 21:29:18.874913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.735 [2024-07-15 21:29:19.092051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.302 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.302 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:15:46.302 21:29:19 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:46.302 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.302 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 Dev_1 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 [ 00:15:46.561 { 00:15:46.561 "name": "Dev_1", 00:15:46.561 "aliases": [ 00:15:46.561 "0de0cf7a-9c62-4e52-9898-d32e18be5de0" 00:15:46.561 ], 00:15:46.561 "product_name": "Malloc disk", 00:15:46.561 "block_size": 512, 00:15:46.561 "num_blocks": 262144, 00:15:46.561 "uuid": "0de0cf7a-9c62-4e52-9898-d32e18be5de0", 00:15:46.561 "assigned_rate_limits": { 00:15:46.561 "rw_ios_per_sec": 0, 00:15:46.561 "rw_mbytes_per_sec": 0, 00:15:46.561 "r_mbytes_per_sec": 0, 00:15:46.561 "w_mbytes_per_sec": 0 00:15:46.561 }, 00:15:46.561 "claimed": false, 00:15:46.561 "zoned": false, 00:15:46.561 "supported_io_types": { 00:15:46.561 "read": true, 00:15:46.561 "write": true, 00:15:46.561 "unmap": true, 00:15:46.561 "flush": true, 00:15:46.561 "reset": true, 00:15:46.561 "nvme_admin": false, 00:15:46.561 "nvme_io": false, 00:15:46.561 "nvme_io_md": false, 00:15:46.561 "write_zeroes": true, 00:15:46.561 "zcopy": true, 00:15:46.561 "get_zone_info": false, 00:15:46.561 "zone_management": false, 00:15:46.561 "zone_append": false, 
00:15:46.561 "compare": false, 00:15:46.561 "compare_and_write": false, 00:15:46.561 "abort": true, 00:15:46.561 "seek_hole": false, 00:15:46.561 "seek_data": false, 00:15:46.561 "copy": true, 00:15:46.561 "nvme_iov_md": false 00:15:46.561 }, 00:15:46.561 "memory_domains": [ 00:15:46.561 { 00:15:46.561 "dma_device_id": "system", 00:15:46.561 "dma_device_type": 1 00:15:46.561 }, 00:15:46.561 { 00:15:46.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.561 "dma_device_type": 2 00:15:46.561 } 00:15:46.561 ], 00:15:46.561 "driver_specific": {} 00:15:46.561 } 00:15:46.561 ] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:46.561 21:29:19 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 true 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 Dev_2 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.561 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 [ 00:15:46.561 { 00:15:46.561 "name": "Dev_2", 00:15:46.561 "aliases": [ 00:15:46.561 "572e6c5b-9f5f-4eb0-91f0-790044214000" 00:15:46.561 ], 00:15:46.561 "product_name": "Malloc disk", 00:15:46.561 "block_size": 512, 00:15:46.561 "num_blocks": 262144, 00:15:46.561 "uuid": "572e6c5b-9f5f-4eb0-91f0-790044214000", 00:15:46.561 "assigned_rate_limits": { 00:15:46.561 "rw_ios_per_sec": 0, 00:15:46.561 "rw_mbytes_per_sec": 0, 00:15:46.561 "r_mbytes_per_sec": 0, 00:15:46.561 "w_mbytes_per_sec": 0 00:15:46.561 }, 00:15:46.561 "claimed": 
false, 00:15:46.561 "zoned": false, 00:15:46.561 "supported_io_types": { 00:15:46.561 "read": true, 00:15:46.561 "write": true, 00:15:46.561 "unmap": true, 00:15:46.561 "flush": true, 00:15:46.561 "reset": true, 00:15:46.561 "nvme_admin": false, 00:15:46.561 "nvme_io": false, 00:15:46.561 "nvme_io_md": false, 00:15:46.561 "write_zeroes": true, 00:15:46.561 "zcopy": true, 00:15:46.561 "get_zone_info": false, 00:15:46.561 "zone_management": false, 00:15:46.561 "zone_append": false, 00:15:46.561 "compare": false, 00:15:46.561 "compare_and_write": false, 00:15:46.561 "abort": true, 00:15:46.561 "seek_hole": false, 00:15:46.561 "seek_data": false, 00:15:46.561 "copy": true, 00:15:46.561 "nvme_iov_md": false 00:15:46.561 }, 00:15:46.561 "memory_domains": [ 00:15:46.561 { 00:15:46.561 "dma_device_id": "system", 00:15:46.561 "dma_device_type": 1 00:15:46.561 }, 00:15:46.561 { 00:15:46.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.561 "dma_device_type": 2 00:15:46.561 } 00:15:46.561 ], 00:15:46.820 "driver_specific": {} 00:15:46.820 } 00:15:46.820 ] 00:15:46.820 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.820 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:46.820 21:29:19 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:46.820 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.820 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:46.820 21:29:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.820 21:29:19 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:15:46.820 21:29:19 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:46.820 Running I/O for 5 seconds... 00:15:47.755 Process is existed as continue on error is set. Pid: 120533 00:15:47.755 21:29:20 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 120533 00:15:47.755 21:29:20 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 120533' 00:15:47.755 21:29:20 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:15:47.755 21:29:20 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.755 21:29:20 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:47.755 21:29:20 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.755 21:29:20 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:15:47.755 21:29:20 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.755 21:29:20 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:47.755 Timeout while waiting for response: 00:15:47.755 00:15:47.755 00:15:48.013 21:29:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.013 21:29:21 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:15:52.199 00:15:52.199 Latency(us) 00:15:52.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.199 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:52.199 EE_Dev_1 : 0.92 43784.99 171.04 5.42 0.00 362.64 139.51 672.53 00:15:52.199 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:52.199 Dev_2 : 5.00 82432.29 322.00 0.00 0.00 191.30 58.13 390125.22 00:15:52.199 =================================================================================================================== 00:15:52.199 Total : 126217.28 493.04 5.42 0.00 206.58 58.13 390125.22 00:15:53.135 21:29:26 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 120533 00:15:53.135 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 120533 ']' 00:15:53.135 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 120533 00:15:53.135 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:15:53.135 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.135 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120533 00:15:53.135 killing process with pid 120533 00:15:53.135 Received shutdown signal, test time was about 5.000000 seconds 00:15:53.135 00:15:53.136 Latency(us) 00:15:53.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.136 =================================================================================================================== 00:15:53.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.136 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:53.136 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:53.136 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120533' 00:15:53.136 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 120533 00:15:53.136 21:29:26 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 120533 00:15:55.036 Process error testing pid: 120665 00:15:55.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
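The run above (pid 120533) is the first half of error_test_suite: bdevperf is started idle with -z so it waits for an RPC trigger, a 128 MiB malloc bdev Dev_1 is wrapped by the error injector EE_Dev_1, a second malloc bdev Dev_2 is added, five failures are injected, and the 5-second randread job is started through bdevperf.py while EE_Dev_1 and Dev_1 are deleted mid-run. The sketch below reconstructs that RPC sequence as a standalone script; it assumes the repo path and default /var/tmp/spdk.sock socket used in this run, and the reading of -f as the "continue on error" flag referenced in the trace. It is an illustration, not the test script itself.

  # Sketch of the continue-on-error flow exercised above.
  # Assumptions: SPDK checkout at /home/vagrant/spdk_repo/spdk, default RPC socket.
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"

  # Start bdevperf idle (-z): 5 s randread, qd 16, 4 KiB I/O, continue on error (-f).
  "$SPDK/build/examples/bdevperf" -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' &
  ERR_PID=$!
  sleep 1                                               # stand-in for waitforlisten

  # Build the stack: Dev_1 wrapped by EE_Dev_1 (bdev_error), plus a plain Dev_2.
  "$RPC" bdev_malloc_create -b Dev_1 128 512            # 128 MiB, 512 B blocks
  "$RPC" bdev_error_create Dev_1                        # exposes EE_Dev_1 on top of Dev_1
  "$RPC" bdev_malloc_create -b Dev_2 128 512
  "$RPC" bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os

  # Kick off the timed run, then pull the error bdev out from under it, as the suite does.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 1 perform_tests &
  sleep 1
  "$RPC" bdev_error_delete EE_Dev_1
  "$RPC" bdev_malloc_delete Dev_1
  sleep 5                                               # let the 5 s job drain
  kill "$ERR_PID" 2>/dev/null || true
  wait "$ERR_PID" || true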
00:15:55.036 21:29:28 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=120665 00:15:55.036 21:29:28 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 120665' 00:15:55.036 21:29:28 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:15:55.036 21:29:28 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 120665 00:15:55.036 21:29:28 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 120665 ']' 00:15:55.036 21:29:28 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.036 21:29:28 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.036 21:29:28 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.036 21:29:28 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.036 21:29:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:55.036 [2024-07-15 21:29:28.233190] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:55.036 [2024-07-15 21:29:28.233579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120665 ] 00:15:55.293 [2024-07-15 21:29:28.412329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.293 [2024-07-15 21:29:28.635544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.865 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.865 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:15:55.865 21:29:29 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:55.865 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.865 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:56.123 Dev_1 00:15:56.123 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.123 21:29:29 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:15:56.123 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:15:56.123 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:56.123 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:56.123 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:56.123 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:56.123 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:56.123 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.124 21:29:29 
blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:56.124 [ 00:15:56.124 { 00:15:56.124 "name": "Dev_1", 00:15:56.124 "aliases": [ 00:15:56.124 "07f1914f-fc2c-4f85-ad85-0e8beb39c8fd" 00:15:56.124 ], 00:15:56.124 "product_name": "Malloc disk", 00:15:56.124 "block_size": 512, 00:15:56.124 "num_blocks": 262144, 00:15:56.124 "uuid": "07f1914f-fc2c-4f85-ad85-0e8beb39c8fd", 00:15:56.124 "assigned_rate_limits": { 00:15:56.124 "rw_ios_per_sec": 0, 00:15:56.124 "rw_mbytes_per_sec": 0, 00:15:56.124 "r_mbytes_per_sec": 0, 00:15:56.124 "w_mbytes_per_sec": 0 00:15:56.124 }, 00:15:56.124 "claimed": false, 00:15:56.124 "zoned": false, 00:15:56.124 "supported_io_types": { 00:15:56.124 "read": true, 00:15:56.124 "write": true, 00:15:56.124 "unmap": true, 00:15:56.124 "flush": true, 00:15:56.124 "reset": true, 00:15:56.124 "nvme_admin": false, 00:15:56.124 "nvme_io": false, 00:15:56.124 "nvme_io_md": false, 00:15:56.124 "write_zeroes": true, 00:15:56.124 "zcopy": true, 00:15:56.124 "get_zone_info": false, 00:15:56.124 "zone_management": false, 00:15:56.124 "zone_append": false, 00:15:56.124 "compare": false, 00:15:56.124 "compare_and_write": false, 00:15:56.124 "abort": true, 00:15:56.124 "seek_hole": false, 00:15:56.124 "seek_data": false, 00:15:56.124 "copy": true, 00:15:56.124 "nvme_iov_md": false 00:15:56.124 }, 00:15:56.124 "memory_domains": [ 00:15:56.124 { 00:15:56.124 "dma_device_id": "system", 00:15:56.124 "dma_device_type": 1 00:15:56.124 }, 00:15:56.124 { 00:15:56.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.124 "dma_device_type": 2 00:15:56.124 } 00:15:56.124 ], 00:15:56.124 "driver_specific": {} 00:15:56.124 } 00:15:56.124 ] 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:56.124 21:29:29 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:56.124 true 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.124 21:29:29 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.124 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:56.384 Dev_2 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.384 21:29:29 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:56.384 21:29:29 blockdev_general.bdev_error -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:56.384 [ 00:15:56.384 { 00:15:56.384 "name": "Dev_2", 00:15:56.384 "aliases": [ 00:15:56.384 "47a0af78-8c61-477a-a5ab-ba0e99f2d221" 00:15:56.384 ], 00:15:56.384 "product_name": "Malloc disk", 00:15:56.384 "block_size": 512, 00:15:56.384 "num_blocks": 262144, 00:15:56.384 "uuid": "47a0af78-8c61-477a-a5ab-ba0e99f2d221", 00:15:56.384 "assigned_rate_limits": { 00:15:56.384 "rw_ios_per_sec": 0, 00:15:56.384 "rw_mbytes_per_sec": 0, 00:15:56.384 "r_mbytes_per_sec": 0, 00:15:56.384 "w_mbytes_per_sec": 0 00:15:56.384 }, 00:15:56.384 "claimed": false, 00:15:56.384 "zoned": false, 00:15:56.384 "supported_io_types": { 00:15:56.384 "read": true, 00:15:56.384 "write": true, 00:15:56.384 "unmap": true, 00:15:56.384 "flush": true, 00:15:56.384 "reset": true, 00:15:56.384 "nvme_admin": false, 00:15:56.384 "nvme_io": false, 00:15:56.384 "nvme_io_md": false, 00:15:56.384 "write_zeroes": true, 00:15:56.384 "zcopy": true, 00:15:56.384 "get_zone_info": false, 00:15:56.384 "zone_management": false, 00:15:56.384 "zone_append": false, 00:15:56.384 "compare": false, 00:15:56.384 "compare_and_write": false, 00:15:56.384 "abort": true, 00:15:56.384 "seek_hole": false, 00:15:56.384 "seek_data": false, 00:15:56.384 "copy": true, 00:15:56.384 "nvme_iov_md": false 00:15:56.384 }, 00:15:56.384 "memory_domains": [ 00:15:56.384 { 00:15:56.384 "dma_device_id": "system", 00:15:56.384 "dma_device_type": 1 00:15:56.384 }, 00:15:56.384 { 00:15:56.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.384 "dma_device_type": 2 00:15:56.384 } 00:15:56.384 ], 00:15:56.384 "driver_specific": {} 00:15:56.384 } 00:15:56.384 ] 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:56.384 21:29:29 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.384 21:29:29 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 120665 00:15:56.384 21:29:29 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 120665 00:15:56.384 21:29:29 
blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.384 21:29:29 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 120665 00:15:56.384 Running I/O for 5 seconds... 00:15:56.384 task offset: 181496 on job bdev=EE_Dev_1 fails 00:15:56.384 00:15:56.384 Latency(us) 00:15:56.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.384 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:56.384 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:15:56.384 EE_Dev_1 : 0.00 28985.51 113.22 6587.62 0.00 355.46 135.94 661.80 00:15:56.384 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:56.384 Dev_2 : 0.00 20125.79 78.62 0.00 0.00 555.18 129.68 1023.11 00:15:56.384 =================================================================================================================== 00:15:56.384 Total : 49111.29 191.84 6587.62 0.00 463.78 129.68 1023.11 00:15:56.384 [2024-07-15 21:29:29.674856] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:56.384 request: 00:15:56.384 { 00:15:56.384 "method": "perform_tests", 00:15:56.384 "req_id": 1 00:15:56.384 } 00:15:56.384 Got JSON-RPC error response 00:15:56.384 response: 00:15:56.384 { 00:15:56.384 "code": -32603, 00:15:56.384 "message": "bdevperf failed with error Operation not permitted" 00:15:56.384 } 00:15:58.950 ************************************ 00:15:58.950 END TEST bdev_error 00:15:58.950 ************************************ 00:15:58.950 21:29:31 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:15:58.950 21:29:31 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.950 21:29:31 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:15:58.950 21:29:31 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:15:58.950 21:29:31 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:15:58.950 21:29:31 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.950 00:15:58.950 real 0m13.052s 00:15:58.950 user 0m13.256s 00:15:58.950 sys 0m0.769s 00:15:58.950 21:29:31 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.950 21:29:31 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:58.950 21:29:31 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:15:58.951 21:29:31 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:15:58.951 21:29:31 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:58.951 21:29:31 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.951 21:29:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:58.951 ************************************ 00:15:58.951 START TEST bdev_stat 00:15:58.951 ************************************ 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # 
STAT_DEV=Malloc_STAT 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=120739 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 120739' 00:15:58.951 Process Bdev IO statistics testing pid: 120739 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 120739 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 120739 ']' 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.951 21:29:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:58.951 [2024-07-15 21:29:31.835848] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:58.951 [2024-07-15 21:29:31.836081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120739 ] 00:15:58.951 [2024-07-15 21:29:32.001384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:58.951 [2024-07-15 21:29:32.226337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.951 [2024-07-15 21:29:32.226342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.520 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.520 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:15:59.520 21:29:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:15:59.520 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.520 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:59.778 Malloc_STAT 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.778 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:59.778 [ 00:15:59.778 { 00:15:59.778 "name": "Malloc_STAT", 00:15:59.778 "aliases": [ 00:15:59.778 "9cc1a901-8c60-4110-9dde-d0cd44636ed5" 00:15:59.778 ], 00:15:59.778 "product_name": "Malloc disk", 00:15:59.778 "block_size": 512, 00:15:59.778 "num_blocks": 262144, 00:15:59.779 "uuid": "9cc1a901-8c60-4110-9dde-d0cd44636ed5", 00:15:59.779 "assigned_rate_limits": { 00:15:59.779 "rw_ios_per_sec": 0, 00:15:59.779 "rw_mbytes_per_sec": 0, 00:15:59.779 "r_mbytes_per_sec": 0, 00:15:59.779 "w_mbytes_per_sec": 0 00:15:59.779 }, 00:15:59.779 "claimed": false, 00:15:59.779 "zoned": false, 00:15:59.779 "supported_io_types": { 00:15:59.779 "read": true, 00:15:59.779 "write": true, 00:15:59.779 "unmap": true, 00:15:59.779 "flush": true, 00:15:59.779 "reset": true, 00:15:59.779 "nvme_admin": false, 00:15:59.779 "nvme_io": false, 00:15:59.779 "nvme_io_md": false, 00:15:59.779 "write_zeroes": true, 00:15:59.779 "zcopy": true, 00:15:59.779 "get_zone_info": false, 00:15:59.779 "zone_management": false, 00:15:59.779 "zone_append": false, 00:15:59.779 "compare": false, 00:15:59.779 "compare_and_write": false, 00:15:59.779 "abort": true, 00:15:59.779 "seek_hole": false, 00:15:59.779 "seek_data": false, 00:15:59.779 "copy": true, 00:15:59.779 "nvme_iov_md": false 00:15:59.779 }, 00:15:59.779 "memory_domains": [ 00:15:59.779 { 00:15:59.779 "dma_device_id": "system", 00:15:59.779 "dma_device_type": 1 00:15:59.779 }, 00:15:59.779 { 00:15:59.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.779 "dma_device_type": 2 00:15:59.779 } 00:15:59.779 ], 00:15:59.779 "driver_specific": {} 00:15:59.779 } 00:15:59.779 ] 00:15:59.779 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.779 21:29:32 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:15:59.779 21:29:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:15:59.779 21:29:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:59.779 Running I/O for 10 seconds... 
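The bdev_stat setup above follows the same pattern with two reactors (-m 0x3) and per-channel stat gathering (-C): bdevperf starts idle, Malloc_STAT is created over RPC, and the 10-second randread run is triggered with bdevperf.py. A minimal sketch of that startup, under the same path and socket assumptions as the previous sketch:

  # Sketch of the bdev_stat startup shown above (illustrative only).
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"

  # Two cores (-m 0x3), qd 256, 4 KiB randread for 10 s, per-channel stats (-C).
  "$SPDK/build/examples/bdevperf" -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' &
  STAT_PID=$!
  sleep 1                                               # stand-in for waitforlisten

  "$RPC" bdev_malloc_create -b Malloc_STAT 128 512      # the bdev the run reads from
  "$RPC" bdev_wait_for_examine                          # same wait the suite performs
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests &   # starts the 10 s run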
00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:16:01.708 "tick_rate": 2290000000, 00:16:01.708 "ticks": 2063659029162, 00:16:01.708 "bdevs": [ 00:16:01.708 { 00:16:01.708 "name": "Malloc_STAT", 00:16:01.708 "bytes_read": 757109248, 00:16:01.708 "num_read_ops": 184835, 00:16:01.708 "bytes_written": 0, 00:16:01.708 "num_write_ops": 0, 00:16:01.708 "bytes_unmapped": 0, 00:16:01.708 "num_unmap_ops": 0, 00:16:01.708 "bytes_copied": 0, 00:16:01.708 "num_copy_ops": 0, 00:16:01.708 "read_latency_ticks": 2240753678674, 00:16:01.708 "max_read_latency_ticks": 15783124, 00:16:01.708 "min_read_latency_ticks": 368284, 00:16:01.708 "write_latency_ticks": 0, 00:16:01.708 "max_write_latency_ticks": 0, 00:16:01.708 "min_write_latency_ticks": 0, 00:16:01.708 "unmap_latency_ticks": 0, 00:16:01.708 "max_unmap_latency_ticks": 0, 00:16:01.708 "min_unmap_latency_ticks": 0, 00:16:01.708 "copy_latency_ticks": 0, 00:16:01.708 "max_copy_latency_ticks": 0, 00:16:01.708 "min_copy_latency_ticks": 0, 00:16:01.708 "io_error": {} 00:16:01.708 } 00:16:01.708 ] 00:16:01.708 }' 00:16:01.708 21:29:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:16:01.708 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=184835 00:16:01.708 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:16:01.708 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.708 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:01.708 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.708 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:16:01.708 "tick_rate": 2290000000, 00:16:01.708 "ticks": 2063834679276, 00:16:01.708 "name": "Malloc_STAT", 00:16:01.708 "channels": [ 00:16:01.708 { 00:16:01.708 "thread_id": 2, 00:16:01.708 "bytes_read": 384827392, 00:16:01.708 "num_read_ops": 93952, 00:16:01.708 "bytes_written": 0, 00:16:01.708 "num_write_ops": 0, 00:16:01.708 "bytes_unmapped": 0, 00:16:01.708 "num_unmap_ops": 0, 
00:16:01.708 "bytes_copied": 0, 00:16:01.708 "num_copy_ops": 0, 00:16:01.708 "read_latency_ticks": 1164528076348, 00:16:01.708 "max_read_latency_ticks": 15783124, 00:16:01.708 "min_read_latency_ticks": 8943374, 00:16:01.708 "write_latency_ticks": 0, 00:16:01.708 "max_write_latency_ticks": 0, 00:16:01.708 "min_write_latency_ticks": 0, 00:16:01.708 "unmap_latency_ticks": 0, 00:16:01.708 "max_unmap_latency_ticks": 0, 00:16:01.708 "min_unmap_latency_ticks": 0, 00:16:01.708 "copy_latency_ticks": 0, 00:16:01.708 "max_copy_latency_ticks": 0, 00:16:01.708 "min_copy_latency_ticks": 0 00:16:01.708 }, 00:16:01.708 { 00:16:01.708 "thread_id": 3, 00:16:01.708 "bytes_read": 402653184, 00:16:01.708 "num_read_ops": 98304, 00:16:01.708 "bytes_written": 0, 00:16:01.708 "num_write_ops": 0, 00:16:01.708 "bytes_unmapped": 0, 00:16:01.708 "num_unmap_ops": 0, 00:16:01.708 "bytes_copied": 0, 00:16:01.708 "num_copy_ops": 0, 00:16:01.708 "read_latency_ticks": 1166576931688, 00:16:01.708 "max_read_latency_ticks": 12975792, 00:16:01.708 "min_read_latency_ticks": 8954070, 00:16:01.708 "write_latency_ticks": 0, 00:16:01.708 "max_write_latency_ticks": 0, 00:16:01.708 "min_write_latency_ticks": 0, 00:16:01.708 "unmap_latency_ticks": 0, 00:16:01.708 "max_unmap_latency_ticks": 0, 00:16:01.708 "min_unmap_latency_ticks": 0, 00:16:01.708 "copy_latency_ticks": 0, 00:16:01.708 "max_copy_latency_ticks": 0, 00:16:01.708 "min_copy_latency_ticks": 0 00:16:01.708 } 00:16:01.708 ] 00:16:01.708 }' 00:16:01.708 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=93952 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=93952 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=98304 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=192256 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:16:01.966 "tick_rate": 2290000000, 00:16:01.966 "ticks": 2064131843234, 00:16:01.966 "bdevs": [ 00:16:01.966 { 00:16:01.966 "name": "Malloc_STAT", 00:16:01.966 "bytes_read": 838898176, 00:16:01.966 "num_read_ops": 204803, 00:16:01.966 "bytes_written": 0, 00:16:01.966 "num_write_ops": 0, 00:16:01.966 "bytes_unmapped": 0, 00:16:01.966 "num_unmap_ops": 0, 00:16:01.966 "bytes_copied": 0, 00:16:01.966 "num_copy_ops": 0, 00:16:01.966 "read_latency_ticks": 2483856174900, 00:16:01.966 "max_read_latency_ticks": 15824820, 00:16:01.966 "min_read_latency_ticks": 368284, 00:16:01.966 "write_latency_ticks": 0, 00:16:01.966 "max_write_latency_ticks": 0, 00:16:01.966 "min_write_latency_ticks": 0, 00:16:01.966 "unmap_latency_ticks": 0, 00:16:01.966 "max_unmap_latency_ticks": 0, 00:16:01.966 "min_unmap_latency_ticks": 0, 00:16:01.966 "copy_latency_ticks": 0, 00:16:01.966 "max_copy_latency_ticks": 0, 00:16:01.966 
"min_copy_latency_ticks": 0, 00:16:01.966 "io_error": {} 00:16:01.966 } 00:16:01.966 ] 00:16:01.966 }' 00:16:01.966 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:16:01.967 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=204803 00:16:01.967 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 192256 -lt 184835 ']' 00:16:01.967 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 192256 -gt 204803 ']' 00:16:01.967 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:16:01.967 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.967 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:01.967 00:16:01.967 Latency(us) 00:16:01.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.967 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:16:01.967 Malloc_STAT : 2.20 47191.94 184.34 0.00 0.00 5410.90 1473.84 6925.64 00:16:01.967 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:01.967 Malloc_STAT : 2.20 49273.28 192.47 0.00 0.00 5182.81 1201.97 5666.43 00:16:01.967 =================================================================================================================== 00:16:01.967 Total : 96465.22 376.82 0.00 0.00 5294.38 1201.97 6925.64 00:16:02.224 0 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 120739 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 120739 ']' 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 120739 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120739 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120739' 00:16:02.225 killing process with pid 120739 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 120739 00:16:02.225 Received shutdown signal, test time was about 2.379943 seconds 00:16:02.225 00:16:02.225 Latency(us) 00:16:02.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.225 =================================================================================================================== 00:16:02.225 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.225 21:29:35 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 120739 00:16:04.125 ************************************ 00:16:04.125 END TEST bdev_stat 00:16:04.125 ************************************ 00:16:04.125 21:29:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:16:04.125 00:16:04.125 real 0m5.344s 00:16:04.125 user 0m10.124s 00:16:04.125 sys 0m0.392s 
00:16:04.125 21:29:37 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.125 21:29:37 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:16:04.125 21:29:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:16:04.125 21:29:37 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:16:04.125 ************************************ 00:16:04.125 END TEST blockdev_general 00:16:04.125 ************************************ 00:16:04.125 00:16:04.125 real 2m35.945s 00:16:04.125 user 6m11.081s 00:16:04.125 sys 0m21.848s 00:16:04.125 21:29:37 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.125 21:29:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:16:04.125 21:29:37 -- common/autotest_common.sh@1142 -- # return 0 00:16:04.125 21:29:37 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:04.125 21:29:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:04.125 21:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.125 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:16:04.125 ************************************ 00:16:04.125 START TEST bdev_raid 00:16:04.125 ************************************ 00:16:04.125 21:29:37 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:04.125 * Looking for test storage... 
00:16:04.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:04.125 21:29:37 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:16:04.125 21:29:37 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:16:04.125 21:29:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.125 21:29:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.125 21:29:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.125 ************************************ 00:16:04.125 START TEST raid_function_test_raid0 00:16:04.125 ************************************ 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=120915 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 120915' 00:16:04.125 Process raid pid: 120915 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 120915 /var/tmp/spdk-raid.sock 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 120915 ']' 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:04.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
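The raid_function_test_raid0 run that starts here builds a raid0 bdev from two malloc bases inside a bdev_svc app on its own RPC socket, exports it over NBD, and verifies data and discard behaviour with dd, cmp and blkdiscard, as the trace that follows shows. A hedged sketch of that flow is below; the bdev_raid_create arguments (name, strip size, level, base list) are written from the standard rpc.py interface rather than copied from this trace and should be treated as assumptions, while the base sizes match the 131072 x 512 B geometry reported later in the log.

  # Sketch of the raid0 function-test flow (setup portion), illustrative only.
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  sleep 1                                      # the suite uses waitforlisten instead

  $RPC bdev_malloc_create -b Base_1 32 512     # 32 MiB each -> 131072 blocks total
  $RPC bdev_malloc_create -b Base_2 32 512
  $RPC bdev_raid_create -n raid -z 64 -r raid0 -b "Base_1 Base_2"   # assumed arguments
  $RPC bdev_raid_get_bdevs online              # should report the "raid" bdev

  # Export over NBD and verify a 2 MiB random pattern end to end.
  mkdir -p /raidtest
  $RPC nbd_start_disk raid /dev/nbd0
  dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
  dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0   # contents must match
  $RPC nbd_stop_disk /dev/nbd0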
00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.125 21:29:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:04.125 [2024-07-15 21:29:37.391212] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:04.125 [2024-07-15 21:29:37.391457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.383 [2024-07-15 21:29:37.557157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.641 [2024-07-15 21:29:37.783634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.641 [2024-07-15 21:29:38.003622] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.208 21:29:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.208 21:29:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:16:05.208 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:16:05.208 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:16:05.208 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:05.208 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:16:05.208 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:05.467 [2024-07-15 21:29:38.628678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:05.467 [2024-07-15 21:29:38.630687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:05.467 [2024-07-15 21:29:38.630834] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:05.467 [2024-07-15 21:29:38.630893] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:05.467 [2024-07-15 21:29:38.631086] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:05.467 [2024-07-15 21:29:38.631471] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:05.467 [2024-07-15 21:29:38.631519] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007580 00:16:05.467 [2024-07-15 21:29:38.631738] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.467 Base_1 00:16:05.467 Base_2 00:16:05.467 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:05.467 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.467 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.725 21:29:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:05.983 [2024-07-15 21:29:39.143821] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:05.983 /dev/nbd0 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.983 1+0 records in 00:16:05.983 1+0 records out 00:16:05.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283879 s, 14.4 MB/s 00:16:05.983 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:05.984 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:06.242 { 00:16:06.242 "nbd_device": "/dev/nbd0", 00:16:06.242 "bdev_name": "raid" 00:16:06.242 } 00:16:06.242 ]' 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:06.242 { 00:16:06.242 "nbd_device": "/dev/nbd0", 00:16:06.242 "bdev_name": "raid" 00:16:06.242 } 00:16:06.242 ]' 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=(0 1028 321) 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=(128 2035 456) 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:16:06.242 4096+0 records in 00:16:06.242 4096+0 records out 00:16:06.242 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0213111 s, 98.4 MB/s 00:16:06.242 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:06.537 4096+0 records in 00:16:06.537 4096+0 records out 00:16:06.537 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.181113 s, 11.6 MB/s 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:06.537 128+0 records in 00:16:06.537 128+0 records out 00:16:06.537 65536 bytes (66 kB, 64 KiB) copied, 0.000737103 s, 88.9 MB/s 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:06.537 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:06.538 2035+0 records in 00:16:06.538 2035+0 records out 00:16:06.538 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00393849 s, 265 MB/s 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:06.538 456+0 records in 00:16:06.538 
456+0 records out 00:16:06.538 233472 bytes (233 kB, 228 KiB) copied, 0.00188078 s, 124 MB/s 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.538 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.820 [2024-07-15 21:29:39.995013] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:06.820 21:29:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:06.820 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:07.079 21:29:40 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 120915 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 120915 ']' 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 120915 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120915 00:16:07.079 killing process with pid 120915 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120915' 00:16:07.079 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 120915 00:16:07.080 21:29:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 120915 00:16:07.080 [2024-07-15 21:29:40.339693] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.080 [2024-07-15 21:29:40.339797] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.080 [2024-07-15 21:29:40.339848] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.080 [2024-07-15 21:29:40.339857] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid, state offline 00:16:07.338 [2024-07-15 21:29:40.559532] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.715 ************************************ 00:16:08.715 END TEST raid_function_test_raid0 00:16:08.715 ************************************ 00:16:08.715 21:29:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:16:08.715 00:16:08.715 real 0m4.587s 00:16:08.715 user 0m5.883s 00:16:08.715 sys 0m0.771s 00:16:08.715 21:29:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.715 21:29:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:08.715 21:29:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:08.715 21:29:41 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:16:08.715 21:29:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:08.715 21:29:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.715 21:29:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.715 ************************************ 00:16:08.715 
START TEST raid_function_test_concat 00:16:08.715 ************************************ 00:16:08.715 21:29:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:16:08.715 21:29:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:16:08.715 21:29:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:16:08.715 21:29:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:16:08.715 21:29:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=121087 00:16:08.715 21:29:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:08.716 21:29:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 121087' 00:16:08.716 Process raid pid: 121087 00:16:08.716 21:29:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 121087 /var/tmp/spdk-raid.sock 00:16:08.716 21:29:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 121087 ']' 00:16:08.716 21:29:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.716 21:29:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.716 21:29:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.716 21:29:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.716 21:29:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:08.716 [2024-07-15 21:29:42.047725] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
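The raid0 pass above is driven by bdev_raid.sh's raid_unmap_data_verify helper. Condensed into plain shell, with the offsets, counts and paths taken from the xtrace (and assuming, as the test does, that a discarded range reads back as zeroes), the loop amounts to roughly the following sketch:

    nbd=/dev/nbd0
    blksize=512                      # from 'lsblk -o LOG-SEC' in the trace
    rw_blk_num=4096
    unmap_blk_offs=(0 1028 321)      # block offsets discarded on each iteration
    unmap_blk_nums=(128 2035 456)    # block counts discarded on each iteration

    # Seed the raid bdev (via its nbd device) and a local reference file with
    # identical random data, then confirm they match byte for byte.
    dd if=/dev/urandom of=/raidtest/raidrandtest bs=$blksize count=$rw_blk_num
    dd if=/raidtest/raidrandtest of=$nbd bs=$blksize count=$rw_blk_num oflag=direct
    blockdev --flushbufs $nbd
    cmp -b -n $((blksize * rw_blk_num)) /raidtest/raidrandtest $nbd

    for ((i = 0; i < ${#unmap_blk_offs[@]}; i++)); do
        unmap_off=$((blksize * unmap_blk_offs[i]))
        unmap_len=$((blksize * unmap_blk_nums[i]))
        # Zero the same range in the reference file that is about to be discarded.
        dd if=/dev/zero of=/raidtest/raidrandtest bs=$blksize \
            seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
        blkdiscard -o $unmap_off -l $unmap_len $nbd
        blockdev --flushbufs $nbd
        # If unmap left the range non-zero (or corrupted it), cmp fails the test.
        cmp -b -n $((blksize * rw_blk_num)) /raidtest/raidrandtest $nbd
    done

The concat pass that starts here runs the same loop against a two-disk concat raid bdev instead of raid0.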
00:16:08.716 [2024-07-15 21:29:42.047962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.975 [2024-07-15 21:29:42.212483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.234 [2024-07-15 21:29:42.425111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.491 [2024-07-15 21:29:42.636668] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.749 21:29:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.749 21:29:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:16:09.749 21:29:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:16:09.749 21:29:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:16:09.749 21:29:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:09.749 21:29:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:16:09.749 21:29:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:10.008 [2024-07-15 21:29:43.230681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:10.008 [2024-07-15 21:29:43.232552] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:10.008 [2024-07-15 21:29:43.232685] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:10.008 [2024-07-15 21:29:43.232721] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:10.008 [2024-07-15 21:29:43.232908] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:10.008 [2024-07-15 21:29:43.233251] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:10.008 [2024-07-15 21:29:43.233315] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007580 00:16:10.008 [2024-07-15 21:29:43.233526] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.008 Base_1 00:16:10.008 Base_2 00:16:10.008 21:29:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:10.008 21:29:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:10.008 21:29:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:16:10.266 21:29:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:16:10.266 21:29:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:16:10.266 21:29:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:10.266 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:10.266 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:16:10.266 21:29:43 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:10.266 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:16:10.267 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:10.267 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:16:10.267 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:10.267 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.267 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:10.531 [2024-07-15 21:29:43.682065] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:10.531 /dev/nbd0 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.531 1+0 records in 00:16:10.531 1+0 records out 00:16:10.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538708 s, 7.6 MB/s 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.531 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.532 21:29:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:10.532 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:10.532 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:10.799 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:10.799 { 00:16:10.799 "nbd_device": "/dev/nbd0", 00:16:10.799 "bdev_name": "raid" 00:16:10.799 } 00:16:10.799 ]' 00:16:10.799 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:10.799 { 00:16:10.799 "nbd_device": "/dev/nbd0", 00:16:10.799 "bdev_name": "raid" 00:16:10.799 } 00:16:10.799 ]' 00:16:10.799 21:29:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=(0 1028 321) 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=(128 2035 456) 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:16:10.799 4096+0 records in 00:16:10.799 4096+0 records out 00:16:10.799 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0311258 s, 67.4 MB/s 00:16:10.799 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # 
dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:11.057 4096+0 records in 00:16:11.057 4096+0 records out 00:16:11.057 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.212569 s, 9.9 MB/s 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:11.057 128+0 records in 00:16:11.057 128+0 records out 00:16:11.057 65536 bytes (66 kB, 64 KiB) copied, 0.00200076 s, 32.8 MB/s 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:11.057 2035+0 records in 00:16:11.057 2035+0 records out 00:16:11.057 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00364333 s, 286 MB/s 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:11.057 456+0 records in 00:16:11.057 456+0 records out 00:16:11.057 233472 bytes (233 kB, 228 KiB) copied, 0.00194824 s, 120 MB/s 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.057 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.314 [2024-07-15 21:29:44.605410] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:11.314 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:16:11.572 
21:29:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 121087 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 121087 ']' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 121087 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121087 00:16:11.572 killing process with pid 121087 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121087' 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 121087 00:16:11.572 21:29:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 121087 00:16:11.572 [2024-07-15 21:29:44.924262] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.572 [2024-07-15 21:29:44.924364] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.572 [2024-07-15 21:29:44.924412] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.572 [2024-07-15 21:29:44.924421] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid, state offline 00:16:11.829 [2024-07-15 21:29:45.123171] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.199 ************************************ 00:16:13.199 END TEST raid_function_test_concat 00:16:13.199 ************************************ 00:16:13.199 21:29:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:16:13.199 00:16:13.199 real 0m4.462s 00:16:13.199 user 0m5.620s 00:16:13.199 sys 0m0.830s 00:16:13.199 21:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.199 21:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:13.199 21:29:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:13.199 21:29:46 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:16:13.199 21:29:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:13.199 21:29:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.199 21:29:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.199 ************************************ 00:16:13.199 START TEST raid0_resize_test 00:16:13.199 ************************************ 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local 
blksize=512 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=121245 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 121245' 00:16:13.199 Process raid pid: 121245 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 121245 /var/tmp/spdk-raid.sock 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 121245 ']' 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:13.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.199 21:29:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.199 [2024-07-15 21:29:46.565056] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
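Stripped of its wrapper functions, the raid0_resize_test trace that follows reduces to the RPC sequence sketched below. Sizes are in MiB, the socket path is the test's own, and the expected num_blocks values (131072 until both base bdevs have grown, 262144 after) are the ones printed further down:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two 32 MiB null bdevs with 512-byte blocks, striped into a raid0 named Raid.
    $rpc_py bdev_null_create Base_1 32 512
    $rpc_py bdev_null_create Base_2 32 512
    $rpc_py bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

    # Growing one leg alone must not change the raid size: still 131072 blocks (64 MiB).
    $rpc_py bdev_null_resize Base_1 64
    $rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'     # expect 131072

    # Only after both legs are 64 MiB does the raid0 double to 262144 blocks (128 MiB).
    $rpc_py bdev_null_resize Base_2 64
    $rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'     # expect 262144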
00:16:13.199 [2024-07-15 21:29:46.565401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.456 [2024-07-15 21:29:46.751120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.714 [2024-07-15 21:29:46.965433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.973 [2024-07-15 21:29:47.189411] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.231 21:29:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.231 21:29:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:16:14.231 21:29:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:16:14.489 Base_1 00:16:14.489 21:29:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:16:14.489 Base_2 00:16:14.748 21:29:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:16:14.748 [2024-07-15 21:29:48.073887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:14.748 [2024-07-15 21:29:48.075932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:14.748 [2024-07-15 21:29:48.076057] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:14.748 [2024-07-15 21:29:48.076095] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:14.748 [2024-07-15 21:29:48.076282] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:14.748 [2024-07-15 21:29:48.076601] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:14.748 [2024-07-15 21:29:48.076641] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007580 00:16:14.748 [2024-07-15 21:29:48.076821] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.748 21:29:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:16:15.006 [2024-07-15 21:29:48.305493] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:15.006 [2024-07-15 21:29:48.305594] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:16:15.006 true 00:16:15.006 21:29:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:15.006 21:29:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:16:15.264 [2024-07-15 21:29:48.561195] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.264 21:29:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:16:15.264 21:29:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:16:15.264 21:29:48 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:16:15.264 21:29:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:16:15.522 [2024-07-15 21:29:48.784618] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:15.522 [2024-07-15 21:29:48.784726] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:16:15.522 [2024-07-15 21:29:48.784802] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:16:15.522 true 00:16:15.522 21:29:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:16:15.522 21:29:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:15.781 [2024-07-15 21:29:49.012394] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 121245 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 121245 ']' 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 121245 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121245 00:16:15.781 killing process with pid 121245 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121245' 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 121245 00:16:15.781 21:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 121245 00:16:15.781 [2024-07-15 21:29:49.057518] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.781 [2024-07-15 21:29:49.057613] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.781 [2024-07-15 21:29:49.057674] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.781 [2024-07-15 21:29:49.057683] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Raid, state offline 00:16:15.781 [2024-07-15 21:29:49.058276] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.157 ************************************ 00:16:17.157 END TEST raid0_resize_test 00:16:17.157 ************************************ 00:16:17.157 21:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:16:17.157 00:16:17.157 real 0m3.910s 00:16:17.157 user 0m5.468s 
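raid_state_function_test, which starts next, checks state transitions rather than data: a raid created against base bdevs that do not exist yet must sit in the configuring state and may only go online once every base bdev has appeared and been claimed. Leaving out the intermediate delete-and-recreate steps visible in the trace, the core pattern is roughly:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Creating the raid first is allowed; it simply stays in "configuring" until
    # every named base bdev exists ("base bdev BaseBdev1 doesn't exist now").
    $rpc_py bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # As each base bdev appears it is claimed automatically.
    $rpc_py bdev_malloc_create 32 512 -b BaseBdev1   # 1 of 2 discovered, still configuring
    $rpc_py bdev_malloc_create 32 512 -b BaseBdev2   # 2 of 2 discovered, raid goes online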
00:16:17.157 sys 0m0.454s 00:16:17.157 21:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.157 21:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.157 21:29:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:17.157 21:29:50 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:17.157 21:29:50 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:17.157 21:29:50 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:16:17.157 21:29:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:17.157 21:29:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.157 21:29:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.157 ************************************ 00:16:17.157 START TEST raid_state_function_test 00:16:17.157 ************************************ 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 
-- # '[' false = true ']' 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=121337 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121337' 00:16:17.157 Process raid pid: 121337 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 121337 /var/tmp/spdk-raid.sock 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 121337 ']' 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:17.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.157 21:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.415 [2024-07-15 21:29:50.550287] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:17.415 [2024-07-15 21:29:50.550989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.415 [2024-07-15 21:29:50.701413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.673 [2024-07-15 21:29:50.916464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.930 [2024-07-15 21:29:51.137786] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.189 21:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.189 21:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:18.189 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:18.448 [2024-07-15 21:29:51.691717] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.448 [2024-07-15 21:29:51.691909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.448 [2024-07-15 21:29:51.691970] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.448 [2024-07-15 21:29:51.692027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.448 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.705 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.705 "name": "Existed_Raid", 00:16:18.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.705 "strip_size_kb": 64, 00:16:18.705 "state": "configuring", 00:16:18.705 "raid_level": "raid0", 00:16:18.705 "superblock": false, 00:16:18.705 "num_base_bdevs": 2, 00:16:18.705 "num_base_bdevs_discovered": 0, 00:16:18.705 "num_base_bdevs_operational": 2, 00:16:18.705 "base_bdevs_list": [ 00:16:18.705 { 00:16:18.705 "name": "BaseBdev1", 00:16:18.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.705 "is_configured": false, 00:16:18.705 "data_offset": 0, 00:16:18.705 "data_size": 0 00:16:18.705 }, 00:16:18.705 { 00:16:18.705 "name": "BaseBdev2", 00:16:18.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.705 "is_configured": false, 00:16:18.705 "data_offset": 0, 00:16:18.705 "data_size": 0 00:16:18.705 } 00:16:18.705 ] 00:16:18.705 }' 00:16:18.705 21:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.705 21:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.272 21:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:19.529 [2024-07-15 21:29:52.785738] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.529 [2024-07-15 21:29:52.785873] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:19.529 21:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:19.786 [2024-07-15 21:29:53.033343] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.787 [2024-07-15 21:29:53.033482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.787 [2024-07-15 21:29:53.033512] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.787 [2024-07-15 21:29:53.033548] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.787 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.043 [2024-07-15 21:29:53.292263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.043 BaseBdev1 00:16:20.044 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:20.044 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:20.044 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:20.044 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:20.044 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:20.044 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:20.044 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.302 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:20.560 [ 00:16:20.560 { 00:16:20.560 "name": "BaseBdev1", 00:16:20.560 "aliases": [ 00:16:20.560 "6bf9cfb4-3fcc-4169-9df1-8c74e1b265d4" 00:16:20.560 ], 00:16:20.560 "product_name": "Malloc disk", 00:16:20.560 "block_size": 512, 00:16:20.560 "num_blocks": 65536, 00:16:20.560 "uuid": "6bf9cfb4-3fcc-4169-9df1-8c74e1b265d4", 00:16:20.560 "assigned_rate_limits": { 00:16:20.560 "rw_ios_per_sec": 0, 00:16:20.560 "rw_mbytes_per_sec": 0, 00:16:20.560 "r_mbytes_per_sec": 0, 00:16:20.560 "w_mbytes_per_sec": 0 00:16:20.560 }, 00:16:20.560 "claimed": true, 00:16:20.560 "claim_type": "exclusive_write", 00:16:20.560 "zoned": false, 00:16:20.560 "supported_io_types": { 00:16:20.560 "read": true, 00:16:20.560 "write": true, 00:16:20.560 "unmap": true, 00:16:20.560 "flush": true, 00:16:20.560 "reset": true, 00:16:20.560 "nvme_admin": false, 00:16:20.560 "nvme_io": false, 00:16:20.560 "nvme_io_md": false, 00:16:20.560 "write_zeroes": true, 00:16:20.560 "zcopy": true, 00:16:20.560 "get_zone_info": false, 00:16:20.560 "zone_management": false, 00:16:20.560 "zone_append": false, 00:16:20.560 "compare": false, 00:16:20.560 "compare_and_write": false, 00:16:20.560 "abort": true, 00:16:20.560 "seek_hole": false, 00:16:20.560 "seek_data": false, 00:16:20.560 "copy": true, 00:16:20.560 "nvme_iov_md": false 00:16:20.560 }, 00:16:20.560 "memory_domains": [ 00:16:20.560 { 00:16:20.560 "dma_device_id": "system", 00:16:20.560 "dma_device_type": 1 00:16:20.560 }, 00:16:20.560 { 00:16:20.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.560 "dma_device_type": 2 00:16:20.560 } 00:16:20.560 ], 00:16:20.560 "driver_specific": {} 00:16:20.560 } 00:16:20.560 ] 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.560 21:29:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.560 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.817 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.817 "name": "Existed_Raid", 00:16:20.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.817 "strip_size_kb": 64, 00:16:20.817 "state": "configuring", 00:16:20.817 "raid_level": "raid0", 00:16:20.817 "superblock": false, 00:16:20.817 "num_base_bdevs": 2, 00:16:20.817 "num_base_bdevs_discovered": 1, 00:16:20.817 "num_base_bdevs_operational": 2, 00:16:20.817 "base_bdevs_list": [ 00:16:20.817 { 00:16:20.817 "name": "BaseBdev1", 00:16:20.817 "uuid": "6bf9cfb4-3fcc-4169-9df1-8c74e1b265d4", 00:16:20.817 "is_configured": true, 00:16:20.817 "data_offset": 0, 00:16:20.817 "data_size": 65536 00:16:20.817 }, 00:16:20.817 { 00:16:20.817 "name": "BaseBdev2", 00:16:20.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.817 "is_configured": false, 00:16:20.817 "data_offset": 0, 00:16:20.817 "data_size": 0 00:16:20.817 } 00:16:20.817 ] 00:16:20.817 }' 00:16:20.817 21:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.817 21:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.384 21:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:21.642 [2024-07-15 21:29:54.833741] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.642 [2024-07-15 21:29:54.833879] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:16:21.642 21:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:21.900 [2024-07-15 21:29:55.053421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.900 [2024-07-15 21:29:55.055323] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.900 [2024-07-15 21:29:55.055433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
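Each raid_bdev_info JSON blob above is consumed by verify_raid_bdev_state. Reduced to its essentials (the jq one-liners are an illustrative condensation of the helper, not its literal code), the check looks like:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Pull the descriptor for the raid bdev under test, as the helper does.
    info=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Assert the fields the test tracks; with 'set -e' any mismatch fails the run.
    [ "$(jq -r '.state' <<< "$info")" = "configuring" ]
    [ "$(jq -r '.raid_level' <<< "$info")" = "raid0" ]
    [ "$(jq -r '.strip_size_kb' <<< "$info")" -eq 64 ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq 2 ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 1 ]

Once BaseBdev2 is added the same check is repeated with state online and num_base_bdevs_discovered 2, which is what the trace below goes on to verify.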
00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.900 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.158 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.158 "name": "Existed_Raid", 00:16:22.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.158 "strip_size_kb": 64, 00:16:22.159 "state": "configuring", 00:16:22.159 "raid_level": "raid0", 00:16:22.159 "superblock": false, 00:16:22.159 "num_base_bdevs": 2, 00:16:22.159 "num_base_bdevs_discovered": 1, 00:16:22.159 "num_base_bdevs_operational": 2, 00:16:22.159 "base_bdevs_list": [ 00:16:22.159 { 00:16:22.159 "name": "BaseBdev1", 00:16:22.159 "uuid": "6bf9cfb4-3fcc-4169-9df1-8c74e1b265d4", 00:16:22.159 "is_configured": true, 00:16:22.159 "data_offset": 0, 00:16:22.159 "data_size": 65536 00:16:22.159 }, 00:16:22.159 { 00:16:22.159 "name": "BaseBdev2", 00:16:22.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.159 "is_configured": false, 00:16:22.159 "data_offset": 0, 00:16:22.159 "data_size": 0 00:16:22.159 } 00:16:22.159 ] 00:16:22.159 }' 00:16:22.159 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.159 21:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.725 21:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.983 [2024-07-15 21:29:56.236796] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.983 [2024-07-15 21:29:56.236929] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:22.983 [2024-07-15 21:29:56.236952] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:22.983 [2024-07-15 21:29:56.237150] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:22.983 [2024-07-15 21:29:56.237524] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x616000007580 00:16:22.983 [2024-07-15 21:29:56.237571] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:22.983 [2024-07-15 21:29:56.237884] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.983 BaseBdev2 00:16:22.983 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:22.983 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:22.983 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:22.983 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:22.983 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:22.983 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:22.983 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.241 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.500 [ 00:16:23.500 { 00:16:23.500 "name": "BaseBdev2", 00:16:23.500 "aliases": [ 00:16:23.500 "ca3d9281-5fd6-4ed5-bd2e-30f97922ca6e" 00:16:23.500 ], 00:16:23.500 "product_name": "Malloc disk", 00:16:23.500 "block_size": 512, 00:16:23.500 "num_blocks": 65536, 00:16:23.500 "uuid": "ca3d9281-5fd6-4ed5-bd2e-30f97922ca6e", 00:16:23.500 "assigned_rate_limits": { 00:16:23.500 "rw_ios_per_sec": 0, 00:16:23.500 "rw_mbytes_per_sec": 0, 00:16:23.500 "r_mbytes_per_sec": 0, 00:16:23.500 "w_mbytes_per_sec": 0 00:16:23.500 }, 00:16:23.500 "claimed": true, 00:16:23.500 "claim_type": "exclusive_write", 00:16:23.500 "zoned": false, 00:16:23.500 "supported_io_types": { 00:16:23.500 "read": true, 00:16:23.500 "write": true, 00:16:23.500 "unmap": true, 00:16:23.500 "flush": true, 00:16:23.500 "reset": true, 00:16:23.500 "nvme_admin": false, 00:16:23.500 "nvme_io": false, 00:16:23.500 "nvme_io_md": false, 00:16:23.500 "write_zeroes": true, 00:16:23.500 "zcopy": true, 00:16:23.500 "get_zone_info": false, 00:16:23.500 "zone_management": false, 00:16:23.500 "zone_append": false, 00:16:23.500 "compare": false, 00:16:23.500 "compare_and_write": false, 00:16:23.500 "abort": true, 00:16:23.500 "seek_hole": false, 00:16:23.500 "seek_data": false, 00:16:23.500 "copy": true, 00:16:23.500 "nvme_iov_md": false 00:16:23.500 }, 00:16:23.500 "memory_domains": [ 00:16:23.500 { 00:16:23.500 "dma_device_id": "system", 00:16:23.500 "dma_device_type": 1 00:16:23.500 }, 00:16:23.500 { 00:16:23.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.500 "dma_device_type": 2 00:16:23.500 } 00:16:23.500 ], 00:16:23.500 "driver_specific": {} 00:16:23.500 } 00:16:23.500 ] 00:16:23.500 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:23.500 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:23.500 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:23.500 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:23.500 21:29:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.500 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.501 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.760 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.760 "name": "Existed_Raid", 00:16:23.760 "uuid": "8c9d68e8-cd30-4340-b6d0-9f5d95a4dd33", 00:16:23.760 "strip_size_kb": 64, 00:16:23.760 "state": "online", 00:16:23.760 "raid_level": "raid0", 00:16:23.760 "superblock": false, 00:16:23.760 "num_base_bdevs": 2, 00:16:23.760 "num_base_bdevs_discovered": 2, 00:16:23.760 "num_base_bdevs_operational": 2, 00:16:23.760 "base_bdevs_list": [ 00:16:23.760 { 00:16:23.760 "name": "BaseBdev1", 00:16:23.760 "uuid": "6bf9cfb4-3fcc-4169-9df1-8c74e1b265d4", 00:16:23.760 "is_configured": true, 00:16:23.760 "data_offset": 0, 00:16:23.760 "data_size": 65536 00:16:23.760 }, 00:16:23.760 { 00:16:23.760 "name": "BaseBdev2", 00:16:23.760 "uuid": "ca3d9281-5fd6-4ed5-bd2e-30f97922ca6e", 00:16:23.760 "is_configured": true, 00:16:23.760 "data_offset": 0, 00:16:23.760 "data_size": 65536 00:16:23.760 } 00:16:23.760 ] 00:16:23.760 }' 00:16:23.760 21:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.760 21:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.327 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:24.327 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:24.327 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:24.327 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:24.327 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:24.327 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:24.327 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:24.327 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:24.586 [2024-07-15 21:29:57.794590] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:24.586 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:24.586 "name": "Existed_Raid", 00:16:24.586 "aliases": [ 00:16:24.586 "8c9d68e8-cd30-4340-b6d0-9f5d95a4dd33" 00:16:24.586 ], 00:16:24.586 "product_name": "Raid Volume", 00:16:24.586 "block_size": 512, 00:16:24.586 "num_blocks": 131072, 00:16:24.586 "uuid": "8c9d68e8-cd30-4340-b6d0-9f5d95a4dd33", 00:16:24.586 "assigned_rate_limits": { 00:16:24.586 "rw_ios_per_sec": 0, 00:16:24.586 "rw_mbytes_per_sec": 0, 00:16:24.586 "r_mbytes_per_sec": 0, 00:16:24.586 "w_mbytes_per_sec": 0 00:16:24.586 }, 00:16:24.586 "claimed": false, 00:16:24.586 "zoned": false, 00:16:24.586 "supported_io_types": { 00:16:24.586 "read": true, 00:16:24.586 "write": true, 00:16:24.586 "unmap": true, 00:16:24.586 "flush": true, 00:16:24.586 "reset": true, 00:16:24.586 "nvme_admin": false, 00:16:24.586 "nvme_io": false, 00:16:24.586 "nvme_io_md": false, 00:16:24.586 "write_zeroes": true, 00:16:24.586 "zcopy": false, 00:16:24.586 "get_zone_info": false, 00:16:24.586 "zone_management": false, 00:16:24.586 "zone_append": false, 00:16:24.586 "compare": false, 00:16:24.586 "compare_and_write": false, 00:16:24.586 "abort": false, 00:16:24.586 "seek_hole": false, 00:16:24.586 "seek_data": false, 00:16:24.586 "copy": false, 00:16:24.586 "nvme_iov_md": false 00:16:24.586 }, 00:16:24.586 "memory_domains": [ 00:16:24.586 { 00:16:24.586 "dma_device_id": "system", 00:16:24.586 "dma_device_type": 1 00:16:24.586 }, 00:16:24.586 { 00:16:24.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.586 "dma_device_type": 2 00:16:24.586 }, 00:16:24.586 { 00:16:24.586 "dma_device_id": "system", 00:16:24.586 "dma_device_type": 1 00:16:24.586 }, 00:16:24.586 { 00:16:24.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.586 "dma_device_type": 2 00:16:24.586 } 00:16:24.586 ], 00:16:24.586 "driver_specific": { 00:16:24.586 "raid": { 00:16:24.586 "uuid": "8c9d68e8-cd30-4340-b6d0-9f5d95a4dd33", 00:16:24.586 "strip_size_kb": 64, 00:16:24.586 "state": "online", 00:16:24.586 "raid_level": "raid0", 00:16:24.586 "superblock": false, 00:16:24.586 "num_base_bdevs": 2, 00:16:24.586 "num_base_bdevs_discovered": 2, 00:16:24.586 "num_base_bdevs_operational": 2, 00:16:24.586 "base_bdevs_list": [ 00:16:24.586 { 00:16:24.586 "name": "BaseBdev1", 00:16:24.586 "uuid": "6bf9cfb4-3fcc-4169-9df1-8c74e1b265d4", 00:16:24.586 "is_configured": true, 00:16:24.586 "data_offset": 0, 00:16:24.586 "data_size": 65536 00:16:24.586 }, 00:16:24.586 { 00:16:24.586 "name": "BaseBdev2", 00:16:24.586 "uuid": "ca3d9281-5fd6-4ed5-bd2e-30f97922ca6e", 00:16:24.586 "is_configured": true, 00:16:24.586 "data_offset": 0, 00:16:24.586 "data_size": 65536 00:16:24.586 } 00:16:24.586 ] 00:16:24.586 } 00:16:24.586 } 00:16:24.586 }' 00:16:24.586 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.586 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:24.586 BaseBdev2' 00:16:24.586 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.586 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:24.586 21:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:24.846 21:29:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:24.846 "name": "BaseBdev1", 00:16:24.846 "aliases": [ 00:16:24.846 "6bf9cfb4-3fcc-4169-9df1-8c74e1b265d4" 00:16:24.846 ], 00:16:24.846 "product_name": "Malloc disk", 00:16:24.846 "block_size": 512, 00:16:24.846 "num_blocks": 65536, 00:16:24.846 "uuid": "6bf9cfb4-3fcc-4169-9df1-8c74e1b265d4", 00:16:24.846 "assigned_rate_limits": { 00:16:24.846 "rw_ios_per_sec": 0, 00:16:24.846 "rw_mbytes_per_sec": 0, 00:16:24.846 "r_mbytes_per_sec": 0, 00:16:24.846 "w_mbytes_per_sec": 0 00:16:24.846 }, 00:16:24.846 "claimed": true, 00:16:24.846 "claim_type": "exclusive_write", 00:16:24.846 "zoned": false, 00:16:24.846 "supported_io_types": { 00:16:24.846 "read": true, 00:16:24.846 "write": true, 00:16:24.846 "unmap": true, 00:16:24.846 "flush": true, 00:16:24.846 "reset": true, 00:16:24.846 "nvme_admin": false, 00:16:24.846 "nvme_io": false, 00:16:24.846 "nvme_io_md": false, 00:16:24.846 "write_zeroes": true, 00:16:24.846 "zcopy": true, 00:16:24.846 "get_zone_info": false, 00:16:24.846 "zone_management": false, 00:16:24.846 "zone_append": false, 00:16:24.846 "compare": false, 00:16:24.846 "compare_and_write": false, 00:16:24.846 "abort": true, 00:16:24.846 "seek_hole": false, 00:16:24.846 "seek_data": false, 00:16:24.846 "copy": true, 00:16:24.846 "nvme_iov_md": false 00:16:24.846 }, 00:16:24.846 "memory_domains": [ 00:16:24.846 { 00:16:24.846 "dma_device_id": "system", 00:16:24.846 "dma_device_type": 1 00:16:24.846 }, 00:16:24.846 { 00:16:24.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.846 "dma_device_type": 2 00:16:24.846 } 00:16:24.846 ], 00:16:24.846 "driver_specific": {} 00:16:24.846 }' 00:16:24.846 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.846 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.846 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.846 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.106 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.106 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.106 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.106 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.106 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.106 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.365 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.365 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.365 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.365 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:25.365 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.625 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.625 "name": "BaseBdev2", 00:16:25.625 "aliases": [ 00:16:25.625 "ca3d9281-5fd6-4ed5-bd2e-30f97922ca6e" 
00:16:25.625 ], 00:16:25.625 "product_name": "Malloc disk", 00:16:25.625 "block_size": 512, 00:16:25.625 "num_blocks": 65536, 00:16:25.625 "uuid": "ca3d9281-5fd6-4ed5-bd2e-30f97922ca6e", 00:16:25.625 "assigned_rate_limits": { 00:16:25.625 "rw_ios_per_sec": 0, 00:16:25.625 "rw_mbytes_per_sec": 0, 00:16:25.625 "r_mbytes_per_sec": 0, 00:16:25.625 "w_mbytes_per_sec": 0 00:16:25.625 }, 00:16:25.625 "claimed": true, 00:16:25.625 "claim_type": "exclusive_write", 00:16:25.625 "zoned": false, 00:16:25.625 "supported_io_types": { 00:16:25.625 "read": true, 00:16:25.625 "write": true, 00:16:25.625 "unmap": true, 00:16:25.625 "flush": true, 00:16:25.625 "reset": true, 00:16:25.625 "nvme_admin": false, 00:16:25.625 "nvme_io": false, 00:16:25.625 "nvme_io_md": false, 00:16:25.625 "write_zeroes": true, 00:16:25.625 "zcopy": true, 00:16:25.625 "get_zone_info": false, 00:16:25.625 "zone_management": false, 00:16:25.625 "zone_append": false, 00:16:25.625 "compare": false, 00:16:25.625 "compare_and_write": false, 00:16:25.625 "abort": true, 00:16:25.625 "seek_hole": false, 00:16:25.625 "seek_data": false, 00:16:25.625 "copy": true, 00:16:25.625 "nvme_iov_md": false 00:16:25.625 }, 00:16:25.625 "memory_domains": [ 00:16:25.625 { 00:16:25.625 "dma_device_id": "system", 00:16:25.625 "dma_device_type": 1 00:16:25.625 }, 00:16:25.625 { 00:16:25.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.625 "dma_device_type": 2 00:16:25.625 } 00:16:25.625 ], 00:16:25.625 "driver_specific": {} 00:16:25.625 }' 00:16:25.625 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.625 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.625 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.625 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.625 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.625 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.625 21:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.884 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.884 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.884 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.884 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.884 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.884 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:26.144 [2024-07-15 21:29:59.363706] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.144 [2024-07-15 21:29:59.363823] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.144 [2024-07-15 21:29:59.363903] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.144 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.405 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.405 "name": "Existed_Raid", 00:16:26.405 "uuid": "8c9d68e8-cd30-4340-b6d0-9f5d95a4dd33", 00:16:26.405 "strip_size_kb": 64, 00:16:26.405 "state": "offline", 00:16:26.405 "raid_level": "raid0", 00:16:26.405 "superblock": false, 00:16:26.405 "num_base_bdevs": 2, 00:16:26.405 "num_base_bdevs_discovered": 1, 00:16:26.405 "num_base_bdevs_operational": 1, 00:16:26.405 "base_bdevs_list": [ 00:16:26.405 { 00:16:26.405 "name": null, 00:16:26.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.405 "is_configured": false, 00:16:26.405 "data_offset": 0, 00:16:26.405 "data_size": 65536 00:16:26.405 }, 00:16:26.405 { 00:16:26.405 "name": "BaseBdev2", 00:16:26.405 "uuid": "ca3d9281-5fd6-4ed5-bd2e-30f97922ca6e", 00:16:26.405 "is_configured": true, 00:16:26.405 "data_offset": 0, 00:16:26.405 "data_size": 65536 00:16:26.405 } 00:16:26.405 ] 00:16:26.405 }' 00:16:26.405 21:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.405 21:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.343 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:27.343 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:27.343 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.343 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:27.343 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:27.343 21:30:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.343 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:27.603 [2024-07-15 21:30:00.759326] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.603 [2024-07-15 21:30:00.759493] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:27.603 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:27.603 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:27.603 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.603 21:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 121337 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 121337 ']' 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 121337 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121337 00:16:27.862 killing process with pid 121337 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121337' 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 121337 00:16:27.862 21:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 121337 00:16:27.862 [2024-07-15 21:30:01.138408] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.862 [2024-07-15 21:30:01.138562] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.243 ************************************ 00:16:29.243 END TEST raid_state_function_test 00:16:29.243 ************************************ 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:29.243 00:16:29.243 real 0m11.976s 00:16:29.243 user 0m20.947s 00:16:29.243 sys 0m1.460s 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.243 21:30:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:29.243 21:30:02 bdev_raid -- 
bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:16:29.243 21:30:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:29.243 21:30:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.243 21:30:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.243 ************************************ 00:16:29.243 START TEST raid_state_function_test_sb 00:16:29.243 ************************************ 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=121744 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 
00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121744' 00:16:29.243 Process raid pid: 121744 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 121744 /var/tmp/spdk-raid.sock 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 121744 ']' 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:29.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.243 21:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.243 [2024-07-15 21:30:02.592327] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:29.243 [2024-07-15 21:30:02.592572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.502 [2024-07-15 21:30:02.757042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.761 [2024-07-15 21:30:02.972582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.021 [2024-07-15 21:30:03.182258] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:30.375 [2024-07-15 21:30:03.651393] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.375 [2024-07-15 21:30:03.651567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.375 [2024-07-15 21:30:03.651606] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.375 [2024-07-15 21:30:03.651645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.375 21:30:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.375 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.635 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.635 "name": "Existed_Raid", 00:16:30.635 "uuid": "1af278f9-6fe3-49f2-8f16-e8f865c19ce1", 00:16:30.635 "strip_size_kb": 64, 00:16:30.635 "state": "configuring", 00:16:30.635 "raid_level": "raid0", 00:16:30.635 "superblock": true, 00:16:30.635 "num_base_bdevs": 2, 00:16:30.635 "num_base_bdevs_discovered": 0, 00:16:30.635 "num_base_bdevs_operational": 2, 00:16:30.635 "base_bdevs_list": [ 00:16:30.635 { 00:16:30.635 "name": "BaseBdev1", 00:16:30.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.635 "is_configured": false, 00:16:30.635 "data_offset": 0, 00:16:30.635 "data_size": 0 00:16:30.635 }, 00:16:30.635 { 00:16:30.635 "name": "BaseBdev2", 00:16:30.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.635 "is_configured": false, 00:16:30.635 "data_offset": 0, 00:16:30.635 "data_size": 0 00:16:30.635 } 00:16:30.635 ] 00:16:30.635 }' 00:16:30.635 21:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.635 21:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.205 21:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:31.465 [2024-07-15 21:30:04.781334] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.465 [2024-07-15 21:30:04.781452] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:31.465 21:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:31.724 [2024-07-15 21:30:04.977220] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.724 [2024-07-15 21:30:04.977367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.725 [2024-07-15 21:30:04.977399] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.725 [2024-07-15 21:30:04.977436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.725 21:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:31.983 [2024-07-15 21:30:05.237054] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.983 BaseBdev1 00:16:31.983 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:31.983 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:31.983 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:31.983 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:31.983 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:31.983 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:31.983 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.240 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:32.498 [ 00:16:32.498 { 00:16:32.498 "name": "BaseBdev1", 00:16:32.498 "aliases": [ 00:16:32.498 "3651cc77-50bb-4a60-b840-9b244d846b3a" 00:16:32.498 ], 00:16:32.498 "product_name": "Malloc disk", 00:16:32.498 "block_size": 512, 00:16:32.498 "num_blocks": 65536, 00:16:32.498 "uuid": "3651cc77-50bb-4a60-b840-9b244d846b3a", 00:16:32.498 "assigned_rate_limits": { 00:16:32.498 "rw_ios_per_sec": 0, 00:16:32.498 "rw_mbytes_per_sec": 0, 00:16:32.498 "r_mbytes_per_sec": 0, 00:16:32.498 "w_mbytes_per_sec": 0 00:16:32.498 }, 00:16:32.498 "claimed": true, 00:16:32.498 "claim_type": "exclusive_write", 00:16:32.498 "zoned": false, 00:16:32.498 "supported_io_types": { 00:16:32.498 "read": true, 00:16:32.498 "write": true, 00:16:32.498 "unmap": true, 00:16:32.498 "flush": true, 00:16:32.498 "reset": true, 00:16:32.498 "nvme_admin": false, 00:16:32.498 "nvme_io": false, 00:16:32.498 "nvme_io_md": false, 00:16:32.498 "write_zeroes": true, 00:16:32.498 "zcopy": true, 00:16:32.498 "get_zone_info": false, 00:16:32.498 "zone_management": false, 00:16:32.498 "zone_append": false, 00:16:32.498 "compare": false, 00:16:32.498 "compare_and_write": false, 00:16:32.498 "abort": true, 00:16:32.498 "seek_hole": false, 00:16:32.498 "seek_data": false, 00:16:32.498 "copy": true, 00:16:32.498 "nvme_iov_md": false 00:16:32.498 }, 00:16:32.498 "memory_domains": [ 00:16:32.498 { 00:16:32.498 "dma_device_id": "system", 00:16:32.498 "dma_device_type": 1 00:16:32.498 }, 00:16:32.498 { 00:16:32.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.498 "dma_device_type": 2 00:16:32.499 } 00:16:32.499 ], 00:16:32.499 "driver_specific": {} 00:16:32.499 } 00:16:32.499 ] 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.499 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.757 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.757 "name": "Existed_Raid", 00:16:32.757 "uuid": "bd32d57f-6f7a-4d91-9b80-72522e458382", 00:16:32.757 "strip_size_kb": 64, 00:16:32.757 "state": "configuring", 00:16:32.757 "raid_level": "raid0", 00:16:32.757 "superblock": true, 00:16:32.757 "num_base_bdevs": 2, 00:16:32.757 "num_base_bdevs_discovered": 1, 00:16:32.757 "num_base_bdevs_operational": 2, 00:16:32.757 "base_bdevs_list": [ 00:16:32.757 { 00:16:32.757 "name": "BaseBdev1", 00:16:32.757 "uuid": "3651cc77-50bb-4a60-b840-9b244d846b3a", 00:16:32.757 "is_configured": true, 00:16:32.757 "data_offset": 2048, 00:16:32.757 "data_size": 63488 00:16:32.757 }, 00:16:32.757 { 00:16:32.757 "name": "BaseBdev2", 00:16:32.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.757 "is_configured": false, 00:16:32.757 "data_offset": 0, 00:16:32.757 "data_size": 0 00:16:32.757 } 00:16:32.757 ] 00:16:32.757 }' 00:16:32.757 21:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.757 21:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.324 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:33.584 [2024-07-15 21:30:06.746758] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.584 [2024-07-15 21:30:06.746881] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:16:33.584 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:33.844 [2024-07-15 21:30:06.966437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.844 [2024-07-15 21:30:06.968434] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.844 [2024-07-15 21:30:06.968547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 
64 2 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.844 21:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.844 21:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.844 "name": "Existed_Raid", 00:16:33.844 "uuid": "65f7ba90-1ec1-4a8d-9ff8-4c5c0dd5fe9e", 00:16:33.844 "strip_size_kb": 64, 00:16:33.844 "state": "configuring", 00:16:33.844 "raid_level": "raid0", 00:16:33.844 "superblock": true, 00:16:33.844 "num_base_bdevs": 2, 00:16:33.844 "num_base_bdevs_discovered": 1, 00:16:33.844 "num_base_bdevs_operational": 2, 00:16:33.844 "base_bdevs_list": [ 00:16:33.844 { 00:16:33.844 "name": "BaseBdev1", 00:16:33.844 "uuid": "3651cc77-50bb-4a60-b840-9b244d846b3a", 00:16:33.844 "is_configured": true, 00:16:33.844 "data_offset": 2048, 00:16:33.844 "data_size": 63488 00:16:33.844 }, 00:16:33.844 { 00:16:33.844 "name": "BaseBdev2", 00:16:33.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.844 "is_configured": false, 00:16:33.844 "data_offset": 0, 00:16:33.844 "data_size": 0 00:16:33.844 } 00:16:33.844 ] 00:16:33.844 }' 00:16:33.844 21:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.844 21:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.781 21:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:34.781 [2024-07-15 21:30:08.148837] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.781 [2024-07-15 21:30:08.149158] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:34.781 [2024-07-15 21:30:08.149207] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:34.781 [2024-07-15 21:30:08.149384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:34.781 BaseBdev2 00:16:34.781 [2024-07-15 21:30:08.149709] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:34.781 [2024-07-15 21:30:08.149756] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x616000007580 00:16:34.781 [2024-07-15 21:30:08.149949] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.040 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:35.040 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:35.040 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.040 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:35.040 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.040 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.040 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.040 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.299 [ 00:16:35.299 { 00:16:35.299 "name": "BaseBdev2", 00:16:35.299 "aliases": [ 00:16:35.299 "963fd82b-42a7-4db4-9ae3-886224132848" 00:16:35.299 ], 00:16:35.299 "product_name": "Malloc disk", 00:16:35.299 "block_size": 512, 00:16:35.299 "num_blocks": 65536, 00:16:35.299 "uuid": "963fd82b-42a7-4db4-9ae3-886224132848", 00:16:35.299 "assigned_rate_limits": { 00:16:35.299 "rw_ios_per_sec": 0, 00:16:35.299 "rw_mbytes_per_sec": 0, 00:16:35.299 "r_mbytes_per_sec": 0, 00:16:35.299 "w_mbytes_per_sec": 0 00:16:35.299 }, 00:16:35.299 "claimed": true, 00:16:35.299 "claim_type": "exclusive_write", 00:16:35.299 "zoned": false, 00:16:35.299 "supported_io_types": { 00:16:35.299 "read": true, 00:16:35.299 "write": true, 00:16:35.299 "unmap": true, 00:16:35.299 "flush": true, 00:16:35.299 "reset": true, 00:16:35.299 "nvme_admin": false, 00:16:35.299 "nvme_io": false, 00:16:35.299 "nvme_io_md": false, 00:16:35.299 "write_zeroes": true, 00:16:35.299 "zcopy": true, 00:16:35.299 "get_zone_info": false, 00:16:35.299 "zone_management": false, 00:16:35.299 "zone_append": false, 00:16:35.299 "compare": false, 00:16:35.299 "compare_and_write": false, 00:16:35.299 "abort": true, 00:16:35.299 "seek_hole": false, 00:16:35.299 "seek_data": false, 00:16:35.299 "copy": true, 00:16:35.299 "nvme_iov_md": false 00:16:35.299 }, 00:16:35.299 "memory_domains": [ 00:16:35.299 { 00:16:35.299 "dma_device_id": "system", 00:16:35.299 "dma_device_type": 1 00:16:35.299 }, 00:16:35.299 { 00:16:35.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.299 "dma_device_type": 2 00:16:35.299 } 00:16:35.299 ], 00:16:35.299 "driver_specific": {} 00:16:35.299 } 00:16:35.299 ] 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.299 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.558 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.558 "name": "Existed_Raid", 00:16:35.558 "uuid": "65f7ba90-1ec1-4a8d-9ff8-4c5c0dd5fe9e", 00:16:35.558 "strip_size_kb": 64, 00:16:35.558 "state": "online", 00:16:35.558 "raid_level": "raid0", 00:16:35.558 "superblock": true, 00:16:35.558 "num_base_bdevs": 2, 00:16:35.558 "num_base_bdevs_discovered": 2, 00:16:35.558 "num_base_bdevs_operational": 2, 00:16:35.558 "base_bdevs_list": [ 00:16:35.558 { 00:16:35.558 "name": "BaseBdev1", 00:16:35.558 "uuid": "3651cc77-50bb-4a60-b840-9b244d846b3a", 00:16:35.558 "is_configured": true, 00:16:35.558 "data_offset": 2048, 00:16:35.558 "data_size": 63488 00:16:35.558 }, 00:16:35.558 { 00:16:35.558 "name": "BaseBdev2", 00:16:35.558 "uuid": "963fd82b-42a7-4db4-9ae3-886224132848", 00:16:35.558 "is_configured": true, 00:16:35.558 "data_offset": 2048, 00:16:35.558 "data_size": 63488 00:16:35.558 } 00:16:35.558 ] 00:16:35.558 }' 00:16:35.558 21:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.558 21:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:36.496 [2024-07-15 21:30:09.710564] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.496 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 
00:16:36.496 "name": "Existed_Raid", 00:16:36.496 "aliases": [ 00:16:36.496 "65f7ba90-1ec1-4a8d-9ff8-4c5c0dd5fe9e" 00:16:36.496 ], 00:16:36.496 "product_name": "Raid Volume", 00:16:36.496 "block_size": 512, 00:16:36.496 "num_blocks": 126976, 00:16:36.496 "uuid": "65f7ba90-1ec1-4a8d-9ff8-4c5c0dd5fe9e", 00:16:36.496 "assigned_rate_limits": { 00:16:36.496 "rw_ios_per_sec": 0, 00:16:36.496 "rw_mbytes_per_sec": 0, 00:16:36.496 "r_mbytes_per_sec": 0, 00:16:36.496 "w_mbytes_per_sec": 0 00:16:36.496 }, 00:16:36.496 "claimed": false, 00:16:36.496 "zoned": false, 00:16:36.496 "supported_io_types": { 00:16:36.496 "read": true, 00:16:36.496 "write": true, 00:16:36.496 "unmap": true, 00:16:36.496 "flush": true, 00:16:36.496 "reset": true, 00:16:36.496 "nvme_admin": false, 00:16:36.496 "nvme_io": false, 00:16:36.496 "nvme_io_md": false, 00:16:36.496 "write_zeroes": true, 00:16:36.496 "zcopy": false, 00:16:36.496 "get_zone_info": false, 00:16:36.496 "zone_management": false, 00:16:36.496 "zone_append": false, 00:16:36.496 "compare": false, 00:16:36.496 "compare_and_write": false, 00:16:36.496 "abort": false, 00:16:36.496 "seek_hole": false, 00:16:36.497 "seek_data": false, 00:16:36.497 "copy": false, 00:16:36.497 "nvme_iov_md": false 00:16:36.497 }, 00:16:36.497 "memory_domains": [ 00:16:36.497 { 00:16:36.497 "dma_device_id": "system", 00:16:36.497 "dma_device_type": 1 00:16:36.497 }, 00:16:36.497 { 00:16:36.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.497 "dma_device_type": 2 00:16:36.497 }, 00:16:36.497 { 00:16:36.497 "dma_device_id": "system", 00:16:36.497 "dma_device_type": 1 00:16:36.497 }, 00:16:36.497 { 00:16:36.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.497 "dma_device_type": 2 00:16:36.497 } 00:16:36.497 ], 00:16:36.497 "driver_specific": { 00:16:36.497 "raid": { 00:16:36.497 "uuid": "65f7ba90-1ec1-4a8d-9ff8-4c5c0dd5fe9e", 00:16:36.497 "strip_size_kb": 64, 00:16:36.497 "state": "online", 00:16:36.497 "raid_level": "raid0", 00:16:36.497 "superblock": true, 00:16:36.497 "num_base_bdevs": 2, 00:16:36.497 "num_base_bdevs_discovered": 2, 00:16:36.497 "num_base_bdevs_operational": 2, 00:16:36.497 "base_bdevs_list": [ 00:16:36.497 { 00:16:36.497 "name": "BaseBdev1", 00:16:36.497 "uuid": "3651cc77-50bb-4a60-b840-9b244d846b3a", 00:16:36.497 "is_configured": true, 00:16:36.497 "data_offset": 2048, 00:16:36.497 "data_size": 63488 00:16:36.497 }, 00:16:36.497 { 00:16:36.497 "name": "BaseBdev2", 00:16:36.497 "uuid": "963fd82b-42a7-4db4-9ae3-886224132848", 00:16:36.497 "is_configured": true, 00:16:36.497 "data_offset": 2048, 00:16:36.497 "data_size": 63488 00:16:36.497 } 00:16:36.497 ] 00:16:36.497 } 00:16:36.497 } 00:16:36.497 }' 00:16:36.497 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.497 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:36.497 BaseBdev2' 00:16:36.497 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.497 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:36.497 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.756 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.756 "name": "BaseBdev1", 
00:16:36.756 "aliases": [ 00:16:36.756 "3651cc77-50bb-4a60-b840-9b244d846b3a" 00:16:36.756 ], 00:16:36.756 "product_name": "Malloc disk", 00:16:36.756 "block_size": 512, 00:16:36.756 "num_blocks": 65536, 00:16:36.756 "uuid": "3651cc77-50bb-4a60-b840-9b244d846b3a", 00:16:36.756 "assigned_rate_limits": { 00:16:36.756 "rw_ios_per_sec": 0, 00:16:36.756 "rw_mbytes_per_sec": 0, 00:16:36.756 "r_mbytes_per_sec": 0, 00:16:36.756 "w_mbytes_per_sec": 0 00:16:36.756 }, 00:16:36.756 "claimed": true, 00:16:36.756 "claim_type": "exclusive_write", 00:16:36.756 "zoned": false, 00:16:36.756 "supported_io_types": { 00:16:36.756 "read": true, 00:16:36.756 "write": true, 00:16:36.756 "unmap": true, 00:16:36.756 "flush": true, 00:16:36.756 "reset": true, 00:16:36.756 "nvme_admin": false, 00:16:36.756 "nvme_io": false, 00:16:36.756 "nvme_io_md": false, 00:16:36.756 "write_zeroes": true, 00:16:36.756 "zcopy": true, 00:16:36.756 "get_zone_info": false, 00:16:36.756 "zone_management": false, 00:16:36.756 "zone_append": false, 00:16:36.756 "compare": false, 00:16:36.756 "compare_and_write": false, 00:16:36.756 "abort": true, 00:16:36.756 "seek_hole": false, 00:16:36.756 "seek_data": false, 00:16:36.756 "copy": true, 00:16:36.756 "nvme_iov_md": false 00:16:36.756 }, 00:16:36.756 "memory_domains": [ 00:16:36.756 { 00:16:36.756 "dma_device_id": "system", 00:16:36.756 "dma_device_type": 1 00:16:36.756 }, 00:16:36.756 { 00:16:36.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.756 "dma_device_type": 2 00:16:36.756 } 00:16:36.756 ], 00:16:36.756 "driver_specific": {} 00:16:36.756 }' 00:16:36.756 21:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.756 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.756 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.756 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.015 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.015 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.015 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.015 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.015 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.015 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.015 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.299 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.299 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:37.299 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:37.299 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:37.558 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:37.558 "name": "BaseBdev2", 00:16:37.558 "aliases": [ 00:16:37.558 "963fd82b-42a7-4db4-9ae3-886224132848" 00:16:37.558 ], 00:16:37.558 "product_name": "Malloc disk", 00:16:37.558 
"block_size": 512, 00:16:37.558 "num_blocks": 65536, 00:16:37.558 "uuid": "963fd82b-42a7-4db4-9ae3-886224132848", 00:16:37.558 "assigned_rate_limits": { 00:16:37.558 "rw_ios_per_sec": 0, 00:16:37.558 "rw_mbytes_per_sec": 0, 00:16:37.558 "r_mbytes_per_sec": 0, 00:16:37.558 "w_mbytes_per_sec": 0 00:16:37.558 }, 00:16:37.558 "claimed": true, 00:16:37.558 "claim_type": "exclusive_write", 00:16:37.558 "zoned": false, 00:16:37.558 "supported_io_types": { 00:16:37.558 "read": true, 00:16:37.558 "write": true, 00:16:37.558 "unmap": true, 00:16:37.558 "flush": true, 00:16:37.558 "reset": true, 00:16:37.558 "nvme_admin": false, 00:16:37.558 "nvme_io": false, 00:16:37.558 "nvme_io_md": false, 00:16:37.558 "write_zeroes": true, 00:16:37.558 "zcopy": true, 00:16:37.558 "get_zone_info": false, 00:16:37.558 "zone_management": false, 00:16:37.558 "zone_append": false, 00:16:37.558 "compare": false, 00:16:37.558 "compare_and_write": false, 00:16:37.558 "abort": true, 00:16:37.558 "seek_hole": false, 00:16:37.558 "seek_data": false, 00:16:37.558 "copy": true, 00:16:37.558 "nvme_iov_md": false 00:16:37.558 }, 00:16:37.558 "memory_domains": [ 00:16:37.558 { 00:16:37.558 "dma_device_id": "system", 00:16:37.558 "dma_device_type": 1 00:16:37.558 }, 00:16:37.558 { 00:16:37.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.558 "dma_device_type": 2 00:16:37.558 } 00:16:37.558 ], 00:16:37.558 "driver_specific": {} 00:16:37.558 }' 00:16:37.559 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.559 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.559 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.559 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.559 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.559 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.559 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.818 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.818 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.818 21:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.818 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.818 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.818 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:38.076 [2024-07-15 21:30:11.323661] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:38.076 [2024-07-15 21:30:11.323780] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.076 [2024-07-15 21:30:11.323881] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.336 "name": "Existed_Raid", 00:16:38.336 "uuid": "65f7ba90-1ec1-4a8d-9ff8-4c5c0dd5fe9e", 00:16:38.336 "strip_size_kb": 64, 00:16:38.336 "state": "offline", 00:16:38.336 "raid_level": "raid0", 00:16:38.336 "superblock": true, 00:16:38.336 "num_base_bdevs": 2, 00:16:38.336 "num_base_bdevs_discovered": 1, 00:16:38.336 "num_base_bdevs_operational": 1, 00:16:38.336 "base_bdevs_list": [ 00:16:38.336 { 00:16:38.336 "name": null, 00:16:38.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.336 "is_configured": false, 00:16:38.336 "data_offset": 2048, 00:16:38.336 "data_size": 63488 00:16:38.336 }, 00:16:38.336 { 00:16:38.336 "name": "BaseBdev2", 00:16:38.336 "uuid": "963fd82b-42a7-4db4-9ae3-886224132848", 00:16:38.336 "is_configured": true, 00:16:38.336 "data_offset": 2048, 00:16:38.336 "data_size": 63488 00:16:38.336 } 00:16:38.336 ] 00:16:38.336 }' 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.336 21:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.275 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:39.275 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:39.275 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:39.275 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.275 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 
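The assertion being traced here reduces to a single RPC plus a jq filter. A minimal sketch of that state check, reusing the rpc.py path, socket and filter exactly as they appear in the trace (the check_raid_state helper name is illustrative, not something defined in the SPDK scripts):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Query all raid bdevs over the test's JSON-RPC socket, pick out the one under
# test and compare its reported state against the expected value.
check_raid_state() {
    local name=$1 expected=$2
    local state
    state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r ".[] | select(.name == \"$name\").state")
    [[ "$state" == "$expected" ]]
}

# raid0 carries no redundancy, so once BaseBdev1 is deleted above the array is
# expected to be reported as offline:
check_raid_state Existed_Raid offline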
00:16:39.275 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.275 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:39.533 [2024-07-15 21:30:12.743648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:39.533 [2024-07-15 21:30:12.743798] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:39.533 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:39.533 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:39.533 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.533 21:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 121744 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 121744 ']' 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 121744 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121744 00:16:39.792 killing process with pid 121744 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121744' 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 121744 00:16:39.792 21:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 121744 00:16:39.792 [2024-07-15 21:30:13.127440] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.792 [2024-07-15 21:30:13.127596] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.173 ************************************ 00:16:41.173 END TEST raid_state_function_test_sb 00:16:41.173 ************************************ 00:16:41.173 21:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:41.173 00:16:41.173 real 0m11.950s 00:16:41.173 user 0m20.946s 00:16:41.173 sys 0m1.338s 00:16:41.173 21:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.173 21:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.173 21:30:14 
bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:41.173 21:30:14 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:16:41.173 21:30:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:41.173 21:30:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.173 21:30:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.173 ************************************ 00:16:41.173 START TEST raid_superblock_test 00:16:41.173 ************************************ 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=122141 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 122141 /var/tmp/spdk-raid.sock 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 122141 ']' 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:41.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
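The wait being logged at this point is the usual autotest startup pattern: launch bdev_svc on the RAID test's private RPC socket and block until it answers. A rough standalone equivalent, assuming the same binary and socket shown in the trace (waitforlisten itself comes from autotest_common.sh; the polling loop below is only an illustrative stand-in for it):

bdev_svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Start the bdev service app with bdev_raid debug logging on the test socket.
"$bdev_svc" -r "$sock" -L bdev_raid &
raid_pid=$!

# Poll until the reactor is up and the RPC server on the socket responds.
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done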
00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.173 21:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.432 [2024-07-15 21:30:14.597029] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:41.432 [2024-07-15 21:30:14.597381] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122141 ] 00:16:41.432 [2024-07-15 21:30:14.766494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.698 [2024-07-15 21:30:14.971336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.965 [2024-07-15 21:30:15.177307] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.224 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:42.483 malloc1 00:16:42.483 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.742 [2024-07-15 21:30:15.898359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.742 [2024-07-15 21:30:15.898542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.742 [2024-07-15 21:30:15.898618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:42.742 [2024-07-15 21:30:15.898672] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.742 [2024-07-15 21:30:15.900718] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.742 [2024-07-15 21:30:15.900809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.742 pt1 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc2 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.742 21:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:43.001 malloc2 00:16:43.001 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:43.001 [2024-07-15 21:30:16.374641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:43.001 [2024-07-15 21:30:16.374846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.001 [2024-07-15 21:30:16.374928] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:16:43.001 [2024-07-15 21:30:16.374970] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.001 [2024-07-15 21:30:16.377153] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.001 [2024-07-15 21:30:16.377265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:43.260 pt2 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:43.260 [2024-07-15 21:30:16.590344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:43.260 [2024-07-15 21:30:16.592331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:43.260 [2024-07-15 21:30:16.592637] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:16:43.260 [2024-07-15 21:30:16.592678] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:43.260 [2024-07-15 21:30:16.592868] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:43.260 [2024-07-15 21:30:16.593212] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:16:43.260 [2024-07-15 21:30:16.593257] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:16:43.260 [2024-07-15 21:30:16.593486] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=online 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.260 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.518 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.518 "name": "raid_bdev1", 00:16:43.518 "uuid": "60d587b0-70a7-4345-9ba1-65a1e587c1a1", 00:16:43.518 "strip_size_kb": 64, 00:16:43.518 "state": "online", 00:16:43.518 "raid_level": "raid0", 00:16:43.518 "superblock": true, 00:16:43.518 "num_base_bdevs": 2, 00:16:43.518 "num_base_bdevs_discovered": 2, 00:16:43.518 "num_base_bdevs_operational": 2, 00:16:43.518 "base_bdevs_list": [ 00:16:43.518 { 00:16:43.518 "name": "pt1", 00:16:43.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.518 "is_configured": true, 00:16:43.518 "data_offset": 2048, 00:16:43.518 "data_size": 63488 00:16:43.518 }, 00:16:43.518 { 00:16:43.518 "name": "pt2", 00:16:43.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.518 "is_configured": true, 00:16:43.518 "data_offset": 2048, 00:16:43.518 "data_size": 63488 00:16:43.518 } 00:16:43.518 ] 00:16:43.518 }' 00:16:43.518 21:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.518 21:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.086 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:44.086 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:44.086 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:44.086 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:44.086 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:44.086 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:44.086 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:44.086 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:44.345 [2024-07-15 21:30:17.664811] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.345 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:44.345 "name": "raid_bdev1", 00:16:44.345 "aliases": [ 00:16:44.345 "60d587b0-70a7-4345-9ba1-65a1e587c1a1" 00:16:44.345 ], 00:16:44.345 "product_name": "Raid Volume", 
00:16:44.345 "block_size": 512, 00:16:44.345 "num_blocks": 126976, 00:16:44.345 "uuid": "60d587b0-70a7-4345-9ba1-65a1e587c1a1", 00:16:44.345 "assigned_rate_limits": { 00:16:44.345 "rw_ios_per_sec": 0, 00:16:44.345 "rw_mbytes_per_sec": 0, 00:16:44.345 "r_mbytes_per_sec": 0, 00:16:44.345 "w_mbytes_per_sec": 0 00:16:44.345 }, 00:16:44.345 "claimed": false, 00:16:44.345 "zoned": false, 00:16:44.345 "supported_io_types": { 00:16:44.345 "read": true, 00:16:44.345 "write": true, 00:16:44.345 "unmap": true, 00:16:44.345 "flush": true, 00:16:44.345 "reset": true, 00:16:44.345 "nvme_admin": false, 00:16:44.345 "nvme_io": false, 00:16:44.345 "nvme_io_md": false, 00:16:44.345 "write_zeroes": true, 00:16:44.345 "zcopy": false, 00:16:44.345 "get_zone_info": false, 00:16:44.345 "zone_management": false, 00:16:44.345 "zone_append": false, 00:16:44.345 "compare": false, 00:16:44.345 "compare_and_write": false, 00:16:44.345 "abort": false, 00:16:44.345 "seek_hole": false, 00:16:44.345 "seek_data": false, 00:16:44.345 "copy": false, 00:16:44.345 "nvme_iov_md": false 00:16:44.345 }, 00:16:44.345 "memory_domains": [ 00:16:44.345 { 00:16:44.345 "dma_device_id": "system", 00:16:44.345 "dma_device_type": 1 00:16:44.345 }, 00:16:44.345 { 00:16:44.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.345 "dma_device_type": 2 00:16:44.345 }, 00:16:44.345 { 00:16:44.345 "dma_device_id": "system", 00:16:44.345 "dma_device_type": 1 00:16:44.345 }, 00:16:44.345 { 00:16:44.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.345 "dma_device_type": 2 00:16:44.345 } 00:16:44.345 ], 00:16:44.345 "driver_specific": { 00:16:44.345 "raid": { 00:16:44.345 "uuid": "60d587b0-70a7-4345-9ba1-65a1e587c1a1", 00:16:44.345 "strip_size_kb": 64, 00:16:44.345 "state": "online", 00:16:44.345 "raid_level": "raid0", 00:16:44.345 "superblock": true, 00:16:44.345 "num_base_bdevs": 2, 00:16:44.345 "num_base_bdevs_discovered": 2, 00:16:44.345 "num_base_bdevs_operational": 2, 00:16:44.345 "base_bdevs_list": [ 00:16:44.345 { 00:16:44.345 "name": "pt1", 00:16:44.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.345 "is_configured": true, 00:16:44.345 "data_offset": 2048, 00:16:44.345 "data_size": 63488 00:16:44.345 }, 00:16:44.345 { 00:16:44.345 "name": "pt2", 00:16:44.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.345 "is_configured": true, 00:16:44.345 "data_offset": 2048, 00:16:44.345 "data_size": 63488 00:16:44.345 } 00:16:44.345 ] 00:16:44.345 } 00:16:44.345 } 00:16:44.345 }' 00:16:44.345 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.602 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:44.602 pt2' 00:16:44.602 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:44.602 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:44.602 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:44.862 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:44.862 "name": "pt1", 00:16:44.862 "aliases": [ 00:16:44.862 "00000000-0000-0000-0000-000000000001" 00:16:44.862 ], 00:16:44.862 "product_name": "passthru", 00:16:44.862 "block_size": 512, 00:16:44.862 "num_blocks": 65536, 00:16:44.862 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:44.862 "assigned_rate_limits": { 00:16:44.862 "rw_ios_per_sec": 0, 00:16:44.862 "rw_mbytes_per_sec": 0, 00:16:44.862 "r_mbytes_per_sec": 0, 00:16:44.862 "w_mbytes_per_sec": 0 00:16:44.862 }, 00:16:44.862 "claimed": true, 00:16:44.862 "claim_type": "exclusive_write", 00:16:44.862 "zoned": false, 00:16:44.862 "supported_io_types": { 00:16:44.862 "read": true, 00:16:44.862 "write": true, 00:16:44.862 "unmap": true, 00:16:44.862 "flush": true, 00:16:44.862 "reset": true, 00:16:44.862 "nvme_admin": false, 00:16:44.862 "nvme_io": false, 00:16:44.862 "nvme_io_md": false, 00:16:44.862 "write_zeroes": true, 00:16:44.862 "zcopy": true, 00:16:44.862 "get_zone_info": false, 00:16:44.862 "zone_management": false, 00:16:44.862 "zone_append": false, 00:16:44.862 "compare": false, 00:16:44.862 "compare_and_write": false, 00:16:44.862 "abort": true, 00:16:44.862 "seek_hole": false, 00:16:44.862 "seek_data": false, 00:16:44.862 "copy": true, 00:16:44.862 "nvme_iov_md": false 00:16:44.862 }, 00:16:44.862 "memory_domains": [ 00:16:44.862 { 00:16:44.862 "dma_device_id": "system", 00:16:44.862 "dma_device_type": 1 00:16:44.862 }, 00:16:44.862 { 00:16:44.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.862 "dma_device_type": 2 00:16:44.862 } 00:16:44.862 ], 00:16:44.862 "driver_specific": { 00:16:44.862 "passthru": { 00:16:44.862 "name": "pt1", 00:16:44.862 "base_bdev_name": "malloc1" 00:16:44.862 } 00:16:44.862 } 00:16:44.862 }' 00:16:44.862 21:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.862 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.862 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:44.862 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.862 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.862 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:44.862 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.121 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.121 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.121 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.121 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.121 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.121 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:45.121 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:45.121 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:45.378 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:45.378 "name": "pt2", 00:16:45.378 "aliases": [ 00:16:45.378 "00000000-0000-0000-0000-000000000002" 00:16:45.378 ], 00:16:45.378 "product_name": "passthru", 00:16:45.378 "block_size": 512, 00:16:45.378 "num_blocks": 65536, 00:16:45.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.378 "assigned_rate_limits": { 00:16:45.378 "rw_ios_per_sec": 0, 00:16:45.378 "rw_mbytes_per_sec": 0, 
00:16:45.378 "r_mbytes_per_sec": 0, 00:16:45.378 "w_mbytes_per_sec": 0 00:16:45.378 }, 00:16:45.378 "claimed": true, 00:16:45.378 "claim_type": "exclusive_write", 00:16:45.378 "zoned": false, 00:16:45.378 "supported_io_types": { 00:16:45.378 "read": true, 00:16:45.378 "write": true, 00:16:45.378 "unmap": true, 00:16:45.378 "flush": true, 00:16:45.378 "reset": true, 00:16:45.378 "nvme_admin": false, 00:16:45.378 "nvme_io": false, 00:16:45.378 "nvme_io_md": false, 00:16:45.378 "write_zeroes": true, 00:16:45.378 "zcopy": true, 00:16:45.378 "get_zone_info": false, 00:16:45.378 "zone_management": false, 00:16:45.378 "zone_append": false, 00:16:45.378 "compare": false, 00:16:45.378 "compare_and_write": false, 00:16:45.378 "abort": true, 00:16:45.378 "seek_hole": false, 00:16:45.378 "seek_data": false, 00:16:45.378 "copy": true, 00:16:45.378 "nvme_iov_md": false 00:16:45.378 }, 00:16:45.378 "memory_domains": [ 00:16:45.378 { 00:16:45.378 "dma_device_id": "system", 00:16:45.378 "dma_device_type": 1 00:16:45.378 }, 00:16:45.378 { 00:16:45.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.378 "dma_device_type": 2 00:16:45.378 } 00:16:45.378 ], 00:16:45.378 "driver_specific": { 00:16:45.378 "passthru": { 00:16:45.378 "name": "pt2", 00:16:45.378 "base_bdev_name": "malloc2" 00:16:45.378 } 00:16:45.378 } 00:16:45.378 }' 00:16:45.378 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.378 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.636 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:45.636 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.636 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.636 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:45.636 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.636 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.636 21:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.636 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.894 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.894 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.894 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:45.894 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:46.151 [2024-07-15 21:30:19.322189] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.151 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=60d587b0-70a7-4345-9ba1-65a1e587c1a1 00:16:46.151 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 60d587b0-70a7-4345-9ba1-65a1e587c1a1 ']' 00:16:46.151 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:46.447 [2024-07-15 21:30:19.541521] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.447 [2024-07-15 21:30:19.541629] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.447 [2024-07-15 21:30:19.541744] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.447 [2024-07-15 21:30:19.541814] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.447 [2024-07-15 21:30:19.541843] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:16:46.447 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:46.447 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.447 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:46.447 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:46.447 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.447 21:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:46.705 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.706 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:46.963 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:46.963 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:47.221 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:47.479 [2024-07-15 21:30:20.643731] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:47.479 [2024-07-15 21:30:20.645709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:47.479 [2024-07-15 21:30:20.645833] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:47.479 [2024-07-15 21:30:20.646321] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:47.479 [2024-07-15 21:30:20.646471] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.479 [2024-07-15 21:30:20.646509] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:16:47.479 request: 00:16:47.479 { 00:16:47.479 "name": "raid_bdev1", 00:16:47.479 "raid_level": "raid0", 00:16:47.479 "base_bdevs": [ 00:16:47.479 "malloc1", 00:16:47.479 "malloc2" 00:16:47.479 ], 00:16:47.479 "strip_size_kb": 64, 00:16:47.479 "superblock": false, 00:16:47.479 "method": "bdev_raid_create", 00:16:47.479 "req_id": 1 00:16:47.479 } 00:16:47.479 Got JSON-RPC error response 00:16:47.479 response: 00:16:47.479 { 00:16:47.479 "code": -17, 00:16:47.479 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:47.479 } 00:16:47.479 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:47.479 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:47.479 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:47.479 21:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:47.479 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:47.479 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.738 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:47.738 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:47.738 21:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.738 [2024-07-15 21:30:21.110947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.738 [2024-07-15 21:30:21.111359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.738 [2024-07-15 21:30:21.111502] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:47.738 [2024-07-15 21:30:21.111627] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.738 [2024-07-15 21:30:21.113953] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.738 [2024-07-15 21:30:21.114198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
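What this stretch of the trace exercises is re-assembly from the on-disk superblock: the explicit bdev_raid_create on the raw malloc bdevs is rejected with -17 ("File exists") above, but re-creating the passthru base bdevs lets examine find the superblock and rebuild the array on its own. Condensed to the bare RPC calls from the trace, with the jq filters shortened to pull out just the state field:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Tear the stack down: the raid bdev first, then its passthru base bdevs.
rpc bdev_raid_delete raid_bdev1
rpc bdev_passthru_delete pt1
rpc bdev_passthru_delete pt2

# Re-creating pt1 makes examine find the raid superblock and claim the bdev;
# the array comes back in "configuring" state with one of its two members.
rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # configuring

# Adding the second member completes the set and the array goes online.
rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # online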
00:16:47.997 [2024-07-15 21:30:21.114426] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:47.997 [2024-07-15 21:30:21.114521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.997 pt1 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.997 "name": "raid_bdev1", 00:16:47.997 "uuid": "60d587b0-70a7-4345-9ba1-65a1e587c1a1", 00:16:47.997 "strip_size_kb": 64, 00:16:47.997 "state": "configuring", 00:16:47.997 "raid_level": "raid0", 00:16:47.997 "superblock": true, 00:16:47.997 "num_base_bdevs": 2, 00:16:47.997 "num_base_bdevs_discovered": 1, 00:16:47.997 "num_base_bdevs_operational": 2, 00:16:47.997 "base_bdevs_list": [ 00:16:47.997 { 00:16:47.997 "name": "pt1", 00:16:47.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.997 "is_configured": true, 00:16:47.997 "data_offset": 2048, 00:16:47.997 "data_size": 63488 00:16:47.997 }, 00:16:47.997 { 00:16:47.997 "name": null, 00:16:47.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.997 "is_configured": false, 00:16:47.997 "data_offset": 2048, 00:16:47.997 "data_size": 63488 00:16:47.997 } 00:16:47.997 ] 00:16:47.997 }' 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.997 21:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.942 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:48.942 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:48.942 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:48.942 21:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.942 [2024-07-15 21:30:22.233038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.942 [2024-07-15 21:30:22.233617] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.942 [2024-07-15 21:30:22.233774] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:48.942 [2024-07-15 21:30:22.233888] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.942 [2024-07-15 21:30:22.234495] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.942 [2024-07-15 21:30:22.234679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.942 [2024-07-15 21:30:22.234922] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:48.942 [2024-07-15 21:30:22.234984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.942 [2024-07-15 21:30:22.235126] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:16:48.942 [2024-07-15 21:30:22.235161] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:48.942 [2024-07-15 21:30:22.235311] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:48.942 [2024-07-15 21:30:22.235679] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:16:48.942 [2024-07-15 21:30:22.235728] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:16:48.942 [2024-07-15 21:30:22.235912] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.942 pt2 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.942 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.200 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.200 "name": "raid_bdev1", 00:16:49.200 "uuid": "60d587b0-70a7-4345-9ba1-65a1e587c1a1", 00:16:49.200 "strip_size_kb": 64, 00:16:49.200 "state": "online", 00:16:49.200 "raid_level": "raid0", 00:16:49.200 "superblock": true, 
00:16:49.200 "num_base_bdevs": 2, 00:16:49.200 "num_base_bdevs_discovered": 2, 00:16:49.200 "num_base_bdevs_operational": 2, 00:16:49.200 "base_bdevs_list": [ 00:16:49.200 { 00:16:49.200 "name": "pt1", 00:16:49.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.200 "is_configured": true, 00:16:49.200 "data_offset": 2048, 00:16:49.200 "data_size": 63488 00:16:49.200 }, 00:16:49.200 { 00:16:49.200 "name": "pt2", 00:16:49.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.200 "is_configured": true, 00:16:49.200 "data_offset": 2048, 00:16:49.200 "data_size": 63488 00:16:49.200 } 00:16:49.200 ] 00:16:49.200 }' 00:16:49.200 21:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.200 21:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:50.138 [2024-07-15 21:30:23.387398] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:50.138 "name": "raid_bdev1", 00:16:50.138 "aliases": [ 00:16:50.138 "60d587b0-70a7-4345-9ba1-65a1e587c1a1" 00:16:50.138 ], 00:16:50.138 "product_name": "Raid Volume", 00:16:50.138 "block_size": 512, 00:16:50.138 "num_blocks": 126976, 00:16:50.138 "uuid": "60d587b0-70a7-4345-9ba1-65a1e587c1a1", 00:16:50.138 "assigned_rate_limits": { 00:16:50.138 "rw_ios_per_sec": 0, 00:16:50.138 "rw_mbytes_per_sec": 0, 00:16:50.138 "r_mbytes_per_sec": 0, 00:16:50.138 "w_mbytes_per_sec": 0 00:16:50.138 }, 00:16:50.138 "claimed": false, 00:16:50.138 "zoned": false, 00:16:50.138 "supported_io_types": { 00:16:50.138 "read": true, 00:16:50.138 "write": true, 00:16:50.138 "unmap": true, 00:16:50.138 "flush": true, 00:16:50.138 "reset": true, 00:16:50.138 "nvme_admin": false, 00:16:50.138 "nvme_io": false, 00:16:50.138 "nvme_io_md": false, 00:16:50.138 "write_zeroes": true, 00:16:50.138 "zcopy": false, 00:16:50.138 "get_zone_info": false, 00:16:50.138 "zone_management": false, 00:16:50.138 "zone_append": false, 00:16:50.138 "compare": false, 00:16:50.138 "compare_and_write": false, 00:16:50.138 "abort": false, 00:16:50.138 "seek_hole": false, 00:16:50.138 "seek_data": false, 00:16:50.138 "copy": false, 00:16:50.138 "nvme_iov_md": false 00:16:50.138 }, 00:16:50.138 "memory_domains": [ 00:16:50.138 { 00:16:50.138 "dma_device_id": "system", 00:16:50.138 "dma_device_type": 1 00:16:50.138 }, 00:16:50.138 { 00:16:50.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.138 "dma_device_type": 2 00:16:50.138 }, 00:16:50.138 { 00:16:50.138 "dma_device_id": "system", 00:16:50.138 
"dma_device_type": 1 00:16:50.138 }, 00:16:50.138 { 00:16:50.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.138 "dma_device_type": 2 00:16:50.138 } 00:16:50.138 ], 00:16:50.138 "driver_specific": { 00:16:50.138 "raid": { 00:16:50.138 "uuid": "60d587b0-70a7-4345-9ba1-65a1e587c1a1", 00:16:50.138 "strip_size_kb": 64, 00:16:50.138 "state": "online", 00:16:50.138 "raid_level": "raid0", 00:16:50.138 "superblock": true, 00:16:50.138 "num_base_bdevs": 2, 00:16:50.138 "num_base_bdevs_discovered": 2, 00:16:50.138 "num_base_bdevs_operational": 2, 00:16:50.138 "base_bdevs_list": [ 00:16:50.138 { 00:16:50.138 "name": "pt1", 00:16:50.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.138 "is_configured": true, 00:16:50.138 "data_offset": 2048, 00:16:50.138 "data_size": 63488 00:16:50.138 }, 00:16:50.138 { 00:16:50.138 "name": "pt2", 00:16:50.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.138 "is_configured": true, 00:16:50.138 "data_offset": 2048, 00:16:50.138 "data_size": 63488 00:16:50.138 } 00:16:50.138 ] 00:16:50.138 } 00:16:50.138 } 00:16:50.138 }' 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:50.138 pt2' 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:50.138 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.400 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:50.400 "name": "pt1", 00:16:50.400 "aliases": [ 00:16:50.400 "00000000-0000-0000-0000-000000000001" 00:16:50.400 ], 00:16:50.400 "product_name": "passthru", 00:16:50.400 "block_size": 512, 00:16:50.400 "num_blocks": 65536, 00:16:50.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.400 "assigned_rate_limits": { 00:16:50.400 "rw_ios_per_sec": 0, 00:16:50.400 "rw_mbytes_per_sec": 0, 00:16:50.400 "r_mbytes_per_sec": 0, 00:16:50.400 "w_mbytes_per_sec": 0 00:16:50.400 }, 00:16:50.400 "claimed": true, 00:16:50.400 "claim_type": "exclusive_write", 00:16:50.400 "zoned": false, 00:16:50.400 "supported_io_types": { 00:16:50.400 "read": true, 00:16:50.400 "write": true, 00:16:50.400 "unmap": true, 00:16:50.400 "flush": true, 00:16:50.400 "reset": true, 00:16:50.400 "nvme_admin": false, 00:16:50.400 "nvme_io": false, 00:16:50.400 "nvme_io_md": false, 00:16:50.400 "write_zeroes": true, 00:16:50.400 "zcopy": true, 00:16:50.400 "get_zone_info": false, 00:16:50.400 "zone_management": false, 00:16:50.400 "zone_append": false, 00:16:50.400 "compare": false, 00:16:50.400 "compare_and_write": false, 00:16:50.400 "abort": true, 00:16:50.400 "seek_hole": false, 00:16:50.400 "seek_data": false, 00:16:50.400 "copy": true, 00:16:50.400 "nvme_iov_md": false 00:16:50.400 }, 00:16:50.400 "memory_domains": [ 00:16:50.400 { 00:16:50.400 "dma_device_id": "system", 00:16:50.400 "dma_device_type": 1 00:16:50.400 }, 00:16:50.400 { 00:16:50.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.400 "dma_device_type": 2 00:16:50.400 } 00:16:50.400 ], 00:16:50.400 "driver_specific": { 00:16:50.400 "passthru": { 00:16:50.400 "name": "pt1", 00:16:50.400 "base_bdev_name": "malloc1" 
00:16:50.400 } 00:16:50.400 } 00:16:50.400 }' 00:16:50.400 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.400 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.678 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:50.678 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.678 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.678 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:50.678 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.678 21:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.678 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.678 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.937 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.937 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.937 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.937 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:50.937 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.195 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.195 "name": "pt2", 00:16:51.195 "aliases": [ 00:16:51.195 "00000000-0000-0000-0000-000000000002" 00:16:51.195 ], 00:16:51.195 "product_name": "passthru", 00:16:51.195 "block_size": 512, 00:16:51.195 "num_blocks": 65536, 00:16:51.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.195 "assigned_rate_limits": { 00:16:51.195 "rw_ios_per_sec": 0, 00:16:51.195 "rw_mbytes_per_sec": 0, 00:16:51.195 "r_mbytes_per_sec": 0, 00:16:51.195 "w_mbytes_per_sec": 0 00:16:51.195 }, 00:16:51.195 "claimed": true, 00:16:51.195 "claim_type": "exclusive_write", 00:16:51.195 "zoned": false, 00:16:51.195 "supported_io_types": { 00:16:51.195 "read": true, 00:16:51.195 "write": true, 00:16:51.195 "unmap": true, 00:16:51.195 "flush": true, 00:16:51.195 "reset": true, 00:16:51.195 "nvme_admin": false, 00:16:51.195 "nvme_io": false, 00:16:51.195 "nvme_io_md": false, 00:16:51.195 "write_zeroes": true, 00:16:51.195 "zcopy": true, 00:16:51.195 "get_zone_info": false, 00:16:51.195 "zone_management": false, 00:16:51.195 "zone_append": false, 00:16:51.195 "compare": false, 00:16:51.195 "compare_and_write": false, 00:16:51.195 "abort": true, 00:16:51.195 "seek_hole": false, 00:16:51.195 "seek_data": false, 00:16:51.195 "copy": true, 00:16:51.195 "nvme_iov_md": false 00:16:51.195 }, 00:16:51.195 "memory_domains": [ 00:16:51.195 { 00:16:51.195 "dma_device_id": "system", 00:16:51.195 "dma_device_type": 1 00:16:51.195 }, 00:16:51.195 { 00:16:51.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.195 "dma_device_type": 2 00:16:51.195 } 00:16:51.195 ], 00:16:51.195 "driver_specific": { 00:16:51.195 "passthru": { 00:16:51.195 "name": "pt2", 00:16:51.196 "base_bdev_name": "malloc2" 00:16:51.196 } 00:16:51.196 } 00:16:51.196 }' 00:16:51.196 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
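The block_size / md_size / md_interleave / dif_type comparisons that bracket this point are all the same pattern: fetch the base bdev's JSON with bdev_get_bdevs over the raid test socket and compare individual fields with jq. A minimal standalone sketch of that pattern follows, assuming the rpc.py path and socket from this run; the check_field helper is hypothetical shorthand for the inline [[ actual == expected ]] tests that bdev_raid.sh performs.

#!/usr/bin/env bash
# Sketch only: query one passthru base bdev over the raid test socket and
# assert two of the fields compared in the trace (block_size and md_size).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

info=$("$RPC" -s "$SOCK" bdev_get_bdevs -b pt1 | jq '.[]')

# check_field <jq filter> <expected>: hypothetical helper mirroring the
# [[ ... == ... ]] comparisons done inline in the test script.
check_field() {
    local actual
    actual=$(jq -r "$1" <<<"$info")
    [[ "$actual" == "$2" ]] || { echo "mismatch: $1 = $actual, want $2" >&2; exit 1; }
}

check_field .block_size 512
check_field .md_size null    # no separate metadata on the malloc-backed passthru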
00:16:51.196 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.196 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:51.196 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.196 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.196 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.196 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.454 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.454 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.454 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.454 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.454 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.454 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:51.454 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:51.713 [2024-07-15 21:30:24.920883] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 60d587b0-70a7-4345-9ba1-65a1e587c1a1 '!=' 60d587b0-70a7-4345-9ba1-65a1e587c1a1 ']' 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 122141 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 122141 ']' 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 122141 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122141 00:16:51.713 killing process with pid 122141 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122141' 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 122141 00:16:51.713 21:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 122141 00:16:51.713 [2024-07-15 21:30:24.965875] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.713 [2024-07-15 21:30:24.965967] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.713 [2024-07-15 21:30:24.966015] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:16:51.713 [2024-07-15 21:30:24.966024] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:16:51.972 [2024-07-15 21:30:25.174197] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.349 ************************************ 00:16:53.349 END TEST raid_superblock_test 00:16:53.349 ************************************ 00:16:53.349 21:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:53.349 00:16:53.349 real 0m11.940s 00:16:53.349 user 0m21.101s 00:16:53.349 sys 0m1.378s 00:16:53.349 21:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.349 21:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.349 21:30:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:53.349 21:30:26 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:16:53.349 21:30:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:53.349 21:30:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.349 21:30:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.349 ************************************ 00:16:53.349 START TEST raid_read_error_test 00:16:53.349 ************************************ 00:16:53.349 21:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:16:53.349 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:53.349 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:53.349 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:53.349 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:53.349 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:53.349 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' 
raid1 ']' 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Dn0Cj8p9qx 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=122540 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 122540 /var/tmp/spdk-raid.sock 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 122540 ']' 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:53.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.350 21:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.350 [2024-07-15 21:30:26.623479] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
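bdevperf is started here in wait-for-RPC mode (-z) against a private socket, so the raid stack can be built by RPC before any I/O is issued; the run itself is kicked off later with the perform_tests helper. A sketch of that launch sequence, using only the paths and flags visible in this trace (the mktemp log file stands in for bdevperf_log):

#!/usr/bin/env bash
# Sketch: launch bdevperf idle on its own RPC socket, then (after the raid
# bdev has been created over that socket) start the actual I/O run.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
LOG=$(mktemp -p /raidtest)

"$BDEVPERF" -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$LOG" &
raid_pid=$!

# ... create the malloc/error/passthru base bdevs and raid_bdev1 via "$RPC" -s "$SOCK" ...

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
# ... then bdev_raid_delete raid_bdev1 and kill $raid_pid, as killprocess does in the test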
00:16:53.350 [2024-07-15 21:30:26.623757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122540 ] 00:16:53.609 [2024-07-15 21:30:26.787394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.868 [2024-07-15 21:30:27.053593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.126 [2024-07-15 21:30:27.294678] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.126 21:30:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.126 21:30:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:54.126 21:30:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:54.126 21:30:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:54.385 BaseBdev1_malloc 00:16:54.385 21:30:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:54.642 true 00:16:54.642 21:30:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:54.905 [2024-07-15 21:30:28.110475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:54.905 [2024-07-15 21:30:28.110803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.905 [2024-07-15 21:30:28.110902] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:54.905 [2024-07-15 21:30:28.110971] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.905 [2024-07-15 21:30:28.113984] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.905 [2024-07-15 21:30:28.114096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:54.905 BaseBdev1 00:16:54.905 21:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:54.905 21:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:55.165 BaseBdev2_malloc 00:16:55.165 21:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:55.424 true 00:16:55.424 21:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:55.424 [2024-07-15 21:30:28.791012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:55.424 [2024-07-15 21:30:28.791270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.424 [2024-07-15 21:30:28.791338] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:55.424 [2024-07-15 21:30:28.791385] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.424 [2024-07-15 21:30:28.794287] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.424 [2024-07-15 21:30:28.794423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:55.424 BaseBdev2 00:16:55.683 21:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:55.683 [2024-07-15 21:30:29.002886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.683 [2024-07-15 21:30:29.005361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.683 [2024-07-15 21:30:29.005655] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:55.683 [2024-07-15 21:30:29.005696] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:55.683 [2024-07-15 21:30:29.005882] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:55.684 [2024-07-15 21:30:29.006302] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:55.684 [2024-07-15 21:30:29.006345] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:55.684 [2024-07-15 21:30:29.006565] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.684 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.943 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.943 "name": "raid_bdev1", 00:16:55.943 "uuid": "f2289c17-74e6-4d5b-b236-41f4738a042f", 00:16:55.943 "strip_size_kb": 64, 00:16:55.943 "state": "online", 00:16:55.943 "raid_level": "raid0", 00:16:55.943 "superblock": true, 00:16:55.943 "num_base_bdevs": 2, 00:16:55.943 "num_base_bdevs_discovered": 2, 00:16:55.943 "num_base_bdevs_operational": 2, 00:16:55.943 "base_bdevs_list": [ 00:16:55.943 { 00:16:55.943 "name": "BaseBdev1", 
00:16:55.943 "uuid": "671421e9-fb75-5031-a1f8-bc01564d544d", 00:16:55.943 "is_configured": true, 00:16:55.943 "data_offset": 2048, 00:16:55.943 "data_size": 63488 00:16:55.943 }, 00:16:55.943 { 00:16:55.943 "name": "BaseBdev2", 00:16:55.943 "uuid": "845714e2-d98d-5eb8-9164-f35139209428", 00:16:55.943 "is_configured": true, 00:16:55.943 "data_offset": 2048, 00:16:55.943 "data_size": 63488 00:16:55.943 } 00:16:55.943 ] 00:16:55.943 }' 00:16:55.943 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.943 21:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.512 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:56.512 21:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:56.772 [2024-07-15 21:30:29.946556] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:57.711 21:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.711 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.971 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.971 "name": "raid_bdev1", 00:16:57.971 "uuid": "f2289c17-74e6-4d5b-b236-41f4738a042f", 00:16:57.971 "strip_size_kb": 64, 00:16:57.971 "state": "online", 00:16:57.971 "raid_level": "raid0", 00:16:57.971 "superblock": true, 00:16:57.971 "num_base_bdevs": 2, 00:16:57.971 "num_base_bdevs_discovered": 2, 00:16:57.971 "num_base_bdevs_operational": 2, 00:16:57.971 "base_bdevs_list": [ 00:16:57.971 { 00:16:57.971 "name": "BaseBdev1", 00:16:57.971 "uuid": 
"671421e9-fb75-5031-a1f8-bc01564d544d", 00:16:57.971 "is_configured": true, 00:16:57.971 "data_offset": 2048, 00:16:57.971 "data_size": 63488 00:16:57.971 }, 00:16:57.971 { 00:16:57.971 "name": "BaseBdev2", 00:16:57.971 "uuid": "845714e2-d98d-5eb8-9164-f35139209428", 00:16:57.971 "is_configured": true, 00:16:57.971 "data_offset": 2048, 00:16:57.971 "data_size": 63488 00:16:57.971 } 00:16:57.971 ] 00:16:57.971 }' 00:16:57.971 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.971 21:30:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.913 21:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:58.913 [2024-07-15 21:30:32.120198] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.913 [2024-07-15 21:30:32.120390] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.913 [2024-07-15 21:30:32.123360] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.913 [2024-07-15 21:30:32.123462] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.913 [2024-07-15 21:30:32.123514] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.913 [2024-07-15 21:30:32.123546] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:58.913 0 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 122540 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 122540 ']' 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 122540 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122540 00:16:58.913 killing process with pid 122540 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122540' 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 122540 00:16:58.913 21:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 122540 00:16:58.913 [2024-07-15 21:30:32.168301] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.173 [2024-07-15 21:30:32.312900] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Dn0Cj8p9qx 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:00.550 ************************************ 00:17:00.550 END TEST raid_read_error_test 00:17:00.550 ************************************ 00:17:00.550 21:30:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:17:00.550 00:17:00.550 real 0m7.285s 00:17:00.550 user 0m10.534s 00:17:00.550 sys 0m0.858s 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.550 21:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.550 21:30:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:00.550 21:30:33 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:17:00.550 21:30:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:00.550 21:30:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.550 21:30:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.550 ************************************ 00:17:00.550 START TEST raid_write_error_test 00:17:00.550 ************************************ 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:00.550 21:30:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3uwFB3FzCw 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=122736 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 122736 /var/tmp/spdk-raid.sock 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 122736 ']' 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:00.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.550 21:30:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.810 [2024-07-15 21:30:33.981338] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
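The write-error variant that starts here differs from the read test only in which I/O direction is failed and in the final column check: a write failure is injected into the error bdev underneath BaseBdev1 while bdevperf runs, and the failures-per-second figure reported for raid_bdev1 must end up non-zero. A hedged sketch of those two steps, with the names, log path, and awk column taken from this trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
LOG=/raidtest/tmp.3uwFB3FzCw    # bdevperf log created by mktemp in this run

# Inject write failures into the error bdev sitting underneath BaseBdev1
# while bdevperf is running.
"$RPC" -s "$SOCK" bdev_error_inject_error EE_BaseBdev1_malloc write failure

# After the run, pull the failures-per-second column for raid_bdev1 out of
# the bdevperf log and require it to be non-zero (0.44 in this run).
fail_per_s=$(grep -v Job "$LOG" | grep raid_bdev1 | awk '{print $6}')
[[ "$fail_per_s" != "0.00" ]]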
00:17:00.810 [2024-07-15 21:30:33.981586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122736 ] 00:17:00.810 [2024-07-15 21:30:34.145043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.070 [2024-07-15 21:30:34.400711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.330 [2024-07-15 21:30:34.648550] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.589 21:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.589 21:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:01.589 21:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:01.589 21:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:01.849 BaseBdev1_malloc 00:17:01.849 21:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:02.108 true 00:17:02.108 21:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:02.368 [2024-07-15 21:30:35.485368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:02.368 [2024-07-15 21:30:35.485622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.368 [2024-07-15 21:30:35.485688] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:02.368 [2024-07-15 21:30:35.485771] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.368 [2024-07-15 21:30:35.488542] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.368 [2024-07-15 21:30:35.488636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.368 BaseBdev1 00:17:02.368 21:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:02.368 21:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:02.368 BaseBdev2_malloc 00:17:02.627 21:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:02.627 true 00:17:02.627 21:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:02.886 [2024-07-15 21:30:36.149304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:02.886 [2024-07-15 21:30:36.149539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.886 [2024-07-15 21:30:36.149597] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:02.886 [2024-07-15 
21:30:36.149636] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.886 [2024-07-15 21:30:36.151956] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.886 [2024-07-15 21:30:36.152039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:02.886 BaseBdev2 00:17:02.886 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:03.144 [2024-07-15 21:30:36.364927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.145 [2024-07-15 21:30:36.367070] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.145 [2024-07-15 21:30:36.367335] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:03.145 [2024-07-15 21:30:36.367371] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:03.145 [2024-07-15 21:30:36.367546] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:03.145 [2024-07-15 21:30:36.367959] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:03.145 [2024-07-15 21:30:36.367999] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:03.145 [2024-07-15 21:30:36.368202] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.145 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.402 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.402 "name": "raid_bdev1", 00:17:03.402 "uuid": "72288411-3518-4189-8af5-228096a1169a", 00:17:03.402 "strip_size_kb": 64, 00:17:03.402 "state": "online", 00:17:03.402 "raid_level": "raid0", 00:17:03.402 "superblock": true, 00:17:03.402 "num_base_bdevs": 2, 00:17:03.402 "num_base_bdevs_discovered": 2, 00:17:03.402 "num_base_bdevs_operational": 2, 00:17:03.402 "base_bdevs_list": [ 00:17:03.402 { 00:17:03.402 
"name": "BaseBdev1", 00:17:03.402 "uuid": "cd9b7a11-d6a2-57f1-a099-d6e7a3d09a32", 00:17:03.402 "is_configured": true, 00:17:03.402 "data_offset": 2048, 00:17:03.402 "data_size": 63488 00:17:03.402 }, 00:17:03.402 { 00:17:03.403 "name": "BaseBdev2", 00:17:03.403 "uuid": "78667496-49ab-5540-9c4e-cc32da128ac1", 00:17:03.403 "is_configured": true, 00:17:03.403 "data_offset": 2048, 00:17:03.403 "data_size": 63488 00:17:03.403 } 00:17:03.403 ] 00:17:03.403 }' 00:17:03.403 21:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.403 21:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.993 21:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:03.993 21:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:03.993 [2024-07-15 21:30:37.300626] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:04.939 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.198 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.456 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.456 "name": "raid_bdev1", 00:17:05.456 "uuid": "72288411-3518-4189-8af5-228096a1169a", 00:17:05.456 "strip_size_kb": 64, 00:17:05.456 "state": "online", 00:17:05.456 "raid_level": "raid0", 00:17:05.456 "superblock": true, 00:17:05.456 "num_base_bdevs": 2, 00:17:05.456 "num_base_bdevs_discovered": 2, 00:17:05.456 "num_base_bdevs_operational": 2, 00:17:05.456 "base_bdevs_list": [ 00:17:05.456 { 00:17:05.456 
"name": "BaseBdev1", 00:17:05.456 "uuid": "cd9b7a11-d6a2-57f1-a099-d6e7a3d09a32", 00:17:05.456 "is_configured": true, 00:17:05.456 "data_offset": 2048, 00:17:05.456 "data_size": 63488 00:17:05.456 }, 00:17:05.456 { 00:17:05.456 "name": "BaseBdev2", 00:17:05.456 "uuid": "78667496-49ab-5540-9c4e-cc32da128ac1", 00:17:05.456 "is_configured": true, 00:17:05.456 "data_offset": 2048, 00:17:05.456 "data_size": 63488 00:17:05.456 } 00:17:05.456 ] 00:17:05.456 }' 00:17:05.457 21:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.457 21:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.024 21:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:06.282 [2024-07-15 21:30:39.566545] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.282 [2024-07-15 21:30:39.566698] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.282 [2024-07-15 21:30:39.569314] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.282 [2024-07-15 21:30:39.569386] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.282 [2024-07-15 21:30:39.569431] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.282 [2024-07-15 21:30:39.569456] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:06.282 0 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 122736 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 122736 ']' 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 122736 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122736 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122736' 00:17:06.282 killing process with pid 122736 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 122736 00:17:06.282 21:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 122736 00:17:06.282 [2024-07-15 21:30:39.611402] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:06.541 [2024-07-15 21:30:39.753419] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.3uwFB3FzCw 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:07.920 ************************************ 00:17:07.920 END TEST raid_write_error_test 00:17:07.920 
************************************ 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:17:07.920 00:17:07.920 real 0m7.297s 00:17:07.920 user 0m10.536s 00:17:07.920 sys 0m0.978s 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:07.920 21:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.920 21:30:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:07.920 21:30:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:07.920 21:30:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:17:07.920 21:30:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:07.920 21:30:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.920 21:30:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.920 ************************************ 00:17:07.920 START TEST raid_state_function_test 00:17:07.920 ************************************ 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:07.920 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=122944 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 122944' 00:17:07.921 Process raid pid: 122944 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 122944 /var/tmp/spdk-raid.sock 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 122944 ']' 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:07.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.921 21:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.180 [2024-07-15 21:30:41.345614] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
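raid_state_function_test drives a bare bdev_svc app instead of bdevperf: it creates Existed_Raid over base bdevs that do not exist yet and then walks the raid bdev through its state transitions, checking the reported state after each step. A sketch of the first such check, using the commands from this trace (the jq filter is the one at bdev_raid.sh@126, extended here to pull out just the state field):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# Create a 2-disk concat raid before either base bdev exists ...
"$RPC" -s "$SOCK" bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# ... and confirm the raid bdev is reported as still "configuring".
state=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state')
[[ "$state" == "configuring" ]]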
00:17:08.180 [2024-07-15 21:30:41.345869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.180 [2024-07-15 21:30:41.508377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.439 [2024-07-15 21:30:41.758911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.705 [2024-07-15 21:30:41.996819] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.970 21:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.970 21:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:08.970 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:09.229 [2024-07-15 21:30:42.420395] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.229 [2024-07-15 21:30:42.420586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.229 [2024-07-15 21:30:42.420619] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.229 [2024-07-15 21:30:42.420655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.229 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.488 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.488 "name": "Existed_Raid", 00:17:09.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.488 "strip_size_kb": 64, 00:17:09.488 "state": "configuring", 00:17:09.488 "raid_level": "concat", 00:17:09.488 "superblock": false, 00:17:09.488 "num_base_bdevs": 2, 00:17:09.488 "num_base_bdevs_discovered": 0, 00:17:09.488 "num_base_bdevs_operational": 2, 00:17:09.488 
"base_bdevs_list": [ 00:17:09.488 { 00:17:09.488 "name": "BaseBdev1", 00:17:09.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.488 "is_configured": false, 00:17:09.488 "data_offset": 0, 00:17:09.488 "data_size": 0 00:17:09.488 }, 00:17:09.488 { 00:17:09.488 "name": "BaseBdev2", 00:17:09.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.488 "is_configured": false, 00:17:09.488 "data_offset": 0, 00:17:09.488 "data_size": 0 00:17:09.488 } 00:17:09.488 ] 00:17:09.488 }' 00:17:09.488 21:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.488 21:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.058 21:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:10.317 [2024-07-15 21:30:43.494542] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.317 [2024-07-15 21:30:43.494675] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:10.317 21:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:10.576 [2024-07-15 21:30:43.698170] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.576 [2024-07-15 21:30:43.698290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.576 [2024-07-15 21:30:43.698313] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:10.576 [2024-07-15 21:30:43.698352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:10.576 21:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:10.576 [2024-07-15 21:30:43.946404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.576 BaseBdev1 00:17:10.834 21:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:10.834 21:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:10.834 21:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:10.834 21:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:10.834 21:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:10.834 21:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:10.834 21:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.834 21:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.093 [ 00:17:11.093 { 00:17:11.093 "name": "BaseBdev1", 00:17:11.093 "aliases": [ 00:17:11.093 "7d8f6d31-28f1-4064-a64b-a9dbd4a23bbe" 00:17:11.093 ], 00:17:11.093 "product_name": "Malloc disk", 00:17:11.093 "block_size": 512, 
00:17:11.093 "num_blocks": 65536, 00:17:11.093 "uuid": "7d8f6d31-28f1-4064-a64b-a9dbd4a23bbe", 00:17:11.093 "assigned_rate_limits": { 00:17:11.093 "rw_ios_per_sec": 0, 00:17:11.093 "rw_mbytes_per_sec": 0, 00:17:11.093 "r_mbytes_per_sec": 0, 00:17:11.093 "w_mbytes_per_sec": 0 00:17:11.093 }, 00:17:11.093 "claimed": true, 00:17:11.093 "claim_type": "exclusive_write", 00:17:11.093 "zoned": false, 00:17:11.093 "supported_io_types": { 00:17:11.093 "read": true, 00:17:11.093 "write": true, 00:17:11.093 "unmap": true, 00:17:11.093 "flush": true, 00:17:11.093 "reset": true, 00:17:11.093 "nvme_admin": false, 00:17:11.093 "nvme_io": false, 00:17:11.093 "nvme_io_md": false, 00:17:11.093 "write_zeroes": true, 00:17:11.093 "zcopy": true, 00:17:11.093 "get_zone_info": false, 00:17:11.093 "zone_management": false, 00:17:11.093 "zone_append": false, 00:17:11.093 "compare": false, 00:17:11.093 "compare_and_write": false, 00:17:11.093 "abort": true, 00:17:11.093 "seek_hole": false, 00:17:11.093 "seek_data": false, 00:17:11.093 "copy": true, 00:17:11.093 "nvme_iov_md": false 00:17:11.093 }, 00:17:11.093 "memory_domains": [ 00:17:11.093 { 00:17:11.093 "dma_device_id": "system", 00:17:11.093 "dma_device_type": 1 00:17:11.093 }, 00:17:11.093 { 00:17:11.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.093 "dma_device_type": 2 00:17:11.093 } 00:17:11.093 ], 00:17:11.093 "driver_specific": {} 00:17:11.093 } 00:17:11.093 ] 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.093 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.352 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.352 "name": "Existed_Raid", 00:17:11.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.352 "strip_size_kb": 64, 00:17:11.352 "state": "configuring", 00:17:11.352 "raid_level": "concat", 00:17:11.352 "superblock": false, 00:17:11.352 "num_base_bdevs": 2, 00:17:11.352 "num_base_bdevs_discovered": 1, 00:17:11.352 "num_base_bdevs_operational": 2, 00:17:11.352 "base_bdevs_list": [ 00:17:11.352 { 00:17:11.352 "name": 
"BaseBdev1", 00:17:11.352 "uuid": "7d8f6d31-28f1-4064-a64b-a9dbd4a23bbe", 00:17:11.352 "is_configured": true, 00:17:11.352 "data_offset": 0, 00:17:11.352 "data_size": 65536 00:17:11.352 }, 00:17:11.352 { 00:17:11.352 "name": "BaseBdev2", 00:17:11.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.352 "is_configured": false, 00:17:11.352 "data_offset": 0, 00:17:11.352 "data_size": 0 00:17:11.352 } 00:17:11.352 ] 00:17:11.352 }' 00:17:11.352 21:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.352 21:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.920 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:12.179 [2024-07-15 21:30:45.467956] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:12.179 [2024-07-15 21:30:45.468131] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:17:12.179 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:12.437 [2024-07-15 21:30:45.679586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.437 [2024-07-15 21:30:45.681767] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.437 [2024-07-15 21:30:45.681860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.437 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:12.437 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.438 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.696 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.696 "name": "Existed_Raid", 
00:17:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.696 "strip_size_kb": 64, 00:17:12.696 "state": "configuring", 00:17:12.696 "raid_level": "concat", 00:17:12.696 "superblock": false, 00:17:12.696 "num_base_bdevs": 2, 00:17:12.696 "num_base_bdevs_discovered": 1, 00:17:12.696 "num_base_bdevs_operational": 2, 00:17:12.696 "base_bdevs_list": [ 00:17:12.696 { 00:17:12.696 "name": "BaseBdev1", 00:17:12.696 "uuid": "7d8f6d31-28f1-4064-a64b-a9dbd4a23bbe", 00:17:12.696 "is_configured": true, 00:17:12.696 "data_offset": 0, 00:17:12.696 "data_size": 65536 00:17:12.696 }, 00:17:12.696 { 00:17:12.696 "name": "BaseBdev2", 00:17:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.696 "is_configured": false, 00:17:12.696 "data_offset": 0, 00:17:12.696 "data_size": 0 00:17:12.696 } 00:17:12.696 ] 00:17:12.696 }' 00:17:12.696 21:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.696 21:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.267 21:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:13.527 [2024-07-15 21:30:46.828676] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.528 [2024-07-15 21:30:46.828852] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:13.528 [2024-07-15 21:30:46.828875] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:13.528 [2024-07-15 21:30:46.829040] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:13.528 [2024-07-15 21:30:46.829439] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:13.528 [2024-07-15 21:30:46.829481] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:13.528 [2024-07-15 21:30:46.829753] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.528 BaseBdev2 00:17:13.528 21:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:13.528 21:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:13.528 21:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:13.528 21:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:13.528 21:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:13.528 21:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:13.528 21:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.787 21:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:14.057 [ 00:17:14.057 { 00:17:14.057 "name": "BaseBdev2", 00:17:14.057 "aliases": [ 00:17:14.057 "f892c203-c82b-4ea1-b45f-b8f70316eda6" 00:17:14.057 ], 00:17:14.057 "product_name": "Malloc disk", 00:17:14.057 "block_size": 512, 00:17:14.057 "num_blocks": 65536, 00:17:14.057 "uuid": "f892c203-c82b-4ea1-b45f-b8f70316eda6", 
00:17:14.057 "assigned_rate_limits": { 00:17:14.057 "rw_ios_per_sec": 0, 00:17:14.057 "rw_mbytes_per_sec": 0, 00:17:14.057 "r_mbytes_per_sec": 0, 00:17:14.057 "w_mbytes_per_sec": 0 00:17:14.057 }, 00:17:14.057 "claimed": true, 00:17:14.057 "claim_type": "exclusive_write", 00:17:14.057 "zoned": false, 00:17:14.057 "supported_io_types": { 00:17:14.057 "read": true, 00:17:14.057 "write": true, 00:17:14.057 "unmap": true, 00:17:14.057 "flush": true, 00:17:14.057 "reset": true, 00:17:14.057 "nvme_admin": false, 00:17:14.057 "nvme_io": false, 00:17:14.057 "nvme_io_md": false, 00:17:14.057 "write_zeroes": true, 00:17:14.057 "zcopy": true, 00:17:14.057 "get_zone_info": false, 00:17:14.057 "zone_management": false, 00:17:14.057 "zone_append": false, 00:17:14.057 "compare": false, 00:17:14.057 "compare_and_write": false, 00:17:14.057 "abort": true, 00:17:14.057 "seek_hole": false, 00:17:14.057 "seek_data": false, 00:17:14.057 "copy": true, 00:17:14.057 "nvme_iov_md": false 00:17:14.057 }, 00:17:14.057 "memory_domains": [ 00:17:14.057 { 00:17:14.057 "dma_device_id": "system", 00:17:14.057 "dma_device_type": 1 00:17:14.057 }, 00:17:14.057 { 00:17:14.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.057 "dma_device_type": 2 00:17:14.057 } 00:17:14.057 ], 00:17:14.057 "driver_specific": {} 00:17:14.057 } 00:17:14.057 ] 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.057 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.058 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.315 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.315 "name": "Existed_Raid", 00:17:14.315 "uuid": "be8c1b92-81c4-42e8-b457-f64931e41537", 00:17:14.315 "strip_size_kb": 64, 00:17:14.315 "state": "online", 00:17:14.315 "raid_level": "concat", 00:17:14.315 "superblock": false, 00:17:14.315 "num_base_bdevs": 2, 00:17:14.315 "num_base_bdevs_discovered": 2, 00:17:14.315 
"num_base_bdevs_operational": 2, 00:17:14.315 "base_bdevs_list": [ 00:17:14.315 { 00:17:14.315 "name": "BaseBdev1", 00:17:14.315 "uuid": "7d8f6d31-28f1-4064-a64b-a9dbd4a23bbe", 00:17:14.315 "is_configured": true, 00:17:14.315 "data_offset": 0, 00:17:14.315 "data_size": 65536 00:17:14.315 }, 00:17:14.315 { 00:17:14.315 "name": "BaseBdev2", 00:17:14.315 "uuid": "f892c203-c82b-4ea1-b45f-b8f70316eda6", 00:17:14.315 "is_configured": true, 00:17:14.315 "data_offset": 0, 00:17:14.315 "data_size": 65536 00:17:14.315 } 00:17:14.315 ] 00:17:14.315 }' 00:17:14.315 21:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.315 21:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.884 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:14.884 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:14.884 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:14.884 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:14.884 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:14.884 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:14.884 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:14.884 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:15.144 [2024-07-15 21:30:48.326955] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.144 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:15.144 "name": "Existed_Raid", 00:17:15.144 "aliases": [ 00:17:15.144 "be8c1b92-81c4-42e8-b457-f64931e41537" 00:17:15.144 ], 00:17:15.144 "product_name": "Raid Volume", 00:17:15.144 "block_size": 512, 00:17:15.144 "num_blocks": 131072, 00:17:15.144 "uuid": "be8c1b92-81c4-42e8-b457-f64931e41537", 00:17:15.144 "assigned_rate_limits": { 00:17:15.144 "rw_ios_per_sec": 0, 00:17:15.144 "rw_mbytes_per_sec": 0, 00:17:15.144 "r_mbytes_per_sec": 0, 00:17:15.144 "w_mbytes_per_sec": 0 00:17:15.144 }, 00:17:15.144 "claimed": false, 00:17:15.144 "zoned": false, 00:17:15.144 "supported_io_types": { 00:17:15.144 "read": true, 00:17:15.144 "write": true, 00:17:15.144 "unmap": true, 00:17:15.144 "flush": true, 00:17:15.144 "reset": true, 00:17:15.144 "nvme_admin": false, 00:17:15.144 "nvme_io": false, 00:17:15.144 "nvme_io_md": false, 00:17:15.144 "write_zeroes": true, 00:17:15.144 "zcopy": false, 00:17:15.144 "get_zone_info": false, 00:17:15.144 "zone_management": false, 00:17:15.144 "zone_append": false, 00:17:15.144 "compare": false, 00:17:15.144 "compare_and_write": false, 00:17:15.144 "abort": false, 00:17:15.144 "seek_hole": false, 00:17:15.144 "seek_data": false, 00:17:15.144 "copy": false, 00:17:15.144 "nvme_iov_md": false 00:17:15.144 }, 00:17:15.144 "memory_domains": [ 00:17:15.144 { 00:17:15.144 "dma_device_id": "system", 00:17:15.144 "dma_device_type": 1 00:17:15.144 }, 00:17:15.144 { 00:17:15.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.144 "dma_device_type": 2 00:17:15.144 }, 00:17:15.144 { 00:17:15.144 "dma_device_id": "system", 00:17:15.144 "dma_device_type": 1 00:17:15.144 }, 
00:17:15.144 { 00:17:15.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.144 "dma_device_type": 2 00:17:15.144 } 00:17:15.144 ], 00:17:15.144 "driver_specific": { 00:17:15.144 "raid": { 00:17:15.144 "uuid": "be8c1b92-81c4-42e8-b457-f64931e41537", 00:17:15.144 "strip_size_kb": 64, 00:17:15.144 "state": "online", 00:17:15.144 "raid_level": "concat", 00:17:15.144 "superblock": false, 00:17:15.144 "num_base_bdevs": 2, 00:17:15.144 "num_base_bdevs_discovered": 2, 00:17:15.144 "num_base_bdevs_operational": 2, 00:17:15.144 "base_bdevs_list": [ 00:17:15.144 { 00:17:15.144 "name": "BaseBdev1", 00:17:15.144 "uuid": "7d8f6d31-28f1-4064-a64b-a9dbd4a23bbe", 00:17:15.144 "is_configured": true, 00:17:15.144 "data_offset": 0, 00:17:15.144 "data_size": 65536 00:17:15.144 }, 00:17:15.144 { 00:17:15.144 "name": "BaseBdev2", 00:17:15.144 "uuid": "f892c203-c82b-4ea1-b45f-b8f70316eda6", 00:17:15.144 "is_configured": true, 00:17:15.144 "data_offset": 0, 00:17:15.144 "data_size": 65536 00:17:15.144 } 00:17:15.145 ] 00:17:15.145 } 00:17:15.145 } 00:17:15.145 }' 00:17:15.145 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.145 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:15.145 BaseBdev2' 00:17:15.145 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.145 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.145 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:15.404 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.404 "name": "BaseBdev1", 00:17:15.404 "aliases": [ 00:17:15.404 "7d8f6d31-28f1-4064-a64b-a9dbd4a23bbe" 00:17:15.404 ], 00:17:15.404 "product_name": "Malloc disk", 00:17:15.404 "block_size": 512, 00:17:15.404 "num_blocks": 65536, 00:17:15.404 "uuid": "7d8f6d31-28f1-4064-a64b-a9dbd4a23bbe", 00:17:15.404 "assigned_rate_limits": { 00:17:15.404 "rw_ios_per_sec": 0, 00:17:15.404 "rw_mbytes_per_sec": 0, 00:17:15.404 "r_mbytes_per_sec": 0, 00:17:15.404 "w_mbytes_per_sec": 0 00:17:15.404 }, 00:17:15.404 "claimed": true, 00:17:15.404 "claim_type": "exclusive_write", 00:17:15.404 "zoned": false, 00:17:15.404 "supported_io_types": { 00:17:15.404 "read": true, 00:17:15.404 "write": true, 00:17:15.404 "unmap": true, 00:17:15.404 "flush": true, 00:17:15.404 "reset": true, 00:17:15.404 "nvme_admin": false, 00:17:15.404 "nvme_io": false, 00:17:15.404 "nvme_io_md": false, 00:17:15.404 "write_zeroes": true, 00:17:15.404 "zcopy": true, 00:17:15.404 "get_zone_info": false, 00:17:15.404 "zone_management": false, 00:17:15.404 "zone_append": false, 00:17:15.404 "compare": false, 00:17:15.404 "compare_and_write": false, 00:17:15.404 "abort": true, 00:17:15.404 "seek_hole": false, 00:17:15.404 "seek_data": false, 00:17:15.404 "copy": true, 00:17:15.404 "nvme_iov_md": false 00:17:15.404 }, 00:17:15.404 "memory_domains": [ 00:17:15.404 { 00:17:15.404 "dma_device_id": "system", 00:17:15.404 "dma_device_type": 1 00:17:15.404 }, 00:17:15.404 { 00:17:15.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.404 "dma_device_type": 2 00:17:15.404 } 00:17:15.404 ], 00:17:15.404 "driver_specific": {} 00:17:15.404 }' 00:17:15.404 21:30:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.404 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.404 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:15.404 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.404 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.664 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.664 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.664 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.664 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.664 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.664 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.664 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.664 21:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.664 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.664 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:15.924 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.924 "name": "BaseBdev2", 00:17:15.924 "aliases": [ 00:17:15.924 "f892c203-c82b-4ea1-b45f-b8f70316eda6" 00:17:15.924 ], 00:17:15.924 "product_name": "Malloc disk", 00:17:15.924 "block_size": 512, 00:17:15.924 "num_blocks": 65536, 00:17:15.924 "uuid": "f892c203-c82b-4ea1-b45f-b8f70316eda6", 00:17:15.924 "assigned_rate_limits": { 00:17:15.924 "rw_ios_per_sec": 0, 00:17:15.924 "rw_mbytes_per_sec": 0, 00:17:15.924 "r_mbytes_per_sec": 0, 00:17:15.924 "w_mbytes_per_sec": 0 00:17:15.924 }, 00:17:15.924 "claimed": true, 00:17:15.924 "claim_type": "exclusive_write", 00:17:15.924 "zoned": false, 00:17:15.924 "supported_io_types": { 00:17:15.924 "read": true, 00:17:15.925 "write": true, 00:17:15.925 "unmap": true, 00:17:15.925 "flush": true, 00:17:15.925 "reset": true, 00:17:15.925 "nvme_admin": false, 00:17:15.925 "nvme_io": false, 00:17:15.925 "nvme_io_md": false, 00:17:15.925 "write_zeroes": true, 00:17:15.925 "zcopy": true, 00:17:15.925 "get_zone_info": false, 00:17:15.925 "zone_management": false, 00:17:15.925 "zone_append": false, 00:17:15.925 "compare": false, 00:17:15.925 "compare_and_write": false, 00:17:15.925 "abort": true, 00:17:15.925 "seek_hole": false, 00:17:15.925 "seek_data": false, 00:17:15.925 "copy": true, 00:17:15.925 "nvme_iov_md": false 00:17:15.925 }, 00:17:15.925 "memory_domains": [ 00:17:15.925 { 00:17:15.925 "dma_device_id": "system", 00:17:15.925 "dma_device_type": 1 00:17:15.925 }, 00:17:15.925 { 00:17:15.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.925 "dma_device_type": 2 00:17:15.925 } 00:17:15.925 ], 00:17:15.925 "driver_specific": {} 00:17:15.925 }' 00:17:15.925 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.925 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.185 21:30:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:16.185 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.185 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.185 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:16.185 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.185 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.445 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:16.445 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.445 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.445 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:16.445 21:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:16.705 [2024-07-15 21:30:49.883874] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.705 [2024-07-15 21:30:49.884023] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.705 [2024-07-15 21:30:49.884110] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.705 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.964 21:30:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.964 "name": "Existed_Raid", 00:17:16.964 "uuid": "be8c1b92-81c4-42e8-b457-f64931e41537", 00:17:16.964 "strip_size_kb": 64, 00:17:16.964 "state": "offline", 00:17:16.964 "raid_level": "concat", 00:17:16.964 "superblock": false, 00:17:16.964 "num_base_bdevs": 2, 00:17:16.964 "num_base_bdevs_discovered": 1, 00:17:16.964 "num_base_bdevs_operational": 1, 00:17:16.964 "base_bdevs_list": [ 00:17:16.964 { 00:17:16.964 "name": null, 00:17:16.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.964 "is_configured": false, 00:17:16.964 "data_offset": 0, 00:17:16.964 "data_size": 65536 00:17:16.964 }, 00:17:16.964 { 00:17:16.964 "name": "BaseBdev2", 00:17:16.964 "uuid": "f892c203-c82b-4ea1-b45f-b8f70316eda6", 00:17:16.964 "is_configured": true, 00:17:16.964 "data_offset": 0, 00:17:16.964 "data_size": 65536 00:17:16.964 } 00:17:16.964 ] 00:17:16.964 }' 00:17:16.964 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.964 21:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.903 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:17.903 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:17.903 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.903 21:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:17.903 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:17.903 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.903 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:18.161 [2024-07-15 21:30:51.374864] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:18.161 [2024-07-15 21:30:51.375021] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:18.161 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:18.161 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:18.161 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.161 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 122944 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 122944 ']' 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 122944 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # uname 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122944 00:17:18.419 killing process with pid 122944 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122944' 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 122944 00:17:18.419 21:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 122944 00:17:18.419 [2024-07-15 21:30:51.749380] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.419 [2024-07-15 21:30:51.749536] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.804 ************************************ 00:17:19.804 END TEST raid_state_function_test 00:17:19.804 ************************************ 00:17:19.804 21:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:19.804 00:17:19.804 real 0m11.842s 00:17:19.804 user 0m20.395s 00:17:19.804 sys 0m1.571s 00:17:19.804 21:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.804 21:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.804 21:30:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:19.804 21:30:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:17:19.804 21:30:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:19.804 21:30:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.804 21:30:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.062 ************************************ 00:17:20.062 START TEST raid_state_function_test_sb 00:17:20.062 ************************************ 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=123336 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123336' 00:17:20.062 Process raid pid: 123336 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 123336 /var/tmp/spdk-raid.sock 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 123336 ']' 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:20.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.062 21:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.062 [2024-07-15 21:30:53.257129] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
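For reference, the RPC sequence this test exercises can be reproduced by hand against a running bdev_svc instance. The following is a minimal sketch only, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock; it uses the same rpc.py calls, bdev names, sizes, and jq filter that appear in the trace, but the ordering (base bdevs first, then the raid bdev) is simplified relative to the test, which deliberately creates the raid bdev before its base bdevs exist.

# Create the two 32 MB malloc base bdevs with 512-byte blocks (65536 blocks each, as in the trace)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2

# Assemble them into a concat raid bdev with a 64 KiB strip size; -s writes a superblock,
# matching the raid_state_function_test_sb variant ("superblock": true in the dumps above)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# Inspect the resulting raid state the same way verify_raid_bdev_state does in the trace
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

With both base bdevs present the dump should report "state": "online" and "num_base_bdevs_discovered": 2, which is exactly the transition the state-function test verifies above.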
00:17:20.062 [2024-07-15 21:30:53.257386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.062 [2024-07-15 21:30:53.422519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.320 [2024-07-15 21:30:53.676934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.579 [2024-07-15 21:30:53.911168] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.838 21:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.838 21:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:20.838 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:21.096 [2024-07-15 21:30:54.342670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:21.096 [2024-07-15 21:30:54.342867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:21.096 [2024-07-15 21:30:54.342899] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.096 [2024-07-15 21:30:54.342936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.096 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.354 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.354 "name": "Existed_Raid", 00:17:21.354 "uuid": "646a9d7b-e21c-4a9c-ad26-0392f0bffa6f", 00:17:21.354 "strip_size_kb": 64, 00:17:21.354 "state": "configuring", 00:17:21.354 "raid_level": "concat", 00:17:21.354 "superblock": true, 00:17:21.354 "num_base_bdevs": 2, 00:17:21.354 "num_base_bdevs_discovered": 0, 00:17:21.354 
"num_base_bdevs_operational": 2, 00:17:21.354 "base_bdevs_list": [ 00:17:21.354 { 00:17:21.354 "name": "BaseBdev1", 00:17:21.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.354 "is_configured": false, 00:17:21.354 "data_offset": 0, 00:17:21.354 "data_size": 0 00:17:21.354 }, 00:17:21.354 { 00:17:21.354 "name": "BaseBdev2", 00:17:21.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.354 "is_configured": false, 00:17:21.354 "data_offset": 0, 00:17:21.354 "data_size": 0 00:17:21.354 } 00:17:21.354 ] 00:17:21.354 }' 00:17:21.354 21:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.354 21:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.920 21:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.178 [2024-07-15 21:30:55.401275] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.178 [2024-07-15 21:30:55.401443] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:22.178 21:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:22.436 [2024-07-15 21:30:55.628946] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:22.436 [2024-07-15 21:30:55.629118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:22.436 [2024-07-15 21:30:55.629144] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.436 [2024-07-15 21:30:55.629177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.436 21:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:22.695 [2024-07-15 21:30:55.887297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.695 BaseBdev1 00:17:22.695 21:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:22.695 21:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:22.695 21:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:22.695 21:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:22.695 21:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:22.695 21:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:22.695 21:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:22.953 21:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:22.953 [ 00:17:22.953 { 00:17:22.953 "name": "BaseBdev1", 00:17:22.953 "aliases": [ 00:17:22.953 "a7baf20b-5385-4e66-8b86-a8a97f08eafc" 
00:17:22.953 ], 00:17:22.953 "product_name": "Malloc disk", 00:17:22.953 "block_size": 512, 00:17:22.953 "num_blocks": 65536, 00:17:22.953 "uuid": "a7baf20b-5385-4e66-8b86-a8a97f08eafc", 00:17:22.953 "assigned_rate_limits": { 00:17:22.953 "rw_ios_per_sec": 0, 00:17:22.953 "rw_mbytes_per_sec": 0, 00:17:22.953 "r_mbytes_per_sec": 0, 00:17:22.953 "w_mbytes_per_sec": 0 00:17:22.953 }, 00:17:22.953 "claimed": true, 00:17:22.953 "claim_type": "exclusive_write", 00:17:22.953 "zoned": false, 00:17:22.953 "supported_io_types": { 00:17:22.953 "read": true, 00:17:22.953 "write": true, 00:17:22.953 "unmap": true, 00:17:22.953 "flush": true, 00:17:22.953 "reset": true, 00:17:22.953 "nvme_admin": false, 00:17:22.953 "nvme_io": false, 00:17:22.953 "nvme_io_md": false, 00:17:22.953 "write_zeroes": true, 00:17:22.953 "zcopy": true, 00:17:22.953 "get_zone_info": false, 00:17:22.953 "zone_management": false, 00:17:22.953 "zone_append": false, 00:17:22.953 "compare": false, 00:17:22.953 "compare_and_write": false, 00:17:22.953 "abort": true, 00:17:22.953 "seek_hole": false, 00:17:22.953 "seek_data": false, 00:17:22.953 "copy": true, 00:17:22.953 "nvme_iov_md": false 00:17:22.953 }, 00:17:22.953 "memory_domains": [ 00:17:22.953 { 00:17:22.953 "dma_device_id": "system", 00:17:22.953 "dma_device_type": 1 00:17:22.953 }, 00:17:22.953 { 00:17:22.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.953 "dma_device_type": 2 00:17:22.953 } 00:17:22.953 ], 00:17:22.953 "driver_specific": {} 00:17:22.953 } 00:17:22.953 ] 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.212 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.212 "name": "Existed_Raid", 00:17:23.212 "uuid": "6a8be84b-97e2-488b-a47a-f4e3566a831e", 00:17:23.212 "strip_size_kb": 64, 00:17:23.212 "state": "configuring", 00:17:23.212 "raid_level": "concat", 00:17:23.212 "superblock": true, 00:17:23.212 "num_base_bdevs": 2, 00:17:23.212 
"num_base_bdevs_discovered": 1, 00:17:23.212 "num_base_bdevs_operational": 2, 00:17:23.212 "base_bdevs_list": [ 00:17:23.212 { 00:17:23.212 "name": "BaseBdev1", 00:17:23.212 "uuid": "a7baf20b-5385-4e66-8b86-a8a97f08eafc", 00:17:23.212 "is_configured": true, 00:17:23.212 "data_offset": 2048, 00:17:23.212 "data_size": 63488 00:17:23.212 }, 00:17:23.212 { 00:17:23.212 "name": "BaseBdev2", 00:17:23.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.212 "is_configured": false, 00:17:23.212 "data_offset": 0, 00:17:23.212 "data_size": 0 00:17:23.212 } 00:17:23.212 ] 00:17:23.212 }' 00:17:23.213 21:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.213 21:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.160 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.160 [2024-07-15 21:30:57.424735] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.160 [2024-07-15 21:30:57.424903] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:17:24.160 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:24.418 [2024-07-15 21:30:57.648447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.418 [2024-07-15 21:30:57.650626] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.418 [2024-07-15 21:30:57.650719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.418 21:30:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.677 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.677 "name": "Existed_Raid", 00:17:24.677 "uuid": "9ec8e419-0efb-4cde-92f7-68dc672964ec", 00:17:24.677 "strip_size_kb": 64, 00:17:24.677 "state": "configuring", 00:17:24.677 "raid_level": "concat", 00:17:24.677 "superblock": true, 00:17:24.677 "num_base_bdevs": 2, 00:17:24.677 "num_base_bdevs_discovered": 1, 00:17:24.677 "num_base_bdevs_operational": 2, 00:17:24.677 "base_bdevs_list": [ 00:17:24.677 { 00:17:24.677 "name": "BaseBdev1", 00:17:24.677 "uuid": "a7baf20b-5385-4e66-8b86-a8a97f08eafc", 00:17:24.677 "is_configured": true, 00:17:24.677 "data_offset": 2048, 00:17:24.677 "data_size": 63488 00:17:24.677 }, 00:17:24.677 { 00:17:24.677 "name": "BaseBdev2", 00:17:24.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.677 "is_configured": false, 00:17:24.677 "data_offset": 0, 00:17:24.677 "data_size": 0 00:17:24.677 } 00:17:24.677 ] 00:17:24.677 }' 00:17:24.677 21:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.677 21:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.292 21:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:25.551 [2024-07-15 21:30:58.775069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.551 [2024-07-15 21:30:58.775418] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:25.551 [2024-07-15 21:30:58.775450] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:25.551 [2024-07-15 21:30:58.775633] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:25.551 [2024-07-15 21:30:58.775972] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:25.551 [2024-07-15 21:30:58.776027] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:25.551 [2024-07-15 21:30:58.776214] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.551 BaseBdev2 00:17:25.551 21:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:25.551 21:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:25.551 21:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:25.551 21:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:25.551 21:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:25.551 21:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:25.551 21:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.808 21:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:26.065 [ 00:17:26.065 { 00:17:26.065 "name": "BaseBdev2", 00:17:26.065 
"aliases": [ 00:17:26.065 "11cb8562-24cd-4712-b818-40d31078ba37" 00:17:26.065 ], 00:17:26.065 "product_name": "Malloc disk", 00:17:26.065 "block_size": 512, 00:17:26.065 "num_blocks": 65536, 00:17:26.065 "uuid": "11cb8562-24cd-4712-b818-40d31078ba37", 00:17:26.065 "assigned_rate_limits": { 00:17:26.065 "rw_ios_per_sec": 0, 00:17:26.065 "rw_mbytes_per_sec": 0, 00:17:26.065 "r_mbytes_per_sec": 0, 00:17:26.065 "w_mbytes_per_sec": 0 00:17:26.065 }, 00:17:26.065 "claimed": true, 00:17:26.065 "claim_type": "exclusive_write", 00:17:26.065 "zoned": false, 00:17:26.065 "supported_io_types": { 00:17:26.066 "read": true, 00:17:26.066 "write": true, 00:17:26.066 "unmap": true, 00:17:26.066 "flush": true, 00:17:26.066 "reset": true, 00:17:26.066 "nvme_admin": false, 00:17:26.066 "nvme_io": false, 00:17:26.066 "nvme_io_md": false, 00:17:26.066 "write_zeroes": true, 00:17:26.066 "zcopy": true, 00:17:26.066 "get_zone_info": false, 00:17:26.066 "zone_management": false, 00:17:26.066 "zone_append": false, 00:17:26.066 "compare": false, 00:17:26.066 "compare_and_write": false, 00:17:26.066 "abort": true, 00:17:26.066 "seek_hole": false, 00:17:26.066 "seek_data": false, 00:17:26.066 "copy": true, 00:17:26.066 "nvme_iov_md": false 00:17:26.066 }, 00:17:26.066 "memory_domains": [ 00:17:26.066 { 00:17:26.066 "dma_device_id": "system", 00:17:26.066 "dma_device_type": 1 00:17:26.066 }, 00:17:26.066 { 00:17:26.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.066 "dma_device_type": 2 00:17:26.066 } 00:17:26.066 ], 00:17:26.066 "driver_specific": {} 00:17:26.066 } 00:17:26.066 ] 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.066 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.323 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.323 "name": "Existed_Raid", 
00:17:26.323 "uuid": "9ec8e419-0efb-4cde-92f7-68dc672964ec", 00:17:26.323 "strip_size_kb": 64, 00:17:26.323 "state": "online", 00:17:26.323 "raid_level": "concat", 00:17:26.323 "superblock": true, 00:17:26.323 "num_base_bdevs": 2, 00:17:26.323 "num_base_bdevs_discovered": 2, 00:17:26.323 "num_base_bdevs_operational": 2, 00:17:26.323 "base_bdevs_list": [ 00:17:26.323 { 00:17:26.323 "name": "BaseBdev1", 00:17:26.323 "uuid": "a7baf20b-5385-4e66-8b86-a8a97f08eafc", 00:17:26.323 "is_configured": true, 00:17:26.323 "data_offset": 2048, 00:17:26.323 "data_size": 63488 00:17:26.323 }, 00:17:26.323 { 00:17:26.323 "name": "BaseBdev2", 00:17:26.324 "uuid": "11cb8562-24cd-4712-b818-40d31078ba37", 00:17:26.324 "is_configured": true, 00:17:26.324 "data_offset": 2048, 00:17:26.324 "data_size": 63488 00:17:26.324 } 00:17:26.324 ] 00:17:26.324 }' 00:17:26.324 21:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.324 21:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.891 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.891 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:26.891 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.891 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.891 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.891 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:26.891 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:26.891 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:27.150 [2024-07-15 21:31:00.308846] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.150 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:27.150 "name": "Existed_Raid", 00:17:27.150 "aliases": [ 00:17:27.150 "9ec8e419-0efb-4cde-92f7-68dc672964ec" 00:17:27.150 ], 00:17:27.150 "product_name": "Raid Volume", 00:17:27.150 "block_size": 512, 00:17:27.150 "num_blocks": 126976, 00:17:27.150 "uuid": "9ec8e419-0efb-4cde-92f7-68dc672964ec", 00:17:27.150 "assigned_rate_limits": { 00:17:27.150 "rw_ios_per_sec": 0, 00:17:27.150 "rw_mbytes_per_sec": 0, 00:17:27.150 "r_mbytes_per_sec": 0, 00:17:27.150 "w_mbytes_per_sec": 0 00:17:27.150 }, 00:17:27.150 "claimed": false, 00:17:27.150 "zoned": false, 00:17:27.150 "supported_io_types": { 00:17:27.150 "read": true, 00:17:27.150 "write": true, 00:17:27.150 "unmap": true, 00:17:27.150 "flush": true, 00:17:27.150 "reset": true, 00:17:27.150 "nvme_admin": false, 00:17:27.150 "nvme_io": false, 00:17:27.150 "nvme_io_md": false, 00:17:27.150 "write_zeroes": true, 00:17:27.150 "zcopy": false, 00:17:27.150 "get_zone_info": false, 00:17:27.150 "zone_management": false, 00:17:27.150 "zone_append": false, 00:17:27.150 "compare": false, 00:17:27.150 "compare_and_write": false, 00:17:27.150 "abort": false, 00:17:27.150 "seek_hole": false, 00:17:27.150 "seek_data": false, 00:17:27.150 "copy": false, 00:17:27.150 "nvme_iov_md": false 00:17:27.150 }, 00:17:27.150 "memory_domains": [ 
00:17:27.150 { 00:17:27.150 "dma_device_id": "system", 00:17:27.150 "dma_device_type": 1 00:17:27.150 }, 00:17:27.150 { 00:17:27.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.150 "dma_device_type": 2 00:17:27.150 }, 00:17:27.150 { 00:17:27.150 "dma_device_id": "system", 00:17:27.150 "dma_device_type": 1 00:17:27.150 }, 00:17:27.150 { 00:17:27.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.150 "dma_device_type": 2 00:17:27.150 } 00:17:27.150 ], 00:17:27.150 "driver_specific": { 00:17:27.150 "raid": { 00:17:27.150 "uuid": "9ec8e419-0efb-4cde-92f7-68dc672964ec", 00:17:27.150 "strip_size_kb": 64, 00:17:27.150 "state": "online", 00:17:27.150 "raid_level": "concat", 00:17:27.150 "superblock": true, 00:17:27.150 "num_base_bdevs": 2, 00:17:27.150 "num_base_bdevs_discovered": 2, 00:17:27.150 "num_base_bdevs_operational": 2, 00:17:27.150 "base_bdevs_list": [ 00:17:27.150 { 00:17:27.150 "name": "BaseBdev1", 00:17:27.150 "uuid": "a7baf20b-5385-4e66-8b86-a8a97f08eafc", 00:17:27.150 "is_configured": true, 00:17:27.150 "data_offset": 2048, 00:17:27.150 "data_size": 63488 00:17:27.150 }, 00:17:27.150 { 00:17:27.150 "name": "BaseBdev2", 00:17:27.150 "uuid": "11cb8562-24cd-4712-b818-40d31078ba37", 00:17:27.150 "is_configured": true, 00:17:27.150 "data_offset": 2048, 00:17:27.150 "data_size": 63488 00:17:27.150 } 00:17:27.150 ] 00:17:27.150 } 00:17:27.150 } 00:17:27.150 }' 00:17:27.150 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.150 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:27.150 BaseBdev2' 00:17:27.150 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.150 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:27.150 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.409 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.409 "name": "BaseBdev1", 00:17:27.409 "aliases": [ 00:17:27.409 "a7baf20b-5385-4e66-8b86-a8a97f08eafc" 00:17:27.409 ], 00:17:27.409 "product_name": "Malloc disk", 00:17:27.409 "block_size": 512, 00:17:27.409 "num_blocks": 65536, 00:17:27.409 "uuid": "a7baf20b-5385-4e66-8b86-a8a97f08eafc", 00:17:27.409 "assigned_rate_limits": { 00:17:27.409 "rw_ios_per_sec": 0, 00:17:27.409 "rw_mbytes_per_sec": 0, 00:17:27.409 "r_mbytes_per_sec": 0, 00:17:27.409 "w_mbytes_per_sec": 0 00:17:27.409 }, 00:17:27.409 "claimed": true, 00:17:27.409 "claim_type": "exclusive_write", 00:17:27.409 "zoned": false, 00:17:27.409 "supported_io_types": { 00:17:27.409 "read": true, 00:17:27.409 "write": true, 00:17:27.409 "unmap": true, 00:17:27.409 "flush": true, 00:17:27.409 "reset": true, 00:17:27.409 "nvme_admin": false, 00:17:27.409 "nvme_io": false, 00:17:27.409 "nvme_io_md": false, 00:17:27.409 "write_zeroes": true, 00:17:27.409 "zcopy": true, 00:17:27.409 "get_zone_info": false, 00:17:27.409 "zone_management": false, 00:17:27.409 "zone_append": false, 00:17:27.409 "compare": false, 00:17:27.409 "compare_and_write": false, 00:17:27.409 "abort": true, 00:17:27.409 "seek_hole": false, 00:17:27.410 "seek_data": false, 00:17:27.410 "copy": true, 00:17:27.410 "nvme_iov_md": false 00:17:27.410 }, 00:17:27.410 "memory_domains": [ 
00:17:27.410 { 00:17:27.410 "dma_device_id": "system", 00:17:27.410 "dma_device_type": 1 00:17:27.410 }, 00:17:27.410 { 00:17:27.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.410 "dma_device_type": 2 00:17:27.410 } 00:17:27.410 ], 00:17:27.410 "driver_specific": {} 00:17:27.410 }' 00:17:27.410 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.410 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.410 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.410 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.410 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.669 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.669 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.669 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.669 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.669 21:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.669 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.929 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.929 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.929 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:27.929 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.929 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.929 "name": "BaseBdev2", 00:17:27.929 "aliases": [ 00:17:27.929 "11cb8562-24cd-4712-b818-40d31078ba37" 00:17:27.929 ], 00:17:27.929 "product_name": "Malloc disk", 00:17:27.929 "block_size": 512, 00:17:27.929 "num_blocks": 65536, 00:17:27.929 "uuid": "11cb8562-24cd-4712-b818-40d31078ba37", 00:17:27.929 "assigned_rate_limits": { 00:17:27.929 "rw_ios_per_sec": 0, 00:17:27.929 "rw_mbytes_per_sec": 0, 00:17:27.929 "r_mbytes_per_sec": 0, 00:17:27.929 "w_mbytes_per_sec": 0 00:17:27.929 }, 00:17:27.929 "claimed": true, 00:17:27.929 "claim_type": "exclusive_write", 00:17:27.929 "zoned": false, 00:17:27.929 "supported_io_types": { 00:17:27.929 "read": true, 00:17:27.929 "write": true, 00:17:27.929 "unmap": true, 00:17:27.929 "flush": true, 00:17:27.929 "reset": true, 00:17:27.929 "nvme_admin": false, 00:17:27.929 "nvme_io": false, 00:17:27.929 "nvme_io_md": false, 00:17:27.929 "write_zeroes": true, 00:17:27.929 "zcopy": true, 00:17:27.929 "get_zone_info": false, 00:17:27.929 "zone_management": false, 00:17:27.929 "zone_append": false, 00:17:27.929 "compare": false, 00:17:27.929 "compare_and_write": false, 00:17:27.929 "abort": true, 00:17:27.929 "seek_hole": false, 00:17:27.929 "seek_data": false, 00:17:27.929 "copy": true, 00:17:27.929 "nvme_iov_md": false 00:17:27.929 }, 00:17:27.929 "memory_domains": [ 00:17:27.929 { 00:17:27.929 "dma_device_id": "system", 00:17:27.929 "dma_device_type": 1 00:17:27.929 }, 00:17:27.929 { 00:17:27.929 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:27.929 "dma_device_type": 2 00:17:27.929 } 00:17:27.929 ], 00:17:27.929 "driver_specific": {} 00:17:27.929 }' 00:17:27.929 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.189 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.189 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.189 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.189 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.189 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.189 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.449 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.449 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.449 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.449 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.449 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.449 21:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:28.709 [2024-07-15 21:31:01.953973] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.709 [2024-07-15 21:31:01.954125] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.709 [2024-07-15 21:31:01.954217] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:28.709 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.968 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.968 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.968 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.968 "name": "Existed_Raid", 00:17:28.968 "uuid": "9ec8e419-0efb-4cde-92f7-68dc672964ec", 00:17:28.968 "strip_size_kb": 64, 00:17:28.968 "state": "offline", 00:17:28.968 "raid_level": "concat", 00:17:28.968 "superblock": true, 00:17:28.968 "num_base_bdevs": 2, 00:17:28.968 "num_base_bdevs_discovered": 1, 00:17:28.968 "num_base_bdevs_operational": 1, 00:17:28.968 "base_bdevs_list": [ 00:17:28.968 { 00:17:28.968 "name": null, 00:17:28.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.968 "is_configured": false, 00:17:28.968 "data_offset": 2048, 00:17:28.968 "data_size": 63488 00:17:28.968 }, 00:17:28.968 { 00:17:28.968 "name": "BaseBdev2", 00:17:28.968 "uuid": "11cb8562-24cd-4712-b818-40d31078ba37", 00:17:28.968 "is_configured": true, 00:17:28.968 "data_offset": 2048, 00:17:28.968 "data_size": 63488 00:17:28.968 } 00:17:28.968 ] 00:17:28.968 }' 00:17:28.968 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.968 21:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.906 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:29.906 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:29.906 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.906 21:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:29.906 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:29.906 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.906 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:30.164 [2024-07-15 21:31:03.323393] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.164 [2024-07-15 21:31:03.323561] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:30.164 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:30.164 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:30.164 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.164 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 123336 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 123336 ']' 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 123336 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123336 00:17:30.422 killing process with pid 123336 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123336' 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 123336 00:17:30.422 21:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 123336 00:17:30.422 [2024-07-15 21:31:03.698038] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.422 [2024-07-15 21:31:03.698208] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.801 ************************************ 00:17:31.802 END TEST raid_state_function_test_sb 00:17:31.802 ************************************ 00:17:31.802 21:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:31.802 00:17:31.802 real 0m11.860s 00:17:31.802 user 0m20.407s 00:17:31.802 sys 0m1.739s 00:17:31.802 21:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:31.802 21:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.802 21:31:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:31.802 21:31:05 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:17:31.802 21:31:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:31.802 21:31:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:31.802 21:31:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.802 ************************************ 00:17:31.802 START TEST raid_superblock_test 00:17:31.802 ************************************ 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 
-- # local base_bdevs_pt 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=123736 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 123736 /var/tmp/spdk-raid.sock 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 123736 ']' 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:31.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.802 21:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.061 [2024-07-15 21:31:05.191525] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
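The raid_superblock_test run that begins here drives a standalone bdev_svc app over the /var/tmp/spdk-raid.sock JSON-RPC socket. As a reading aid, the following sketch condenses the sequence the log walks through below: bring the app up, build a concat array with an on-disk superblock on top of passthru bdevs, and confirm it comes online. The binary path, socket, bdev names and RPC arguments are the ones visible in the log; the script structure and the rpc_get_methods polling loop (a simplified stand-in for the test's waitforlisten helper) are illustrative assumptions, not the test's actual code.

#!/usr/bin/env bash
# Sketch only: condensed version of the raid_superblock_test setup shown in this log.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"
sock=/var/tmp/spdk-raid.sock

# Start a bare SPDK bdev service with bdev_raid debug logging and wait until
# its JSON-RPC socket answers (the real test uses waitforlisten for this).
"$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
raid_pid=$!
until $rpc -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$raid_pid"   # abort if the app died during startup
    sleep 0.2
done

# Two 32 MiB malloc bdevs, each wrapped in a passthru bdev with a fixed UUID,
# then a concat array with a 64 KiB strip size; -s asks for an on-disk superblock.
$rpc -s "$sock" bdev_malloc_create 32 512 -b malloc1
$rpc -s "$sock" bdev_malloc_create 32 512 -b malloc2
$rpc -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$rpc -s "$sock" bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s

# Same state check the log performs via verify_raid_bdev_state: the array must
# be online with both base bdevs discovered.
info=$($rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state <<<"$info") == online ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 2 ]]

The -s flag is what the second half of this test leans on: after raid_bdev1 and the passthru bdevs are deleted, re-creating pt1 and pt2 is enough for the examine path to find the superblock and bring raid_bdev1 back online without another bdev_raid_create call, which is exactly what the log shows further down.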
00:17:32.061 [2024-07-15 21:31:05.191758] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123736 ] 00:17:32.061 [2024-07-15 21:31:05.355595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.320 [2024-07-15 21:31:05.598960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.580 [2024-07-15 21:31:05.835184] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.840 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:33.100 malloc1 00:17:33.100 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.360 [2024-07-15 21:31:06.509796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.360 [2024-07-15 21:31:06.510027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.360 [2024-07-15 21:31:06.510081] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:33.360 [2024-07-15 21:31:06.510139] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.360 [2024-07-15 21:31:06.512482] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.360 [2024-07-15 21:31:06.512558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.360 pt1 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.360 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:33.620 malloc2 00:17:33.620 21:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.620 [2024-07-15 21:31:06.996465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.620 [2024-07-15 21:31:06.996680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.620 [2024-07-15 21:31:06.996742] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:33.620 [2024-07-15 21:31:06.996784] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.882 [2024-07-15 21:31:06.999099] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.882 [2024-07-15 21:31:06.999174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.882 pt2 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:17:33.882 [2024-07-15 21:31:07.196163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.882 [2024-07-15 21:31:07.198254] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.882 [2024-07-15 21:31:07.198478] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:17:33.882 [2024-07-15 21:31:07.198515] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:33.882 [2024-07-15 21:31:07.198697] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:33.882 [2024-07-15 21:31:07.199049] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:17:33.882 [2024-07-15 21:31:07.199087] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:17:33.882 [2024-07-15 21:31:07.199264] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.882 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.143 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.144 "name": "raid_bdev1", 00:17:34.144 "uuid": "a5adcecd-5dac-4bfa-b946-6ca1c11f657d", 00:17:34.144 "strip_size_kb": 64, 00:17:34.144 "state": "online", 00:17:34.144 "raid_level": "concat", 00:17:34.144 "superblock": true, 00:17:34.144 "num_base_bdevs": 2, 00:17:34.144 "num_base_bdevs_discovered": 2, 00:17:34.144 "num_base_bdevs_operational": 2, 00:17:34.144 "base_bdevs_list": [ 00:17:34.144 { 00:17:34.144 "name": "pt1", 00:17:34.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.144 "is_configured": true, 00:17:34.144 "data_offset": 2048, 00:17:34.144 "data_size": 63488 00:17:34.144 }, 00:17:34.144 { 00:17:34.144 "name": "pt2", 00:17:34.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.144 "is_configured": true, 00:17:34.144 "data_offset": 2048, 00:17:34.144 "data_size": 63488 00:17:34.144 } 00:17:34.144 ] 00:17:34.144 }' 00:17:34.144 21:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.144 21:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:35.082 [2024-07-15 21:31:08.314473] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.082 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:35.082 "name": "raid_bdev1", 00:17:35.082 "aliases": [ 00:17:35.082 "a5adcecd-5dac-4bfa-b946-6ca1c11f657d" 00:17:35.082 ], 00:17:35.082 "product_name": "Raid Volume", 00:17:35.082 "block_size": 512, 00:17:35.082 "num_blocks": 126976, 00:17:35.082 "uuid": "a5adcecd-5dac-4bfa-b946-6ca1c11f657d", 00:17:35.082 "assigned_rate_limits": { 00:17:35.082 "rw_ios_per_sec": 0, 00:17:35.082 "rw_mbytes_per_sec": 0, 00:17:35.082 "r_mbytes_per_sec": 0, 00:17:35.082 "w_mbytes_per_sec": 0 00:17:35.082 }, 
00:17:35.082 "claimed": false, 00:17:35.083 "zoned": false, 00:17:35.083 "supported_io_types": { 00:17:35.083 "read": true, 00:17:35.083 "write": true, 00:17:35.083 "unmap": true, 00:17:35.083 "flush": true, 00:17:35.083 "reset": true, 00:17:35.083 "nvme_admin": false, 00:17:35.083 "nvme_io": false, 00:17:35.083 "nvme_io_md": false, 00:17:35.083 "write_zeroes": true, 00:17:35.083 "zcopy": false, 00:17:35.083 "get_zone_info": false, 00:17:35.083 "zone_management": false, 00:17:35.083 "zone_append": false, 00:17:35.083 "compare": false, 00:17:35.083 "compare_and_write": false, 00:17:35.083 "abort": false, 00:17:35.083 "seek_hole": false, 00:17:35.083 "seek_data": false, 00:17:35.083 "copy": false, 00:17:35.083 "nvme_iov_md": false 00:17:35.083 }, 00:17:35.083 "memory_domains": [ 00:17:35.083 { 00:17:35.083 "dma_device_id": "system", 00:17:35.083 "dma_device_type": 1 00:17:35.083 }, 00:17:35.083 { 00:17:35.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.083 "dma_device_type": 2 00:17:35.083 }, 00:17:35.083 { 00:17:35.083 "dma_device_id": "system", 00:17:35.083 "dma_device_type": 1 00:17:35.083 }, 00:17:35.083 { 00:17:35.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.083 "dma_device_type": 2 00:17:35.083 } 00:17:35.083 ], 00:17:35.083 "driver_specific": { 00:17:35.083 "raid": { 00:17:35.083 "uuid": "a5adcecd-5dac-4bfa-b946-6ca1c11f657d", 00:17:35.083 "strip_size_kb": 64, 00:17:35.083 "state": "online", 00:17:35.083 "raid_level": "concat", 00:17:35.083 "superblock": true, 00:17:35.083 "num_base_bdevs": 2, 00:17:35.083 "num_base_bdevs_discovered": 2, 00:17:35.083 "num_base_bdevs_operational": 2, 00:17:35.083 "base_bdevs_list": [ 00:17:35.083 { 00:17:35.083 "name": "pt1", 00:17:35.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.083 "is_configured": true, 00:17:35.083 "data_offset": 2048, 00:17:35.083 "data_size": 63488 00:17:35.083 }, 00:17:35.083 { 00:17:35.083 "name": "pt2", 00:17:35.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.083 "is_configured": true, 00:17:35.083 "data_offset": 2048, 00:17:35.083 "data_size": 63488 00:17:35.083 } 00:17:35.083 ] 00:17:35.083 } 00:17:35.083 } 00:17:35.083 }' 00:17:35.083 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:35.083 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:35.083 pt2' 00:17:35.083 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:35.083 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:35.083 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:35.343 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:35.343 "name": "pt1", 00:17:35.343 "aliases": [ 00:17:35.343 "00000000-0000-0000-0000-000000000001" 00:17:35.343 ], 00:17:35.343 "product_name": "passthru", 00:17:35.343 "block_size": 512, 00:17:35.343 "num_blocks": 65536, 00:17:35.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.343 "assigned_rate_limits": { 00:17:35.343 "rw_ios_per_sec": 0, 00:17:35.343 "rw_mbytes_per_sec": 0, 00:17:35.343 "r_mbytes_per_sec": 0, 00:17:35.343 "w_mbytes_per_sec": 0 00:17:35.343 }, 00:17:35.343 "claimed": true, 00:17:35.343 "claim_type": "exclusive_write", 00:17:35.343 "zoned": false, 00:17:35.343 
"supported_io_types": { 00:17:35.343 "read": true, 00:17:35.343 "write": true, 00:17:35.343 "unmap": true, 00:17:35.343 "flush": true, 00:17:35.343 "reset": true, 00:17:35.343 "nvme_admin": false, 00:17:35.343 "nvme_io": false, 00:17:35.343 "nvme_io_md": false, 00:17:35.343 "write_zeroes": true, 00:17:35.343 "zcopy": true, 00:17:35.343 "get_zone_info": false, 00:17:35.343 "zone_management": false, 00:17:35.343 "zone_append": false, 00:17:35.343 "compare": false, 00:17:35.343 "compare_and_write": false, 00:17:35.343 "abort": true, 00:17:35.343 "seek_hole": false, 00:17:35.343 "seek_data": false, 00:17:35.343 "copy": true, 00:17:35.343 "nvme_iov_md": false 00:17:35.343 }, 00:17:35.343 "memory_domains": [ 00:17:35.343 { 00:17:35.343 "dma_device_id": "system", 00:17:35.343 "dma_device_type": 1 00:17:35.343 }, 00:17:35.343 { 00:17:35.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.343 "dma_device_type": 2 00:17:35.343 } 00:17:35.343 ], 00:17:35.343 "driver_specific": { 00:17:35.343 "passthru": { 00:17:35.343 "name": "pt1", 00:17:35.343 "base_bdev_name": "malloc1" 00:17:35.343 } 00:17:35.343 } 00:17:35.343 }' 00:17:35.343 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.343 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.343 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:35.343 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.602 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.602 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:35.602 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.602 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.602 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:35.602 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.861 21:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.861 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:35.861 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:35.861 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:35.861 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.120 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:36.120 "name": "pt2", 00:17:36.120 "aliases": [ 00:17:36.120 "00000000-0000-0000-0000-000000000002" 00:17:36.120 ], 00:17:36.120 "product_name": "passthru", 00:17:36.120 "block_size": 512, 00:17:36.120 "num_blocks": 65536, 00:17:36.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.120 "assigned_rate_limits": { 00:17:36.120 "rw_ios_per_sec": 0, 00:17:36.120 "rw_mbytes_per_sec": 0, 00:17:36.120 "r_mbytes_per_sec": 0, 00:17:36.120 "w_mbytes_per_sec": 0 00:17:36.120 }, 00:17:36.120 "claimed": true, 00:17:36.120 "claim_type": "exclusive_write", 00:17:36.120 "zoned": false, 00:17:36.120 "supported_io_types": { 00:17:36.120 "read": true, 00:17:36.120 "write": true, 00:17:36.120 "unmap": true, 00:17:36.120 "flush": true, 00:17:36.120 
"reset": true, 00:17:36.120 "nvme_admin": false, 00:17:36.120 "nvme_io": false, 00:17:36.120 "nvme_io_md": false, 00:17:36.120 "write_zeroes": true, 00:17:36.120 "zcopy": true, 00:17:36.120 "get_zone_info": false, 00:17:36.120 "zone_management": false, 00:17:36.120 "zone_append": false, 00:17:36.120 "compare": false, 00:17:36.120 "compare_and_write": false, 00:17:36.120 "abort": true, 00:17:36.120 "seek_hole": false, 00:17:36.120 "seek_data": false, 00:17:36.120 "copy": true, 00:17:36.120 "nvme_iov_md": false 00:17:36.120 }, 00:17:36.120 "memory_domains": [ 00:17:36.120 { 00:17:36.120 "dma_device_id": "system", 00:17:36.120 "dma_device_type": 1 00:17:36.120 }, 00:17:36.120 { 00:17:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.120 "dma_device_type": 2 00:17:36.120 } 00:17:36.120 ], 00:17:36.120 "driver_specific": { 00:17:36.120 "passthru": { 00:17:36.120 "name": "pt2", 00:17:36.120 "base_bdev_name": "malloc2" 00:17:36.120 } 00:17:36.120 } 00:17:36.120 }' 00:17:36.120 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.120 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.120 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:36.120 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.120 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.120 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:36.120 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.380 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.380 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:36.380 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.380 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.380 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:36.380 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:36.380 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:36.639 [2024-07-15 21:31:09.935750] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.639 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a5adcecd-5dac-4bfa-b946-6ca1c11f657d 00:17:36.639 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a5adcecd-5dac-4bfa-b946-6ca1c11f657d ']' 00:17:36.639 21:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:36.898 [2024-07-15 21:31:10.214979] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.898 [2024-07-15 21:31:10.215106] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.898 [2024-07-15 21:31:10.215239] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.898 [2024-07-15 21:31:10.215312] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:36.898 [2024-07-15 21:31:10.215332] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:17:36.898 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.898 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:37.156 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:37.156 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:37.156 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:37.156 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:37.416 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:37.416 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:37.675 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:37.675 21:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:37.935 [2024-07-15 21:31:11.277139] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:37.935 [2024-07-15 21:31:11.279129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:37.935 [2024-07-15 21:31:11.279237] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:37.935 [2024-07-15 21:31:11.279346] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:37.935 [2024-07-15 21:31:11.279405] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.935 [2024-07-15 21:31:11.279424] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:17:37.935 request: 00:17:37.935 { 00:17:37.935 "name": "raid_bdev1", 00:17:37.935 "raid_level": "concat", 00:17:37.935 "base_bdevs": [ 00:17:37.935 "malloc1", 00:17:37.935 "malloc2" 00:17:37.935 ], 00:17:37.935 "strip_size_kb": 64, 00:17:37.935 "superblock": false, 00:17:37.935 "method": "bdev_raid_create", 00:17:37.935 "req_id": 1 00:17:37.935 } 00:17:37.935 Got JSON-RPC error response 00:17:37.935 response: 00:17:37.935 { 00:17:37.935 "code": -17, 00:17:37.935 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:37.935 } 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.935 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:38.195 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:38.195 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:38.195 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:38.455 [2024-07-15 21:31:11.688453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:38.455 [2024-07-15 21:31:11.688647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.455 [2024-07-15 21:31:11.688693] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:38.455 [2024-07-15 21:31:11.688734] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.455 [2024-07-15 21:31:11.691202] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.455 [2024-07-15 21:31:11.691299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:38.455 [2024-07-15 21:31:11.691447] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:38.455 [2024-07-15 21:31:11.691521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:38.455 pt1 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.455 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.714 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.714 "name": "raid_bdev1", 00:17:38.714 "uuid": "a5adcecd-5dac-4bfa-b946-6ca1c11f657d", 00:17:38.714 "strip_size_kb": 64, 00:17:38.714 "state": "configuring", 00:17:38.714 "raid_level": "concat", 00:17:38.714 "superblock": true, 00:17:38.714 "num_base_bdevs": 2, 00:17:38.714 "num_base_bdevs_discovered": 1, 00:17:38.714 "num_base_bdevs_operational": 2, 00:17:38.714 "base_bdevs_list": [ 00:17:38.714 { 00:17:38.714 "name": "pt1", 00:17:38.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:38.714 "is_configured": true, 00:17:38.714 "data_offset": 2048, 00:17:38.714 "data_size": 63488 00:17:38.714 }, 00:17:38.714 { 00:17:38.714 "name": null, 00:17:38.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.714 "is_configured": false, 00:17:38.714 "data_offset": 2048, 00:17:38.714 "data_size": 63488 00:17:38.714 } 00:17:38.714 ] 00:17:38.714 }' 00:17:38.714 21:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.714 21:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.283 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:39.283 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:39.283 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:39.283 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:39.605 [2024-07-15 21:31:12.778575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:39.605 [2024-07-15 21:31:12.778804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.605 [2024-07-15 21:31:12.778854] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:39.605 [2024-07-15 21:31:12.778896] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.605 [2024-07-15 
21:31:12.779472] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.605 [2024-07-15 21:31:12.779549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:39.605 [2024-07-15 21:31:12.779689] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:39.605 [2024-07-15 21:31:12.779735] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:39.605 [2024-07-15 21:31:12.779879] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:17:39.605 [2024-07-15 21:31:12.779907] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:39.605 [2024-07-15 21:31:12.780022] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:39.605 [2024-07-15 21:31:12.780337] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:17:39.605 [2024-07-15 21:31:12.780375] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:17:39.605 [2024-07-15 21:31:12.780543] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.605 pt2 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.605 21:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.865 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.865 "name": "raid_bdev1", 00:17:39.865 "uuid": "a5adcecd-5dac-4bfa-b946-6ca1c11f657d", 00:17:39.865 "strip_size_kb": 64, 00:17:39.865 "state": "online", 00:17:39.865 "raid_level": "concat", 00:17:39.865 "superblock": true, 00:17:39.865 "num_base_bdevs": 2, 00:17:39.865 "num_base_bdevs_discovered": 2, 00:17:39.865 "num_base_bdevs_operational": 2, 00:17:39.865 "base_bdevs_list": [ 00:17:39.865 { 00:17:39.865 "name": "pt1", 00:17:39.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.865 "is_configured": true, 00:17:39.865 "data_offset": 2048, 00:17:39.865 
"data_size": 63488 00:17:39.865 }, 00:17:39.865 { 00:17:39.865 "name": "pt2", 00:17:39.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.865 "is_configured": true, 00:17:39.865 "data_offset": 2048, 00:17:39.865 "data_size": 63488 00:17:39.865 } 00:17:39.865 ] 00:17:39.865 }' 00:17:39.865 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.865 21:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.432 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:40.432 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:40.432 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:40.432 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:40.432 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:40.432 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:40.432 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:40.432 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:40.691 [2024-07-15 21:31:13.916974] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.691 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:40.691 "name": "raid_bdev1", 00:17:40.691 "aliases": [ 00:17:40.691 "a5adcecd-5dac-4bfa-b946-6ca1c11f657d" 00:17:40.691 ], 00:17:40.691 "product_name": "Raid Volume", 00:17:40.691 "block_size": 512, 00:17:40.691 "num_blocks": 126976, 00:17:40.691 "uuid": "a5adcecd-5dac-4bfa-b946-6ca1c11f657d", 00:17:40.691 "assigned_rate_limits": { 00:17:40.691 "rw_ios_per_sec": 0, 00:17:40.691 "rw_mbytes_per_sec": 0, 00:17:40.691 "r_mbytes_per_sec": 0, 00:17:40.691 "w_mbytes_per_sec": 0 00:17:40.691 }, 00:17:40.691 "claimed": false, 00:17:40.691 "zoned": false, 00:17:40.691 "supported_io_types": { 00:17:40.691 "read": true, 00:17:40.691 "write": true, 00:17:40.691 "unmap": true, 00:17:40.691 "flush": true, 00:17:40.691 "reset": true, 00:17:40.691 "nvme_admin": false, 00:17:40.691 "nvme_io": false, 00:17:40.691 "nvme_io_md": false, 00:17:40.691 "write_zeroes": true, 00:17:40.691 "zcopy": false, 00:17:40.691 "get_zone_info": false, 00:17:40.691 "zone_management": false, 00:17:40.691 "zone_append": false, 00:17:40.691 "compare": false, 00:17:40.691 "compare_and_write": false, 00:17:40.691 "abort": false, 00:17:40.691 "seek_hole": false, 00:17:40.691 "seek_data": false, 00:17:40.691 "copy": false, 00:17:40.691 "nvme_iov_md": false 00:17:40.691 }, 00:17:40.691 "memory_domains": [ 00:17:40.691 { 00:17:40.691 "dma_device_id": "system", 00:17:40.691 "dma_device_type": 1 00:17:40.691 }, 00:17:40.691 { 00:17:40.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.691 "dma_device_type": 2 00:17:40.691 }, 00:17:40.691 { 00:17:40.691 "dma_device_id": "system", 00:17:40.691 "dma_device_type": 1 00:17:40.691 }, 00:17:40.691 { 00:17:40.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.691 "dma_device_type": 2 00:17:40.691 } 00:17:40.691 ], 00:17:40.691 "driver_specific": { 00:17:40.691 "raid": { 00:17:40.691 "uuid": "a5adcecd-5dac-4bfa-b946-6ca1c11f657d", 00:17:40.692 "strip_size_kb": 64, 00:17:40.692 "state": 
"online", 00:17:40.692 "raid_level": "concat", 00:17:40.692 "superblock": true, 00:17:40.692 "num_base_bdevs": 2, 00:17:40.692 "num_base_bdevs_discovered": 2, 00:17:40.692 "num_base_bdevs_operational": 2, 00:17:40.692 "base_bdevs_list": [ 00:17:40.692 { 00:17:40.692 "name": "pt1", 00:17:40.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.692 "is_configured": true, 00:17:40.692 "data_offset": 2048, 00:17:40.692 "data_size": 63488 00:17:40.692 }, 00:17:40.692 { 00:17:40.692 "name": "pt2", 00:17:40.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.692 "is_configured": true, 00:17:40.692 "data_offset": 2048, 00:17:40.692 "data_size": 63488 00:17:40.692 } 00:17:40.692 ] 00:17:40.692 } 00:17:40.692 } 00:17:40.692 }' 00:17:40.692 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:40.692 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:40.692 pt2' 00:17:40.692 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:40.692 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:40.692 21:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:40.950 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:40.950 "name": "pt1", 00:17:40.950 "aliases": [ 00:17:40.950 "00000000-0000-0000-0000-000000000001" 00:17:40.950 ], 00:17:40.950 "product_name": "passthru", 00:17:40.950 "block_size": 512, 00:17:40.950 "num_blocks": 65536, 00:17:40.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.950 "assigned_rate_limits": { 00:17:40.950 "rw_ios_per_sec": 0, 00:17:40.950 "rw_mbytes_per_sec": 0, 00:17:40.950 "r_mbytes_per_sec": 0, 00:17:40.950 "w_mbytes_per_sec": 0 00:17:40.950 }, 00:17:40.950 "claimed": true, 00:17:40.950 "claim_type": "exclusive_write", 00:17:40.950 "zoned": false, 00:17:40.950 "supported_io_types": { 00:17:40.950 "read": true, 00:17:40.950 "write": true, 00:17:40.950 "unmap": true, 00:17:40.950 "flush": true, 00:17:40.950 "reset": true, 00:17:40.950 "nvme_admin": false, 00:17:40.950 "nvme_io": false, 00:17:40.950 "nvme_io_md": false, 00:17:40.950 "write_zeroes": true, 00:17:40.951 "zcopy": true, 00:17:40.951 "get_zone_info": false, 00:17:40.951 "zone_management": false, 00:17:40.951 "zone_append": false, 00:17:40.951 "compare": false, 00:17:40.951 "compare_and_write": false, 00:17:40.951 "abort": true, 00:17:40.951 "seek_hole": false, 00:17:40.951 "seek_data": false, 00:17:40.951 "copy": true, 00:17:40.951 "nvme_iov_md": false 00:17:40.951 }, 00:17:40.951 "memory_domains": [ 00:17:40.951 { 00:17:40.951 "dma_device_id": "system", 00:17:40.951 "dma_device_type": 1 00:17:40.951 }, 00:17:40.951 { 00:17:40.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.951 "dma_device_type": 2 00:17:40.951 } 00:17:40.951 ], 00:17:40.951 "driver_specific": { 00:17:40.951 "passthru": { 00:17:40.951 "name": "pt1", 00:17:40.951 "base_bdev_name": "malloc1" 00:17:40.951 } 00:17:40.951 } 00:17:40.951 }' 00:17:40.951 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.951 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.951 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:17:40.951 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.209 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.209 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.209 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.209 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.209 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.209 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.468 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.468 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.468 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:41.468 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:41.468 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:41.727 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:41.727 "name": "pt2", 00:17:41.727 "aliases": [ 00:17:41.727 "00000000-0000-0000-0000-000000000002" 00:17:41.727 ], 00:17:41.727 "product_name": "passthru", 00:17:41.727 "block_size": 512, 00:17:41.727 "num_blocks": 65536, 00:17:41.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.727 "assigned_rate_limits": { 00:17:41.727 "rw_ios_per_sec": 0, 00:17:41.727 "rw_mbytes_per_sec": 0, 00:17:41.727 "r_mbytes_per_sec": 0, 00:17:41.727 "w_mbytes_per_sec": 0 00:17:41.727 }, 00:17:41.727 "claimed": true, 00:17:41.727 "claim_type": "exclusive_write", 00:17:41.727 "zoned": false, 00:17:41.727 "supported_io_types": { 00:17:41.727 "read": true, 00:17:41.727 "write": true, 00:17:41.727 "unmap": true, 00:17:41.727 "flush": true, 00:17:41.727 "reset": true, 00:17:41.727 "nvme_admin": false, 00:17:41.727 "nvme_io": false, 00:17:41.727 "nvme_io_md": false, 00:17:41.727 "write_zeroes": true, 00:17:41.727 "zcopy": true, 00:17:41.727 "get_zone_info": false, 00:17:41.727 "zone_management": false, 00:17:41.727 "zone_append": false, 00:17:41.727 "compare": false, 00:17:41.727 "compare_and_write": false, 00:17:41.727 "abort": true, 00:17:41.727 "seek_hole": false, 00:17:41.727 "seek_data": false, 00:17:41.727 "copy": true, 00:17:41.727 "nvme_iov_md": false 00:17:41.727 }, 00:17:41.727 "memory_domains": [ 00:17:41.727 { 00:17:41.727 "dma_device_id": "system", 00:17:41.727 "dma_device_type": 1 00:17:41.727 }, 00:17:41.727 { 00:17:41.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.727 "dma_device_type": 2 00:17:41.727 } 00:17:41.727 ], 00:17:41.727 "driver_specific": { 00:17:41.727 "passthru": { 00:17:41.727 "name": "pt2", 00:17:41.727 "base_bdev_name": "malloc2" 00:17:41.727 } 00:17:41.727 } 00:17:41.727 }' 00:17:41.727 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.727 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.727 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:41.727 21:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.727 21:31:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.727 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.727 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.985 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.985 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.985 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.985 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.985 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.985 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:41.985 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:42.245 [2024-07-15 21:31:15.522117] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a5adcecd-5dac-4bfa-b946-6ca1c11f657d '!=' a5adcecd-5dac-4bfa-b946-6ca1c11f657d ']' 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 123736 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 123736 ']' 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 123736 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123736 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123736' 00:17:42.245 killing process with pid 123736 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 123736 00:17:42.245 21:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 123736 00:17:42.245 [2024-07-15 21:31:15.573976] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.245 [2024-07-15 21:31:15.574066] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.245 [2024-07-15 21:31:15.574171] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.245 [2024-07-15 21:31:15.574207] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:17:42.504 [2024-07-15 21:31:15.783767] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.881 ************************************ 
00:17:43.881 END TEST raid_superblock_test 00:17:43.881 ************************************ 00:17:43.881 21:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:43.881 00:17:43.881 real 0m12.028s 00:17:43.881 user 0m20.927s 00:17:43.881 sys 0m1.618s 00:17:43.881 21:31:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.881 21:31:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.881 21:31:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:43.881 21:31:17 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:17:43.881 21:31:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:43.881 21:31:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.882 21:31:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:43.882 ************************************ 00:17:43.882 START TEST raid_read_error_test 00:17:43.882 ************************************ 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:43.882 21:31:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.xos8H1yY1P 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=124144 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 124144 /var/tmp/spdk-raid.sock 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 124144 ']' 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:43.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.882 21:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.141 [2024-07-15 21:31:17.301849] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:17:44.141 [2024-07-15 21:31:17.302100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124144 ] 00:17:44.141 [2024-07-15 21:31:17.465893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.400 [2024-07-15 21:31:17.710735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.660 [2024-07-15 21:31:17.934305] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.920 21:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.920 21:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:44.920 21:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:44.920 21:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:45.179 BaseBdev1_malloc 00:17:45.179 21:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:45.179 true 00:17:45.179 21:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:45.438 [2024-07-15 21:31:18.714788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:45.438 [2024-07-15 21:31:18.715012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.438 [2024-07-15 21:31:18.715077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:45.438 [2024-07-15 21:31:18.715117] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.438 [2024-07-15 21:31:18.717613] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.438 [2024-07-15 21:31:18.717696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.438 BaseBdev1 00:17:45.438 21:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:45.438 21:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:45.696 BaseBdev2_malloc 00:17:45.696 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:45.955 true 00:17:45.955 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:46.214 [2024-07-15 21:31:19.370654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:46.214 [2024-07-15 21:31:19.370881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.214 [2024-07-15 21:31:19.370939] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:46.214 [2024-07-15 21:31:19.370979] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.214 [2024-07-15 21:31:19.373335] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.214 [2024-07-15 21:31:19.373416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:46.214 BaseBdev2 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:46.214 [2024-07-15 21:31:19.574436] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.214 [2024-07-15 21:31:19.576544] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.214 [2024-07-15 21:31:19.576828] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:46.214 [2024-07-15 21:31:19.576875] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:46.214 [2024-07-15 21:31:19.577040] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:46.214 [2024-07-15 21:31:19.577455] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:46.214 [2024-07-15 21:31:19.577498] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:46.214 [2024-07-15 21:31:19.577682] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.214 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.472 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.472 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.472 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.472 "name": "raid_bdev1", 00:17:46.472 "uuid": "cb132f1b-079b-4e28-a6d1-2c8e50955132", 00:17:46.472 "strip_size_kb": 64, 00:17:46.472 "state": "online", 00:17:46.472 "raid_level": "concat", 00:17:46.472 "superblock": true, 00:17:46.472 "num_base_bdevs": 2, 00:17:46.472 "num_base_bdevs_discovered": 2, 00:17:46.472 "num_base_bdevs_operational": 2, 00:17:46.472 "base_bdevs_list": [ 00:17:46.472 { 00:17:46.472 "name": "BaseBdev1", 00:17:46.472 "uuid": "343563ef-308c-524d-aab8-a597b5e0ea1c", 00:17:46.472 "is_configured": true, 00:17:46.472 "data_offset": 2048, 00:17:46.472 "data_size": 63488 00:17:46.472 }, 00:17:46.472 { 00:17:46.472 "name": "BaseBdev2", 00:17:46.472 "uuid": "299fbe8b-eacd-5a60-9956-46db8c1e148f", 00:17:46.472 "is_configured": true, 00:17:46.472 "data_offset": 2048, 00:17:46.472 "data_size": 63488 00:17:46.472 } 00:17:46.472 ] 00:17:46.472 }' 00:17:46.472 21:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.472 21:31:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.408 21:31:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:47.408 21:31:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:47.408 [2024-07-15 21:31:20.506238] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.342 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.600 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.600 "name": "raid_bdev1", 00:17:48.600 "uuid": "cb132f1b-079b-4e28-a6d1-2c8e50955132", 00:17:48.600 "strip_size_kb": 64, 00:17:48.600 "state": "online", 00:17:48.600 "raid_level": "concat", 00:17:48.600 "superblock": true, 00:17:48.600 "num_base_bdevs": 2, 00:17:48.600 "num_base_bdevs_discovered": 2, 00:17:48.600 "num_base_bdevs_operational": 2, 00:17:48.600 "base_bdevs_list": [ 00:17:48.600 { 00:17:48.600 "name": "BaseBdev1", 00:17:48.600 "uuid": "343563ef-308c-524d-aab8-a597b5e0ea1c", 00:17:48.600 "is_configured": true, 00:17:48.600 "data_offset": 2048, 00:17:48.600 "data_size": 63488 00:17:48.600 }, 00:17:48.600 { 00:17:48.600 "name": "BaseBdev2", 00:17:48.600 "uuid": "299fbe8b-eacd-5a60-9956-46db8c1e148f", 00:17:48.600 "is_configured": true, 00:17:48.600 "data_offset": 2048, 00:17:48.600 "data_size": 63488 00:17:48.600 } 00:17:48.600 ] 00:17:48.600 }' 00:17:48.600 21:31:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.600 21:31:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.167 21:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:49.423 [2024-07-15 21:31:22.674341] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.423 [2024-07-15 21:31:22.674460] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.423 [2024-07-15 21:31:22.677227] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.423 [2024-07-15 21:31:22.677332] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.423 [2024-07-15 21:31:22.677388] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.423 [2024-07-15 21:31:22.677423] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:49.423 0 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 124144 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 124144 ']' 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 124144 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:49.423 
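The read-error pass traced above boils down to a few RPCs: arm a read failure on the error bdev underneath BaseBdev1 while bdevperf runs its randrw workload, then confirm the concat raid stays online with both base bdevs still discovered, since concat has no redundancy and simply surfaces the failed read to bdevperf. A rough bash sketch of that injection and state check, reusing the socket and names from this run:

# Sketch of the injection/verification step (bdev_raid.sh@827 and @835 above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Make subsequent reads fail on the error bdev under BaseBdev1.
"$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc read failure
# concat cannot mask the error, so the raid must remain online with 2/2 base bdevs.
"$rpc" -s "$sock" bdev_raid_get_bdevs all |
    jq -e '.[] | select(.name == "raid_bdev1")
           | .state == "online" and .num_base_bdevs_discovered == 2'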
21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124144 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124144' 00:17:49.423 killing process with pid 124144 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 124144 00:17:49.423 21:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 124144 00:17:49.423 [2024-07-15 21:31:22.712876] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.695 [2024-07-15 21:31:22.847459] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.xos8H1yY1P 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:51.067 ************************************ 00:17:51.067 END TEST raid_read_error_test 00:17:51.067 ************************************ 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:17:51.067 00:17:51.067 real 0m6.970s 00:17:51.067 user 0m10.004s 00:17:51.067 sys 0m0.954s 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.067 21:31:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.067 21:31:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:51.067 21:31:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:17:51.067 21:31:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:51.067 21:31:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.067 21:31:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.067 ************************************ 00:17:51.067 START TEST raid_write_error_test 00:17:51.067 ************************************ 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:51.067 21:31:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0uIWesYBet 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=124334 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 124334 /var/tmp/spdk-raid.sock 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 124334 ']' 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.067 21:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.067 [2024-07-15 21:31:24.344050] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
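Before bdevperf can exercise any I/O errors, the trace below builds the same three-bdev stack under each RAID member that the read test used: a 32 MiB malloc bdev, an error bdev on top of it (so failures can be injected later), and a passthru bdev that the raid module claims. Both passthru bdevs are then combined into a concat raid with a superblock. A condensed bash sketch, assuming the socket and names from this run:

# Sketch of the base-bdev stack construction (bdev_raid.sh@812-815 and @819 below).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev${i}_malloc    # 32 MiB, 512 B blocks
    "$rpc" -s "$sock" bdev_error_create BaseBdev${i}_malloc               # exposes EE_BaseBdev${i}_malloc
    "$rpc" -s "$sock" bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done
# Assemble a concat raid with a 64 KiB strip size and an on-disk superblock (-s).
"$rpc" -s "$sock" bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s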
00:17:51.067 [2024-07-15 21:31:24.344264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124334 ] 00:17:51.352 [2024-07-15 21:31:24.504236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.352 [2024-07-15 21:31:24.708369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.610 [2024-07-15 21:31:24.923165] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.869 21:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.869 21:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:51.869 21:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:51.869 21:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:52.136 BaseBdev1_malloc 00:17:52.136 21:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:52.396 true 00:17:52.396 21:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:52.654 [2024-07-15 21:31:25.859747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:52.654 [2024-07-15 21:31:25.859941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.654 [2024-07-15 21:31:25.860007] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:52.654 [2024-07-15 21:31:25.860059] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.654 [2024-07-15 21:31:25.862390] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.654 [2024-07-15 21:31:25.862490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:52.654 BaseBdev1 00:17:52.654 21:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:52.654 21:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:52.912 BaseBdev2_malloc 00:17:52.912 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:53.170 true 00:17:53.170 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:53.170 [2024-07-15 21:31:26.453426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:53.170 [2024-07-15 21:31:26.453574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.170 [2024-07-15 21:31:26.453625] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:53.170 [2024-07-15 
21:31:26.453661] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.170 [2024-07-15 21:31:26.455458] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.170 [2024-07-15 21:31:26.455528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:53.170 BaseBdev2 00:17:53.170 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:53.429 [2024-07-15 21:31:26.645498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.429 [2024-07-15 21:31:26.647229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.429 [2024-07-15 21:31:26.647513] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:53.429 [2024-07-15 21:31:26.647561] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:53.429 [2024-07-15 21:31:26.647709] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:53.429 [2024-07-15 21:31:26.648068] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:53.429 [2024-07-15 21:31:26.648104] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:53.429 [2024-07-15 21:31:26.648292] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.429 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.687 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.687 "name": "raid_bdev1", 00:17:53.687 "uuid": "4d2f8dce-537f-4b84-8ad0-560928c9f2b4", 00:17:53.687 "strip_size_kb": 64, 00:17:53.687 "state": "online", 00:17:53.687 "raid_level": "concat", 00:17:53.687 "superblock": true, 00:17:53.687 "num_base_bdevs": 2, 00:17:53.687 "num_base_bdevs_discovered": 2, 00:17:53.687 "num_base_bdevs_operational": 2, 00:17:53.687 "base_bdevs_list": [ 00:17:53.687 { 
00:17:53.687 "name": "BaseBdev1", 00:17:53.687 "uuid": "166750aa-d9b3-542f-8d6d-df3a2d0ac9b2", 00:17:53.687 "is_configured": true, 00:17:53.687 "data_offset": 2048, 00:17:53.687 "data_size": 63488 00:17:53.687 }, 00:17:53.687 { 00:17:53.687 "name": "BaseBdev2", 00:17:53.687 "uuid": "5f928cc9-695f-5e45-b5bb-974f1aaf6df8", 00:17:53.687 "is_configured": true, 00:17:53.687 "data_offset": 2048, 00:17:53.687 "data_size": 63488 00:17:53.687 } 00:17:53.687 ] 00:17:53.687 }' 00:17:53.687 21:31:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.687 21:31:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.254 21:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:54.254 21:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:54.254 [2024-07-15 21:31:27.437295] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.213 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.470 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.470 "name": "raid_bdev1", 00:17:55.470 "uuid": "4d2f8dce-537f-4b84-8ad0-560928c9f2b4", 00:17:55.470 "strip_size_kb": 64, 00:17:55.470 "state": "online", 00:17:55.470 "raid_level": "concat", 00:17:55.471 "superblock": true, 00:17:55.471 "num_base_bdevs": 2, 00:17:55.471 "num_base_bdevs_discovered": 2, 00:17:55.471 "num_base_bdevs_operational": 2, 00:17:55.471 "base_bdevs_list": [ 00:17:55.471 { 
00:17:55.471 "name": "BaseBdev1", 00:17:55.471 "uuid": "166750aa-d9b3-542f-8d6d-df3a2d0ac9b2", 00:17:55.471 "is_configured": true, 00:17:55.471 "data_offset": 2048, 00:17:55.471 "data_size": 63488 00:17:55.471 }, 00:17:55.471 { 00:17:55.471 "name": "BaseBdev2", 00:17:55.471 "uuid": "5f928cc9-695f-5e45-b5bb-974f1aaf6df8", 00:17:55.471 "is_configured": true, 00:17:55.471 "data_offset": 2048, 00:17:55.471 "data_size": 63488 00:17:55.471 } 00:17:55.471 ] 00:17:55.471 }' 00:17:55.471 21:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.471 21:31:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.038 21:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:56.297 [2024-07-15 21:31:29.539314] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.297 [2024-07-15 21:31:29.539414] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.297 [2024-07-15 21:31:29.541931] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.297 [2024-07-15 21:31:29.542001] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.297 [2024-07-15 21:31:29.542042] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.297 [2024-07-15 21:31:29.542091] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:56.297 0 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 124334 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 124334 ']' 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 124334 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124334 00:17:56.297 killing process with pid 124334 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124334' 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 124334 00:17:56.297 21:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 124334 00:17:56.297 [2024-07-15 21:31:29.578335] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.555 [2024-07-15 21:31:29.699112] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0uIWesYBet 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:57.934 ************************************ 00:17:57.934 END TEST raid_write_error_test 
00:17:57.934 ************************************ 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:17:57.934 00:17:57.934 real 0m6.688s 00:17:57.934 user 0m9.750s 00:17:57.934 sys 0m0.744s 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.934 21:31:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.934 21:31:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:57.934 21:31:31 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:57.934 21:31:31 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:17:57.934 21:31:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:57.934 21:31:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.934 21:31:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.934 ************************************ 00:17:57.934 START TEST raid_state_function_test 00:17:57.934 ************************************ 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=124531 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:57.934 Process raid pid: 124531 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124531' 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 124531 /var/tmp/spdk-raid.sock 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 124531 ']' 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:57.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.934 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.935 [2024-07-15 21:31:31.097257] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
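raid_state_function_test drives the raid bdev through its lifecycle with a bare bdev_svc app instead of bdevperf. The first check, traced below, relies on the fact that bdev_raid_create accepts base bdev names that do not exist yet: the new Existed_Raid simply waits for them and must report the "configuring" state with zero base bdevs discovered. A minimal bash sketch of that step, assuming the bdev_svc app from this run is listening on /var/tmp/spdk-raid.sock:

# Sketch of the first state check (bdev_raid.sh@250-251 below).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# BaseBdev1/BaseBdev2 have not been created yet, so the raid stays in "configuring".
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
"$rpc" -s "$sock" bdev_raid_get_bdevs all |
    jq -e '.[] | select(.name == "Existed_Raid")
           | .state == "configuring" and .num_base_bdevs_discovered == 0'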
00:17:57.935 [2024-07-15 21:31:31.097503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.935 [2024-07-15 21:31:31.255718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.193 [2024-07-15 21:31:31.450701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.452 [2024-07-15 21:31:31.646794] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.768 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.768 21:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:58.768 21:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:58.768 [2024-07-15 21:31:32.105496] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:58.768 [2024-07-15 21:31:32.105649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:58.768 [2024-07-15 21:31:32.105683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:58.768 [2024-07-15 21:31:32.105736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.768 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.027 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.027 "name": "Existed_Raid", 00:17:59.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.027 "strip_size_kb": 0, 00:17:59.027 "state": "configuring", 00:17:59.027 "raid_level": "raid1", 00:17:59.027 "superblock": false, 00:17:59.027 "num_base_bdevs": 2, 00:17:59.027 "num_base_bdevs_discovered": 0, 00:17:59.027 "num_base_bdevs_operational": 2, 00:17:59.027 "base_bdevs_list": [ 
00:17:59.027 { 00:17:59.027 "name": "BaseBdev1", 00:17:59.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.027 "is_configured": false, 00:17:59.027 "data_offset": 0, 00:17:59.027 "data_size": 0 00:17:59.027 }, 00:17:59.027 { 00:17:59.027 "name": "BaseBdev2", 00:17:59.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.027 "is_configured": false, 00:17:59.027 "data_offset": 0, 00:17:59.027 "data_size": 0 00:17:59.027 } 00:17:59.027 ] 00:17:59.027 }' 00:17:59.027 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.027 21:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.593 21:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:59.852 [2024-07-15 21:31:33.139638] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:59.852 [2024-07-15 21:31:33.139735] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:59.852 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:00.114 [2024-07-15 21:31:33.331309] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.114 [2024-07-15 21:31:33.331421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.114 [2024-07-15 21:31:33.331445] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.114 [2024-07-15 21:31:33.331478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.114 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:00.371 [2024-07-15 21:31:33.564978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.371 BaseBdev1 00:18:00.371 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:00.371 21:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:00.371 21:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:00.371 21:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:00.371 21:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:00.371 21:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:00.371 21:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.630 [ 00:18:00.630 { 00:18:00.630 "name": "BaseBdev1", 00:18:00.630 "aliases": [ 00:18:00.630 "4fa70dc5-3879-4a47-a390-69eded2f3f9f" 00:18:00.630 ], 00:18:00.630 "product_name": "Malloc disk", 00:18:00.630 "block_size": 512, 00:18:00.630 "num_blocks": 
65536, 00:18:00.630 "uuid": "4fa70dc5-3879-4a47-a390-69eded2f3f9f", 00:18:00.630 "assigned_rate_limits": { 00:18:00.630 "rw_ios_per_sec": 0, 00:18:00.630 "rw_mbytes_per_sec": 0, 00:18:00.630 "r_mbytes_per_sec": 0, 00:18:00.630 "w_mbytes_per_sec": 0 00:18:00.630 }, 00:18:00.630 "claimed": true, 00:18:00.630 "claim_type": "exclusive_write", 00:18:00.630 "zoned": false, 00:18:00.630 "supported_io_types": { 00:18:00.630 "read": true, 00:18:00.630 "write": true, 00:18:00.630 "unmap": true, 00:18:00.630 "flush": true, 00:18:00.630 "reset": true, 00:18:00.630 "nvme_admin": false, 00:18:00.630 "nvme_io": false, 00:18:00.630 "nvme_io_md": false, 00:18:00.630 "write_zeroes": true, 00:18:00.630 "zcopy": true, 00:18:00.630 "get_zone_info": false, 00:18:00.630 "zone_management": false, 00:18:00.630 "zone_append": false, 00:18:00.630 "compare": false, 00:18:00.630 "compare_and_write": false, 00:18:00.630 "abort": true, 00:18:00.630 "seek_hole": false, 00:18:00.630 "seek_data": false, 00:18:00.630 "copy": true, 00:18:00.630 "nvme_iov_md": false 00:18:00.630 }, 00:18:00.630 "memory_domains": [ 00:18:00.630 { 00:18:00.630 "dma_device_id": "system", 00:18:00.630 "dma_device_type": 1 00:18:00.630 }, 00:18:00.630 { 00:18:00.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.630 "dma_device_type": 2 00:18:00.630 } 00:18:00.630 ], 00:18:00.630 "driver_specific": {} 00:18:00.630 } 00:18:00.630 ] 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.630 21:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.889 21:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.889 "name": "Existed_Raid", 00:18:00.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.889 "strip_size_kb": 0, 00:18:00.889 "state": "configuring", 00:18:00.889 "raid_level": "raid1", 00:18:00.889 "superblock": false, 00:18:00.889 "num_base_bdevs": 2, 00:18:00.889 "num_base_bdevs_discovered": 1, 00:18:00.889 "num_base_bdevs_operational": 2, 00:18:00.889 "base_bdevs_list": [ 00:18:00.889 { 00:18:00.889 "name": "BaseBdev1", 00:18:00.889 "uuid": 
"4fa70dc5-3879-4a47-a390-69eded2f3f9f", 00:18:00.889 "is_configured": true, 00:18:00.889 "data_offset": 0, 00:18:00.889 "data_size": 65536 00:18:00.889 }, 00:18:00.889 { 00:18:00.889 "name": "BaseBdev2", 00:18:00.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.889 "is_configured": false, 00:18:00.889 "data_offset": 0, 00:18:00.889 "data_size": 0 00:18:00.889 } 00:18:00.889 ] 00:18:00.889 }' 00:18:00.889 21:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.889 21:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.457 21:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:01.715 [2024-07-15 21:31:34.922714] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.715 [2024-07-15 21:31:34.922830] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:18:01.715 21:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:01.975 [2024-07-15 21:31:35.118380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.975 [2024-07-15 21:31:35.120073] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.975 [2024-07-15 21:31:35.120154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.975 "name": "Existed_Raid", 00:18:01.975 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:01.975 "strip_size_kb": 0, 00:18:01.975 "state": "configuring", 00:18:01.975 "raid_level": "raid1", 00:18:01.975 "superblock": false, 00:18:01.975 "num_base_bdevs": 2, 00:18:01.975 "num_base_bdevs_discovered": 1, 00:18:01.975 "num_base_bdevs_operational": 2, 00:18:01.975 "base_bdevs_list": [ 00:18:01.975 { 00:18:01.975 "name": "BaseBdev1", 00:18:01.975 "uuid": "4fa70dc5-3879-4a47-a390-69eded2f3f9f", 00:18:01.975 "is_configured": true, 00:18:01.975 "data_offset": 0, 00:18:01.975 "data_size": 65536 00:18:01.975 }, 00:18:01.975 { 00:18:01.975 "name": "BaseBdev2", 00:18:01.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.975 "is_configured": false, 00:18:01.975 "data_offset": 0, 00:18:01.975 "data_size": 0 00:18:01.975 } 00:18:01.975 ] 00:18:01.975 }' 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.975 21:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.929 21:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:02.929 [2024-07-15 21:31:36.215839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.929 [2024-07-15 21:31:36.215963] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:02.929 [2024-07-15 21:31:36.215985] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:02.930 [2024-07-15 21:31:36.216145] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:02.930 [2024-07-15 21:31:36.216469] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:02.930 [2024-07-15 21:31:36.216511] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:02.930 [2024-07-15 21:31:36.216770] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.930 BaseBdev2 00:18:02.930 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:02.930 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:02.930 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:02.930 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:02.930 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:02.930 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:02.930 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.188 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:03.447 [ 00:18:03.447 { 00:18:03.447 "name": "BaseBdev2", 00:18:03.447 "aliases": [ 00:18:03.447 "9c24cf58-7283-4560-938d-ebedd0979fca" 00:18:03.447 ], 00:18:03.447 "product_name": "Malloc disk", 00:18:03.447 "block_size": 512, 00:18:03.447 "num_blocks": 65536, 00:18:03.447 "uuid": "9c24cf58-7283-4560-938d-ebedd0979fca", 00:18:03.447 
"assigned_rate_limits": { 00:18:03.447 "rw_ios_per_sec": 0, 00:18:03.447 "rw_mbytes_per_sec": 0, 00:18:03.447 "r_mbytes_per_sec": 0, 00:18:03.447 "w_mbytes_per_sec": 0 00:18:03.447 }, 00:18:03.447 "claimed": true, 00:18:03.447 "claim_type": "exclusive_write", 00:18:03.447 "zoned": false, 00:18:03.447 "supported_io_types": { 00:18:03.447 "read": true, 00:18:03.447 "write": true, 00:18:03.447 "unmap": true, 00:18:03.447 "flush": true, 00:18:03.447 "reset": true, 00:18:03.447 "nvme_admin": false, 00:18:03.447 "nvme_io": false, 00:18:03.447 "nvme_io_md": false, 00:18:03.447 "write_zeroes": true, 00:18:03.447 "zcopy": true, 00:18:03.447 "get_zone_info": false, 00:18:03.447 "zone_management": false, 00:18:03.447 "zone_append": false, 00:18:03.447 "compare": false, 00:18:03.447 "compare_and_write": false, 00:18:03.447 "abort": true, 00:18:03.447 "seek_hole": false, 00:18:03.447 "seek_data": false, 00:18:03.447 "copy": true, 00:18:03.447 "nvme_iov_md": false 00:18:03.447 }, 00:18:03.447 "memory_domains": [ 00:18:03.447 { 00:18:03.447 "dma_device_id": "system", 00:18:03.447 "dma_device_type": 1 00:18:03.447 }, 00:18:03.447 { 00:18:03.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.447 "dma_device_type": 2 00:18:03.447 } 00:18:03.447 ], 00:18:03.447 "driver_specific": {} 00:18:03.447 } 00:18:03.447 ] 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.447 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.706 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.706 "name": "Existed_Raid", 00:18:03.706 "uuid": "5dc48899-2766-496d-836b-85c087a353f2", 00:18:03.706 "strip_size_kb": 0, 00:18:03.706 "state": "online", 00:18:03.706 "raid_level": "raid1", 00:18:03.706 "superblock": false, 00:18:03.706 "num_base_bdevs": 2, 00:18:03.706 "num_base_bdevs_discovered": 2, 00:18:03.706 "num_base_bdevs_operational": 
2, 00:18:03.706 "base_bdevs_list": [ 00:18:03.706 { 00:18:03.706 "name": "BaseBdev1", 00:18:03.706 "uuid": "4fa70dc5-3879-4a47-a390-69eded2f3f9f", 00:18:03.706 "is_configured": true, 00:18:03.706 "data_offset": 0, 00:18:03.706 "data_size": 65536 00:18:03.706 }, 00:18:03.706 { 00:18:03.706 "name": "BaseBdev2", 00:18:03.706 "uuid": "9c24cf58-7283-4560-938d-ebedd0979fca", 00:18:03.706 "is_configured": true, 00:18:03.706 "data_offset": 0, 00:18:03.706 "data_size": 65536 00:18:03.706 } 00:18:03.706 ] 00:18:03.706 }' 00:18:03.706 21:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.706 21:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.275 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:04.275 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:04.275 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:04.275 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:04.275 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:04.275 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:04.275 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:04.275 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:04.534 [2024-07-15 21:31:37.669667] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.534 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:04.534 "name": "Existed_Raid", 00:18:04.534 "aliases": [ 00:18:04.534 "5dc48899-2766-496d-836b-85c087a353f2" 00:18:04.534 ], 00:18:04.534 "product_name": "Raid Volume", 00:18:04.534 "block_size": 512, 00:18:04.534 "num_blocks": 65536, 00:18:04.534 "uuid": "5dc48899-2766-496d-836b-85c087a353f2", 00:18:04.534 "assigned_rate_limits": { 00:18:04.534 "rw_ios_per_sec": 0, 00:18:04.534 "rw_mbytes_per_sec": 0, 00:18:04.534 "r_mbytes_per_sec": 0, 00:18:04.534 "w_mbytes_per_sec": 0 00:18:04.534 }, 00:18:04.534 "claimed": false, 00:18:04.534 "zoned": false, 00:18:04.534 "supported_io_types": { 00:18:04.534 "read": true, 00:18:04.534 "write": true, 00:18:04.534 "unmap": false, 00:18:04.534 "flush": false, 00:18:04.534 "reset": true, 00:18:04.534 "nvme_admin": false, 00:18:04.534 "nvme_io": false, 00:18:04.534 "nvme_io_md": false, 00:18:04.534 "write_zeroes": true, 00:18:04.534 "zcopy": false, 00:18:04.534 "get_zone_info": false, 00:18:04.534 "zone_management": false, 00:18:04.534 "zone_append": false, 00:18:04.534 "compare": false, 00:18:04.534 "compare_and_write": false, 00:18:04.534 "abort": false, 00:18:04.534 "seek_hole": false, 00:18:04.534 "seek_data": false, 00:18:04.534 "copy": false, 00:18:04.534 "nvme_iov_md": false 00:18:04.534 }, 00:18:04.534 "memory_domains": [ 00:18:04.534 { 00:18:04.534 "dma_device_id": "system", 00:18:04.534 "dma_device_type": 1 00:18:04.534 }, 00:18:04.534 { 00:18:04.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.534 "dma_device_type": 2 00:18:04.534 }, 00:18:04.534 { 00:18:04.534 "dma_device_id": "system", 00:18:04.534 "dma_device_type": 1 00:18:04.534 }, 00:18:04.534 { 00:18:04.534 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.534 "dma_device_type": 2 00:18:04.534 } 00:18:04.534 ], 00:18:04.534 "driver_specific": { 00:18:04.534 "raid": { 00:18:04.535 "uuid": "5dc48899-2766-496d-836b-85c087a353f2", 00:18:04.535 "strip_size_kb": 0, 00:18:04.535 "state": "online", 00:18:04.535 "raid_level": "raid1", 00:18:04.535 "superblock": false, 00:18:04.535 "num_base_bdevs": 2, 00:18:04.535 "num_base_bdevs_discovered": 2, 00:18:04.535 "num_base_bdevs_operational": 2, 00:18:04.535 "base_bdevs_list": [ 00:18:04.535 { 00:18:04.535 "name": "BaseBdev1", 00:18:04.535 "uuid": "4fa70dc5-3879-4a47-a390-69eded2f3f9f", 00:18:04.535 "is_configured": true, 00:18:04.535 "data_offset": 0, 00:18:04.535 "data_size": 65536 00:18:04.535 }, 00:18:04.535 { 00:18:04.535 "name": "BaseBdev2", 00:18:04.535 "uuid": "9c24cf58-7283-4560-938d-ebedd0979fca", 00:18:04.535 "is_configured": true, 00:18:04.535 "data_offset": 0, 00:18:04.535 "data_size": 65536 00:18:04.535 } 00:18:04.535 ] 00:18:04.535 } 00:18:04.535 } 00:18:04.535 }' 00:18:04.535 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.535 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:04.535 BaseBdev2' 00:18:04.535 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:04.535 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:04.535 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:04.794 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:04.794 "name": "BaseBdev1", 00:18:04.794 "aliases": [ 00:18:04.794 "4fa70dc5-3879-4a47-a390-69eded2f3f9f" 00:18:04.794 ], 00:18:04.794 "product_name": "Malloc disk", 00:18:04.794 "block_size": 512, 00:18:04.794 "num_blocks": 65536, 00:18:04.794 "uuid": "4fa70dc5-3879-4a47-a390-69eded2f3f9f", 00:18:04.794 "assigned_rate_limits": { 00:18:04.794 "rw_ios_per_sec": 0, 00:18:04.794 "rw_mbytes_per_sec": 0, 00:18:04.794 "r_mbytes_per_sec": 0, 00:18:04.794 "w_mbytes_per_sec": 0 00:18:04.794 }, 00:18:04.794 "claimed": true, 00:18:04.794 "claim_type": "exclusive_write", 00:18:04.794 "zoned": false, 00:18:04.794 "supported_io_types": { 00:18:04.794 "read": true, 00:18:04.794 "write": true, 00:18:04.794 "unmap": true, 00:18:04.794 "flush": true, 00:18:04.794 "reset": true, 00:18:04.794 "nvme_admin": false, 00:18:04.794 "nvme_io": false, 00:18:04.794 "nvme_io_md": false, 00:18:04.794 "write_zeroes": true, 00:18:04.794 "zcopy": true, 00:18:04.794 "get_zone_info": false, 00:18:04.794 "zone_management": false, 00:18:04.794 "zone_append": false, 00:18:04.794 "compare": false, 00:18:04.794 "compare_and_write": false, 00:18:04.794 "abort": true, 00:18:04.794 "seek_hole": false, 00:18:04.794 "seek_data": false, 00:18:04.794 "copy": true, 00:18:04.794 "nvme_iov_md": false 00:18:04.794 }, 00:18:04.794 "memory_domains": [ 00:18:04.794 { 00:18:04.794 "dma_device_id": "system", 00:18:04.794 "dma_device_type": 1 00:18:04.794 }, 00:18:04.794 { 00:18:04.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.794 "dma_device_type": 2 00:18:04.794 } 00:18:04.794 ], 00:18:04.794 "driver_specific": {} 00:18:04.794 }' 00:18:04.794 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:18:04.794 21:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.794 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:04.794 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.794 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.794 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:04.794 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.053 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.053 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.053 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.053 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.053 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.053 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.053 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:05.053 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.312 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.313 "name": "BaseBdev2", 00:18:05.313 "aliases": [ 00:18:05.313 "9c24cf58-7283-4560-938d-ebedd0979fca" 00:18:05.313 ], 00:18:05.313 "product_name": "Malloc disk", 00:18:05.313 "block_size": 512, 00:18:05.313 "num_blocks": 65536, 00:18:05.313 "uuid": "9c24cf58-7283-4560-938d-ebedd0979fca", 00:18:05.313 "assigned_rate_limits": { 00:18:05.313 "rw_ios_per_sec": 0, 00:18:05.313 "rw_mbytes_per_sec": 0, 00:18:05.313 "r_mbytes_per_sec": 0, 00:18:05.313 "w_mbytes_per_sec": 0 00:18:05.313 }, 00:18:05.313 "claimed": true, 00:18:05.313 "claim_type": "exclusive_write", 00:18:05.313 "zoned": false, 00:18:05.313 "supported_io_types": { 00:18:05.313 "read": true, 00:18:05.313 "write": true, 00:18:05.313 "unmap": true, 00:18:05.313 "flush": true, 00:18:05.313 "reset": true, 00:18:05.313 "nvme_admin": false, 00:18:05.313 "nvme_io": false, 00:18:05.313 "nvme_io_md": false, 00:18:05.313 "write_zeroes": true, 00:18:05.313 "zcopy": true, 00:18:05.313 "get_zone_info": false, 00:18:05.313 "zone_management": false, 00:18:05.313 "zone_append": false, 00:18:05.313 "compare": false, 00:18:05.313 "compare_and_write": false, 00:18:05.313 "abort": true, 00:18:05.313 "seek_hole": false, 00:18:05.313 "seek_data": false, 00:18:05.313 "copy": true, 00:18:05.313 "nvme_iov_md": false 00:18:05.313 }, 00:18:05.313 "memory_domains": [ 00:18:05.313 { 00:18:05.313 "dma_device_id": "system", 00:18:05.313 "dma_device_type": 1 00:18:05.313 }, 00:18:05.313 { 00:18:05.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.313 "dma_device_type": 2 00:18:05.313 } 00:18:05.313 ], 00:18:05.313 "driver_specific": {} 00:18:05.313 }' 00:18:05.313 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.313 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.313 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:18:05.313 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.572 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.572 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:05.572 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.572 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.572 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.572 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.572 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.831 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.831 21:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:05.831 [2024-07-15 21:31:39.151053] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:06.090 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.091 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.091 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.091 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.091 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.091 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.091 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.091 "name": "Existed_Raid", 00:18:06.091 "uuid": "5dc48899-2766-496d-836b-85c087a353f2", 00:18:06.091 "strip_size_kb": 0, 00:18:06.091 "state": "online", 00:18:06.091 "raid_level": "raid1", 00:18:06.091 "superblock": false, 
00:18:06.091 "num_base_bdevs": 2, 00:18:06.091 "num_base_bdevs_discovered": 1, 00:18:06.091 "num_base_bdevs_operational": 1, 00:18:06.091 "base_bdevs_list": [ 00:18:06.091 { 00:18:06.091 "name": null, 00:18:06.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.091 "is_configured": false, 00:18:06.091 "data_offset": 0, 00:18:06.091 "data_size": 65536 00:18:06.091 }, 00:18:06.091 { 00:18:06.091 "name": "BaseBdev2", 00:18:06.091 "uuid": "9c24cf58-7283-4560-938d-ebedd0979fca", 00:18:06.091 "is_configured": true, 00:18:06.091 "data_offset": 0, 00:18:06.091 "data_size": 65536 00:18:06.091 } 00:18:06.091 ] 00:18:06.091 }' 00:18:06.091 21:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.091 21:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.071 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:07.071 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:07.071 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.071 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:07.071 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:07.071 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.071 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:07.071 [2024-07-15 21:31:40.412456] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:07.071 [2024-07-15 21:31:40.412591] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.330 [2024-07-15 21:31:40.501524] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.330 [2024-07-15 21:31:40.501632] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.330 [2024-07-15 21:31:40.501653] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 124531 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 124531 ']' 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # 
kill -0 124531 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.330 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124531 00:18:07.588 killing process with pid 124531 00:18:07.588 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:07.588 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:07.588 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124531' 00:18:07.588 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 124531 00:18:07.588 21:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 124531 00:18:07.588 [2024-07-15 21:31:40.718614] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.588 [2024-07-15 21:31:40.718734] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.522 ************************************ 00:18:08.522 END TEST raid_state_function_test 00:18:08.522 ************************************ 00:18:08.522 21:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:08.522 00:18:08.522 real 0m10.849s 00:18:08.522 user 0m19.011s 00:18:08.522 sys 0m1.310s 00:18:08.522 21:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.522 21:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.782 21:31:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:08.782 21:31:41 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:18:08.782 21:31:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:08.782 21:31:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.782 21:31:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.782 ************************************ 00:18:08.782 START TEST raid_state_function_test_sb 00:18:08.782 ************************************ 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=124920 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124920' 00:18:08.782 Process raid pid: 124920 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 124920 /var/tmp/spdk-raid.sock 00:18:08.782 21:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 124920 ']' 00:18:08.783 21:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:08.783 21:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.783 21:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:08.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:08.783 21:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.783 21:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.783 [2024-07-15 21:31:42.018987] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
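The raid_state_function_test_sb run starting here walks the same configuring/online/offline state machine; the one difference visible in the dumps that follow is that bdev_raid_create is passed -s, so each base bdev carries an on-disk RAID superblock and the reported layout becomes data_offset 2048 / data_size 63488 instead of 0 / 65536 on the same 65536-block malloc bdevs. A minimal sketch of the changed call, reusing names from the trace:

    # -s reserves the first 2048 blocks (65536 - 63488) of each base bdev for RAID metadata.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid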
00:18:08.783 [2024-07-15 21:31:42.019202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.042 [2024-07-15 21:31:42.174073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.042 [2024-07-15 21:31:42.355774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.300 [2024-07-15 21:31:42.541827] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.560 21:31:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.560 21:31:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:09.560 21:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:09.820 [2024-07-15 21:31:42.999572] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.820 [2024-07-15 21:31:42.999678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.820 [2024-07-15 21:31:42.999707] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.820 [2024-07-15 21:31:42.999740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.820 "name": "Existed_Raid", 00:18:09.820 "uuid": "bee8456b-cb90-45bb-97bf-95bcbea4a704", 00:18:09.820 "strip_size_kb": 0, 00:18:09.820 "state": "configuring", 00:18:09.820 "raid_level": "raid1", 00:18:09.820 "superblock": true, 00:18:09.820 "num_base_bdevs": 2, 00:18:09.820 "num_base_bdevs_discovered": 0, 00:18:09.820 
"num_base_bdevs_operational": 2, 00:18:09.820 "base_bdevs_list": [ 00:18:09.820 { 00:18:09.820 "name": "BaseBdev1", 00:18:09.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.820 "is_configured": false, 00:18:09.820 "data_offset": 0, 00:18:09.820 "data_size": 0 00:18:09.820 }, 00:18:09.820 { 00:18:09.820 "name": "BaseBdev2", 00:18:09.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.820 "is_configured": false, 00:18:09.820 "data_offset": 0, 00:18:09.820 "data_size": 0 00:18:09.820 } 00:18:09.820 ] 00:18:09.820 }' 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.820 21:31:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.391 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:10.687 [2024-07-15 21:31:43.933945] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:10.687 [2024-07-15 21:31:43.934049] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:10.687 21:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:10.947 [2024-07-15 21:31:44.113641] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:10.947 [2024-07-15 21:31:44.113740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:10.947 [2024-07-15 21:31:44.113762] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:10.947 [2024-07-15 21:31:44.113793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:10.947 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:11.206 [2024-07-15 21:31:44.332270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.206 BaseBdev1 00:18:11.206 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:11.206 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:11.206 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:11.206 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:11.206 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:11.206 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:11.206 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.206 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:11.466 [ 00:18:11.466 { 00:18:11.466 "name": "BaseBdev1", 00:18:11.466 "aliases": [ 00:18:11.466 "97e87fdb-001a-419f-a253-3150a91715c9" 
00:18:11.466 ], 00:18:11.466 "product_name": "Malloc disk", 00:18:11.466 "block_size": 512, 00:18:11.466 "num_blocks": 65536, 00:18:11.466 "uuid": "97e87fdb-001a-419f-a253-3150a91715c9", 00:18:11.466 "assigned_rate_limits": { 00:18:11.466 "rw_ios_per_sec": 0, 00:18:11.466 "rw_mbytes_per_sec": 0, 00:18:11.466 "r_mbytes_per_sec": 0, 00:18:11.466 "w_mbytes_per_sec": 0 00:18:11.466 }, 00:18:11.466 "claimed": true, 00:18:11.466 "claim_type": "exclusive_write", 00:18:11.466 "zoned": false, 00:18:11.466 "supported_io_types": { 00:18:11.466 "read": true, 00:18:11.466 "write": true, 00:18:11.466 "unmap": true, 00:18:11.466 "flush": true, 00:18:11.466 "reset": true, 00:18:11.466 "nvme_admin": false, 00:18:11.466 "nvme_io": false, 00:18:11.466 "nvme_io_md": false, 00:18:11.466 "write_zeroes": true, 00:18:11.466 "zcopy": true, 00:18:11.466 "get_zone_info": false, 00:18:11.466 "zone_management": false, 00:18:11.466 "zone_append": false, 00:18:11.466 "compare": false, 00:18:11.466 "compare_and_write": false, 00:18:11.466 "abort": true, 00:18:11.466 "seek_hole": false, 00:18:11.466 "seek_data": false, 00:18:11.466 "copy": true, 00:18:11.466 "nvme_iov_md": false 00:18:11.466 }, 00:18:11.466 "memory_domains": [ 00:18:11.466 { 00:18:11.466 "dma_device_id": "system", 00:18:11.466 "dma_device_type": 1 00:18:11.466 }, 00:18:11.466 { 00:18:11.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.466 "dma_device_type": 2 00:18:11.466 } 00:18:11.466 ], 00:18:11.466 "driver_specific": {} 00:18:11.466 } 00:18:11.466 ] 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.467 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.726 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.726 "name": "Existed_Raid", 00:18:11.726 "uuid": "b876b142-5a11-47b7-85df-7caefb93846f", 00:18:11.726 "strip_size_kb": 0, 00:18:11.726 "state": "configuring", 00:18:11.726 "raid_level": "raid1", 00:18:11.726 "superblock": true, 00:18:11.726 "num_base_bdevs": 2, 00:18:11.726 "num_base_bdevs_discovered": 
1, 00:18:11.726 "num_base_bdevs_operational": 2, 00:18:11.726 "base_bdevs_list": [ 00:18:11.726 { 00:18:11.726 "name": "BaseBdev1", 00:18:11.726 "uuid": "97e87fdb-001a-419f-a253-3150a91715c9", 00:18:11.726 "is_configured": true, 00:18:11.726 "data_offset": 2048, 00:18:11.726 "data_size": 63488 00:18:11.726 }, 00:18:11.726 { 00:18:11.726 "name": "BaseBdev2", 00:18:11.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.726 "is_configured": false, 00:18:11.726 "data_offset": 0, 00:18:11.726 "data_size": 0 00:18:11.726 } 00:18:11.726 ] 00:18:11.726 }' 00:18:11.726 21:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.726 21:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.295 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:12.295 [2024-07-15 21:31:45.570311] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.295 [2024-07-15 21:31:45.570426] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:18:12.295 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:12.555 [2024-07-15 21:31:45.754050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.555 [2024-07-15 21:31:45.755713] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.555 [2024-07-15 21:31:45.755788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.555 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:12.555 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:12.555 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:12.555 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.556 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:18:12.815 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.815 "name": "Existed_Raid", 00:18:12.815 "uuid": "980b0b74-882a-4be7-ba68-d58069745c3f", 00:18:12.815 "strip_size_kb": 0, 00:18:12.815 "state": "configuring", 00:18:12.815 "raid_level": "raid1", 00:18:12.815 "superblock": true, 00:18:12.815 "num_base_bdevs": 2, 00:18:12.815 "num_base_bdevs_discovered": 1, 00:18:12.815 "num_base_bdevs_operational": 2, 00:18:12.815 "base_bdevs_list": [ 00:18:12.815 { 00:18:12.815 "name": "BaseBdev1", 00:18:12.815 "uuid": "97e87fdb-001a-419f-a253-3150a91715c9", 00:18:12.815 "is_configured": true, 00:18:12.815 "data_offset": 2048, 00:18:12.815 "data_size": 63488 00:18:12.815 }, 00:18:12.815 { 00:18:12.815 "name": "BaseBdev2", 00:18:12.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.815 "is_configured": false, 00:18:12.815 "data_offset": 0, 00:18:12.815 "data_size": 0 00:18:12.815 } 00:18:12.815 ] 00:18:12.815 }' 00:18:12.815 21:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.816 21:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.384 21:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:13.384 [2024-07-15 21:31:46.719894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.384 [2024-07-15 21:31:46.720207] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:13.384 [2024-07-15 21:31:46.720238] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:13.384 [2024-07-15 21:31:46.720382] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:13.384 BaseBdev2 00:18:13.384 [2024-07-15 21:31:46.720674] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:13.384 [2024-07-15 21:31:46.720716] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:13.385 [2024-07-15 21:31:46.720897] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.385 21:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:13.385 21:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:13.385 21:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:13.385 21:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:13.385 21:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:13.385 21:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:13.385 21:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.644 21:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:13.904 [ 00:18:13.904 { 00:18:13.904 "name": "BaseBdev2", 00:18:13.904 "aliases": [ 00:18:13.904 
"7133f228-a0b1-4a29-ab6f-ac3bf351e6f7" 00:18:13.904 ], 00:18:13.904 "product_name": "Malloc disk", 00:18:13.904 "block_size": 512, 00:18:13.904 "num_blocks": 65536, 00:18:13.904 "uuid": "7133f228-a0b1-4a29-ab6f-ac3bf351e6f7", 00:18:13.904 "assigned_rate_limits": { 00:18:13.904 "rw_ios_per_sec": 0, 00:18:13.904 "rw_mbytes_per_sec": 0, 00:18:13.904 "r_mbytes_per_sec": 0, 00:18:13.904 "w_mbytes_per_sec": 0 00:18:13.904 }, 00:18:13.904 "claimed": true, 00:18:13.904 "claim_type": "exclusive_write", 00:18:13.904 "zoned": false, 00:18:13.904 "supported_io_types": { 00:18:13.904 "read": true, 00:18:13.904 "write": true, 00:18:13.904 "unmap": true, 00:18:13.904 "flush": true, 00:18:13.904 "reset": true, 00:18:13.904 "nvme_admin": false, 00:18:13.904 "nvme_io": false, 00:18:13.904 "nvme_io_md": false, 00:18:13.904 "write_zeroes": true, 00:18:13.904 "zcopy": true, 00:18:13.904 "get_zone_info": false, 00:18:13.904 "zone_management": false, 00:18:13.904 "zone_append": false, 00:18:13.904 "compare": false, 00:18:13.904 "compare_and_write": false, 00:18:13.904 "abort": true, 00:18:13.904 "seek_hole": false, 00:18:13.904 "seek_data": false, 00:18:13.904 "copy": true, 00:18:13.904 "nvme_iov_md": false 00:18:13.904 }, 00:18:13.904 "memory_domains": [ 00:18:13.904 { 00:18:13.904 "dma_device_id": "system", 00:18:13.904 "dma_device_type": 1 00:18:13.904 }, 00:18:13.904 { 00:18:13.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.904 "dma_device_type": 2 00:18:13.904 } 00:18:13.904 ], 00:18:13.904 "driver_specific": {} 00:18:13.904 } 00:18:13.904 ] 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.904 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.165 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.165 "name": "Existed_Raid", 00:18:14.165 "uuid": 
"980b0b74-882a-4be7-ba68-d58069745c3f", 00:18:14.165 "strip_size_kb": 0, 00:18:14.165 "state": "online", 00:18:14.165 "raid_level": "raid1", 00:18:14.165 "superblock": true, 00:18:14.165 "num_base_bdevs": 2, 00:18:14.165 "num_base_bdevs_discovered": 2, 00:18:14.165 "num_base_bdevs_operational": 2, 00:18:14.165 "base_bdevs_list": [ 00:18:14.165 { 00:18:14.165 "name": "BaseBdev1", 00:18:14.165 "uuid": "97e87fdb-001a-419f-a253-3150a91715c9", 00:18:14.165 "is_configured": true, 00:18:14.165 "data_offset": 2048, 00:18:14.165 "data_size": 63488 00:18:14.165 }, 00:18:14.165 { 00:18:14.165 "name": "BaseBdev2", 00:18:14.165 "uuid": "7133f228-a0b1-4a29-ab6f-ac3bf351e6f7", 00:18:14.165 "is_configured": true, 00:18:14.165 "data_offset": 2048, 00:18:14.165 "data_size": 63488 00:18:14.165 } 00:18:14.165 ] 00:18:14.165 }' 00:18:14.165 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.165 21:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.734 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.734 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:14.734 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:14.734 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:14.734 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:14.734 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:14.734 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:14.734 21:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:14.734 [2024-07-15 21:31:48.038254] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.734 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:14.734 "name": "Existed_Raid", 00:18:14.734 "aliases": [ 00:18:14.734 "980b0b74-882a-4be7-ba68-d58069745c3f" 00:18:14.734 ], 00:18:14.734 "product_name": "Raid Volume", 00:18:14.734 "block_size": 512, 00:18:14.734 "num_blocks": 63488, 00:18:14.734 "uuid": "980b0b74-882a-4be7-ba68-d58069745c3f", 00:18:14.734 "assigned_rate_limits": { 00:18:14.734 "rw_ios_per_sec": 0, 00:18:14.734 "rw_mbytes_per_sec": 0, 00:18:14.734 "r_mbytes_per_sec": 0, 00:18:14.734 "w_mbytes_per_sec": 0 00:18:14.734 }, 00:18:14.734 "claimed": false, 00:18:14.734 "zoned": false, 00:18:14.734 "supported_io_types": { 00:18:14.734 "read": true, 00:18:14.734 "write": true, 00:18:14.734 "unmap": false, 00:18:14.734 "flush": false, 00:18:14.734 "reset": true, 00:18:14.734 "nvme_admin": false, 00:18:14.734 "nvme_io": false, 00:18:14.734 "nvme_io_md": false, 00:18:14.734 "write_zeroes": true, 00:18:14.734 "zcopy": false, 00:18:14.734 "get_zone_info": false, 00:18:14.734 "zone_management": false, 00:18:14.734 "zone_append": false, 00:18:14.734 "compare": false, 00:18:14.734 "compare_and_write": false, 00:18:14.734 "abort": false, 00:18:14.734 "seek_hole": false, 00:18:14.734 "seek_data": false, 00:18:14.734 "copy": false, 00:18:14.734 "nvme_iov_md": false 00:18:14.734 }, 00:18:14.734 "memory_domains": [ 00:18:14.734 { 00:18:14.734 
"dma_device_id": "system", 00:18:14.734 "dma_device_type": 1 00:18:14.734 }, 00:18:14.734 { 00:18:14.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.734 "dma_device_type": 2 00:18:14.734 }, 00:18:14.734 { 00:18:14.734 "dma_device_id": "system", 00:18:14.734 "dma_device_type": 1 00:18:14.734 }, 00:18:14.734 { 00:18:14.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.734 "dma_device_type": 2 00:18:14.734 } 00:18:14.734 ], 00:18:14.734 "driver_specific": { 00:18:14.734 "raid": { 00:18:14.734 "uuid": "980b0b74-882a-4be7-ba68-d58069745c3f", 00:18:14.734 "strip_size_kb": 0, 00:18:14.734 "state": "online", 00:18:14.734 "raid_level": "raid1", 00:18:14.734 "superblock": true, 00:18:14.734 "num_base_bdevs": 2, 00:18:14.734 "num_base_bdevs_discovered": 2, 00:18:14.734 "num_base_bdevs_operational": 2, 00:18:14.734 "base_bdevs_list": [ 00:18:14.734 { 00:18:14.734 "name": "BaseBdev1", 00:18:14.734 "uuid": "97e87fdb-001a-419f-a253-3150a91715c9", 00:18:14.734 "is_configured": true, 00:18:14.734 "data_offset": 2048, 00:18:14.734 "data_size": 63488 00:18:14.734 }, 00:18:14.734 { 00:18:14.734 "name": "BaseBdev2", 00:18:14.734 "uuid": "7133f228-a0b1-4a29-ab6f-ac3bf351e6f7", 00:18:14.734 "is_configured": true, 00:18:14.734 "data_offset": 2048, 00:18:14.734 "data_size": 63488 00:18:14.734 } 00:18:14.734 ] 00:18:14.734 } 00:18:14.734 } 00:18:14.734 }' 00:18:14.734 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.734 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:14.734 BaseBdev2' 00:18:14.734 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.734 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:14.734 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.994 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.994 "name": "BaseBdev1", 00:18:14.994 "aliases": [ 00:18:14.994 "97e87fdb-001a-419f-a253-3150a91715c9" 00:18:14.994 ], 00:18:14.994 "product_name": "Malloc disk", 00:18:14.994 "block_size": 512, 00:18:14.994 "num_blocks": 65536, 00:18:14.994 "uuid": "97e87fdb-001a-419f-a253-3150a91715c9", 00:18:14.994 "assigned_rate_limits": { 00:18:14.994 "rw_ios_per_sec": 0, 00:18:14.994 "rw_mbytes_per_sec": 0, 00:18:14.994 "r_mbytes_per_sec": 0, 00:18:14.994 "w_mbytes_per_sec": 0 00:18:14.994 }, 00:18:14.994 "claimed": true, 00:18:14.994 "claim_type": "exclusive_write", 00:18:14.994 "zoned": false, 00:18:14.994 "supported_io_types": { 00:18:14.994 "read": true, 00:18:14.995 "write": true, 00:18:14.995 "unmap": true, 00:18:14.995 "flush": true, 00:18:14.995 "reset": true, 00:18:14.995 "nvme_admin": false, 00:18:14.995 "nvme_io": false, 00:18:14.995 "nvme_io_md": false, 00:18:14.995 "write_zeroes": true, 00:18:14.995 "zcopy": true, 00:18:14.995 "get_zone_info": false, 00:18:14.995 "zone_management": false, 00:18:14.995 "zone_append": false, 00:18:14.995 "compare": false, 00:18:14.995 "compare_and_write": false, 00:18:14.995 "abort": true, 00:18:14.995 "seek_hole": false, 00:18:14.995 "seek_data": false, 00:18:14.995 "copy": true, 00:18:14.995 "nvme_iov_md": false 00:18:14.995 }, 00:18:14.995 "memory_domains": [ 00:18:14.995 { 00:18:14.995 
"dma_device_id": "system", 00:18:14.995 "dma_device_type": 1 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.995 "dma_device_type": 2 00:18:14.995 } 00:18:14.995 ], 00:18:14.995 "driver_specific": {} 00:18:14.995 }' 00:18:14.995 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.995 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.253 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.253 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.253 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.253 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.253 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.253 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.253 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.253 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.512 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.512 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.512 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.512 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:15.512 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.512 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.512 "name": "BaseBdev2", 00:18:15.512 "aliases": [ 00:18:15.512 "7133f228-a0b1-4a29-ab6f-ac3bf351e6f7" 00:18:15.512 ], 00:18:15.512 "product_name": "Malloc disk", 00:18:15.512 "block_size": 512, 00:18:15.512 "num_blocks": 65536, 00:18:15.512 "uuid": "7133f228-a0b1-4a29-ab6f-ac3bf351e6f7", 00:18:15.512 "assigned_rate_limits": { 00:18:15.512 "rw_ios_per_sec": 0, 00:18:15.512 "rw_mbytes_per_sec": 0, 00:18:15.512 "r_mbytes_per_sec": 0, 00:18:15.512 "w_mbytes_per_sec": 0 00:18:15.512 }, 00:18:15.512 "claimed": true, 00:18:15.512 "claim_type": "exclusive_write", 00:18:15.512 "zoned": false, 00:18:15.512 "supported_io_types": { 00:18:15.512 "read": true, 00:18:15.512 "write": true, 00:18:15.512 "unmap": true, 00:18:15.512 "flush": true, 00:18:15.512 "reset": true, 00:18:15.512 "nvme_admin": false, 00:18:15.512 "nvme_io": false, 00:18:15.512 "nvme_io_md": false, 00:18:15.512 "write_zeroes": true, 00:18:15.512 "zcopy": true, 00:18:15.512 "get_zone_info": false, 00:18:15.512 "zone_management": false, 00:18:15.512 "zone_append": false, 00:18:15.512 "compare": false, 00:18:15.512 "compare_and_write": false, 00:18:15.512 "abort": true, 00:18:15.512 "seek_hole": false, 00:18:15.512 "seek_data": false, 00:18:15.512 "copy": true, 00:18:15.512 "nvme_iov_md": false 00:18:15.512 }, 00:18:15.512 "memory_domains": [ 00:18:15.512 { 00:18:15.512 "dma_device_id": "system", 00:18:15.512 "dma_device_type": 1 00:18:15.512 }, 00:18:15.512 { 00:18:15.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:15.512 "dma_device_type": 2 00:18:15.512 } 00:18:15.512 ], 00:18:15.512 "driver_specific": {} 00:18:15.512 }' 00:18:15.512 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.772 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.772 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.772 21:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.772 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.772 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.772 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.772 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.032 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.032 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.032 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.032 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.032 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:16.292 [2024-07-15 21:31:49.467510] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:16.292 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.550 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.550 "name": "Existed_Raid", 00:18:16.550 "uuid": "980b0b74-882a-4be7-ba68-d58069745c3f", 00:18:16.550 "strip_size_kb": 0, 00:18:16.550 "state": "online", 00:18:16.550 "raid_level": "raid1", 00:18:16.550 "superblock": true, 00:18:16.550 "num_base_bdevs": 2, 00:18:16.550 "num_base_bdevs_discovered": 1, 00:18:16.550 "num_base_bdevs_operational": 1, 00:18:16.550 "base_bdevs_list": [ 00:18:16.550 { 00:18:16.550 "name": null, 00:18:16.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.550 "is_configured": false, 00:18:16.551 "data_offset": 2048, 00:18:16.551 "data_size": 63488 00:18:16.551 }, 00:18:16.551 { 00:18:16.551 "name": "BaseBdev2", 00:18:16.551 "uuid": "7133f228-a0b1-4a29-ab6f-ac3bf351e6f7", 00:18:16.551 "is_configured": true, 00:18:16.551 "data_offset": 2048, 00:18:16.551 "data_size": 63488 00:18:16.551 } 00:18:16.551 ] 00:18:16.551 }' 00:18:16.551 21:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.551 21:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.118 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:17.118 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:17.118 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.118 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:17.377 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:17.377 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.377 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:17.377 [2024-07-15 21:31:50.685883] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.377 [2024-07-15 21:31:50.686069] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.637 [2024-07-15 21:31:50.777343] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.637 [2024-07-15 21:31:50.777469] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.637 [2024-07-15 21:31:50.777491] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 124920 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 124920 ']' 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 124920 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124920 00:18:17.637 killing process with pid 124920 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124920' 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 124920 00:18:17.637 21:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 124920 00:18:17.637 [2024-07-15 21:31:50.970840] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.637 [2024-07-15 21:31:50.971186] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.014 ************************************ 00:18:19.014 END TEST raid_state_function_test_sb 00:18:19.015 ************************************ 00:18:19.015 21:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:19.015 00:18:19.015 real 0m10.168s 00:18:19.015 user 0m17.782s 00:18:19.015 sys 0m1.177s 00:18:19.015 21:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:19.015 21:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.015 21:31:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:19.015 21:31:52 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:18:19.015 21:31:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:19.015 21:31:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:19.015 21:31:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.015 ************************************ 00:18:19.015 START TEST raid_superblock_test 00:18:19.015 ************************************ 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=125301 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 125301 /var/tmp/spdk-raid.sock 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 125301 ']' 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:19.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.015 21:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.015 [2024-07-15 21:31:52.250740] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
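Condensed, the RPC sequence the superblock test drives in the trace below is roughly the following. This is only a sketch reconstructed from the commands visible in this run; the socket path and bdev names are the ones this job happens to use, and the $RPC shorthand is introduced here purely for brevity.
  # Outline of the setup phase (sketch; all subcommands appear verbatim later in this trace)
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b malloc1        # backing malloc bdev for the first leg
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2        # backing malloc bdev for the second leg
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s   # -s: the created raid reports "superblock": true
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
The jq filter at the end is the same one verify_raid_bdev_state uses throughout this log to pull a single raid bdev's state out of the bdev_raid_get_bdevs output.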
00:18:19.015 [2024-07-15 21:31:52.250935] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125301 ] 00:18:19.273 [2024-07-15 21:31:52.406800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.273 [2024-07-15 21:31:52.585932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.532 [2024-07-15 21:31:52.773514] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.790 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:20.049 malloc1 00:18:20.049 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.049 [2024-07-15 21:31:53.413621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.049 [2024-07-15 21:31:53.413797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.049 [2024-07-15 21:31:53.413844] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:20.049 [2024-07-15 21:31:53.413878] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.049 [2024-07-15 21:31:53.415736] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.049 [2024-07-15 21:31:53.415823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.049 pt1 00:18:20.049 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:20.049 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:20.049 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:20.049 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:20.049 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:20.049 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:18:20.307 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:20.307 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:20.307 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:20.307 malloc2 00:18:20.307 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.566 [2024-07-15 21:31:53.827991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.566 [2024-07-15 21:31:53.828146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.566 [2024-07-15 21:31:53.828189] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:18:20.566 [2024-07-15 21:31:53.828222] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.566 [2024-07-15 21:31:53.830174] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.566 [2024-07-15 21:31:53.830250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.566 pt2 00:18:20.566 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:20.566 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:20.567 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:20.826 [2024-07-15 21:31:53.987755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.826 [2024-07-15 21:31:53.989422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.826 [2024-07-15 21:31:53.989648] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:18:20.826 [2024-07-15 21:31:53.989686] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:20.826 [2024-07-15 21:31:53.989857] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:20.826 [2024-07-15 21:31:53.990193] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:18:20.826 [2024-07-15 21:31:53.990234] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:18:20.826 [2024-07-15 21:31:53.990393] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.826 21:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.826 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.826 "name": "raid_bdev1", 00:18:20.826 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:20.826 "strip_size_kb": 0, 00:18:20.826 "state": "online", 00:18:20.826 "raid_level": "raid1", 00:18:20.826 "superblock": true, 00:18:20.826 "num_base_bdevs": 2, 00:18:20.826 "num_base_bdevs_discovered": 2, 00:18:20.826 "num_base_bdevs_operational": 2, 00:18:20.826 "base_bdevs_list": [ 00:18:20.826 { 00:18:20.826 "name": "pt1", 00:18:20.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.826 "is_configured": true, 00:18:20.826 "data_offset": 2048, 00:18:20.826 "data_size": 63488 00:18:20.826 }, 00:18:20.826 { 00:18:20.826 "name": "pt2", 00:18:20.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.826 "is_configured": true, 00:18:20.826 "data_offset": 2048, 00:18:20.826 "data_size": 63488 00:18:20.826 } 00:18:20.826 ] 00:18:20.826 }' 00:18:20.826 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.826 21:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:21.765 [2024-07-15 21:31:54.978134] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.765 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:21.765 "name": "raid_bdev1", 00:18:21.765 "aliases": [ 00:18:21.765 "dac09430-8780-43a6-aeaa-9cfb02fe3f21" 00:18:21.765 ], 00:18:21.765 "product_name": "Raid Volume", 00:18:21.765 "block_size": 512, 00:18:21.765 "num_blocks": 63488, 00:18:21.765 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:21.765 "assigned_rate_limits": { 00:18:21.765 "rw_ios_per_sec": 0, 00:18:21.765 "rw_mbytes_per_sec": 0, 00:18:21.765 "r_mbytes_per_sec": 0, 00:18:21.765 "w_mbytes_per_sec": 0 00:18:21.765 }, 
00:18:21.765 "claimed": false, 00:18:21.765 "zoned": false, 00:18:21.765 "supported_io_types": { 00:18:21.765 "read": true, 00:18:21.765 "write": true, 00:18:21.765 "unmap": false, 00:18:21.765 "flush": false, 00:18:21.765 "reset": true, 00:18:21.765 "nvme_admin": false, 00:18:21.765 "nvme_io": false, 00:18:21.765 "nvme_io_md": false, 00:18:21.765 "write_zeroes": true, 00:18:21.765 "zcopy": false, 00:18:21.765 "get_zone_info": false, 00:18:21.765 "zone_management": false, 00:18:21.765 "zone_append": false, 00:18:21.765 "compare": false, 00:18:21.765 "compare_and_write": false, 00:18:21.765 "abort": false, 00:18:21.765 "seek_hole": false, 00:18:21.765 "seek_data": false, 00:18:21.765 "copy": false, 00:18:21.765 "nvme_iov_md": false 00:18:21.765 }, 00:18:21.765 "memory_domains": [ 00:18:21.765 { 00:18:21.765 "dma_device_id": "system", 00:18:21.765 "dma_device_type": 1 00:18:21.765 }, 00:18:21.766 { 00:18:21.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.766 "dma_device_type": 2 00:18:21.766 }, 00:18:21.766 { 00:18:21.766 "dma_device_id": "system", 00:18:21.766 "dma_device_type": 1 00:18:21.766 }, 00:18:21.766 { 00:18:21.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.766 "dma_device_type": 2 00:18:21.766 } 00:18:21.766 ], 00:18:21.766 "driver_specific": { 00:18:21.766 "raid": { 00:18:21.766 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:21.766 "strip_size_kb": 0, 00:18:21.766 "state": "online", 00:18:21.766 "raid_level": "raid1", 00:18:21.766 "superblock": true, 00:18:21.766 "num_base_bdevs": 2, 00:18:21.766 "num_base_bdevs_discovered": 2, 00:18:21.766 "num_base_bdevs_operational": 2, 00:18:21.766 "base_bdevs_list": [ 00:18:21.766 { 00:18:21.766 "name": "pt1", 00:18:21.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.766 "is_configured": true, 00:18:21.766 "data_offset": 2048, 00:18:21.766 "data_size": 63488 00:18:21.766 }, 00:18:21.766 { 00:18:21.766 "name": "pt2", 00:18:21.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.766 "is_configured": true, 00:18:21.766 "data_offset": 2048, 00:18:21.766 "data_size": 63488 00:18:21.766 } 00:18:21.766 ] 00:18:21.766 } 00:18:21.766 } 00:18:21.766 }' 00:18:21.766 21:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.766 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:21.766 pt2' 00:18:21.766 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:21.766 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:21.766 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:22.025 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:22.025 "name": "pt1", 00:18:22.025 "aliases": [ 00:18:22.025 "00000000-0000-0000-0000-000000000001" 00:18:22.025 ], 00:18:22.025 "product_name": "passthru", 00:18:22.025 "block_size": 512, 00:18:22.025 "num_blocks": 65536, 00:18:22.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.025 "assigned_rate_limits": { 00:18:22.025 "rw_ios_per_sec": 0, 00:18:22.025 "rw_mbytes_per_sec": 0, 00:18:22.025 "r_mbytes_per_sec": 0, 00:18:22.025 "w_mbytes_per_sec": 0 00:18:22.025 }, 00:18:22.025 "claimed": true, 00:18:22.025 "claim_type": "exclusive_write", 00:18:22.025 "zoned": false, 00:18:22.025 
"supported_io_types": { 00:18:22.025 "read": true, 00:18:22.025 "write": true, 00:18:22.025 "unmap": true, 00:18:22.025 "flush": true, 00:18:22.025 "reset": true, 00:18:22.025 "nvme_admin": false, 00:18:22.025 "nvme_io": false, 00:18:22.025 "nvme_io_md": false, 00:18:22.025 "write_zeroes": true, 00:18:22.025 "zcopy": true, 00:18:22.025 "get_zone_info": false, 00:18:22.025 "zone_management": false, 00:18:22.025 "zone_append": false, 00:18:22.025 "compare": false, 00:18:22.025 "compare_and_write": false, 00:18:22.025 "abort": true, 00:18:22.025 "seek_hole": false, 00:18:22.025 "seek_data": false, 00:18:22.025 "copy": true, 00:18:22.025 "nvme_iov_md": false 00:18:22.025 }, 00:18:22.025 "memory_domains": [ 00:18:22.025 { 00:18:22.025 "dma_device_id": "system", 00:18:22.025 "dma_device_type": 1 00:18:22.025 }, 00:18:22.025 { 00:18:22.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.025 "dma_device_type": 2 00:18:22.025 } 00:18:22.025 ], 00:18:22.025 "driver_specific": { 00:18:22.025 "passthru": { 00:18:22.025 "name": "pt1", 00:18:22.025 "base_bdev_name": "malloc1" 00:18:22.025 } 00:18:22.025 } 00:18:22.025 }' 00:18:22.025 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.025 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.025 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:22.025 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.025 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:22.284 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:22.544 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:22.544 "name": "pt2", 00:18:22.544 "aliases": [ 00:18:22.544 "00000000-0000-0000-0000-000000000002" 00:18:22.544 ], 00:18:22.544 "product_name": "passthru", 00:18:22.544 "block_size": 512, 00:18:22.544 "num_blocks": 65536, 00:18:22.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.544 "assigned_rate_limits": { 00:18:22.544 "rw_ios_per_sec": 0, 00:18:22.544 "rw_mbytes_per_sec": 0, 00:18:22.544 "r_mbytes_per_sec": 0, 00:18:22.544 "w_mbytes_per_sec": 0 00:18:22.544 }, 00:18:22.544 "claimed": true, 00:18:22.544 "claim_type": "exclusive_write", 00:18:22.544 "zoned": false, 00:18:22.544 "supported_io_types": { 00:18:22.544 "read": true, 00:18:22.544 "write": true, 00:18:22.544 "unmap": true, 00:18:22.544 "flush": true, 00:18:22.544 
"reset": true, 00:18:22.544 "nvme_admin": false, 00:18:22.544 "nvme_io": false, 00:18:22.544 "nvme_io_md": false, 00:18:22.544 "write_zeroes": true, 00:18:22.544 "zcopy": true, 00:18:22.544 "get_zone_info": false, 00:18:22.544 "zone_management": false, 00:18:22.544 "zone_append": false, 00:18:22.544 "compare": false, 00:18:22.544 "compare_and_write": false, 00:18:22.544 "abort": true, 00:18:22.544 "seek_hole": false, 00:18:22.544 "seek_data": false, 00:18:22.544 "copy": true, 00:18:22.544 "nvme_iov_md": false 00:18:22.544 }, 00:18:22.544 "memory_domains": [ 00:18:22.544 { 00:18:22.544 "dma_device_id": "system", 00:18:22.544 "dma_device_type": 1 00:18:22.544 }, 00:18:22.544 { 00:18:22.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.544 "dma_device_type": 2 00:18:22.544 } 00:18:22.544 ], 00:18:22.544 "driver_specific": { 00:18:22.544 "passthru": { 00:18:22.544 "name": "pt2", 00:18:22.544 "base_bdev_name": "malloc2" 00:18:22.544 } 00:18:22.544 } 00:18:22.544 }' 00:18:22.544 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.544 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.803 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:22.803 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.803 21:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.803 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:22.803 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.803 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.803 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:22.803 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.063 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.063 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:23.063 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:23.063 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:23.063 [2024-07-15 21:31:56.415519] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.063 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=dac09430-8780-43a6-aeaa-9cfb02fe3f21 00:18:23.063 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z dac09430-8780-43a6-aeaa-9cfb02fe3f21 ']' 00:18:23.063 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:23.322 [2024-07-15 21:31:56.575028] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.322 [2024-07-15 21:31:56.575097] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.322 [2024-07-15 21:31:56.575178] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.322 [2024-07-15 21:31:56.575245] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:23.322 [2024-07-15 21:31:56.575263] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:18:23.322 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.322 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:23.583 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:23.583 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:23.583 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.583 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:23.583 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.583 21:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:23.843 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:23.843 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:24.103 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:24.363 [2024-07-15 21:31:57.489421] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:24.363 [2024-07-15 21:31:57.491093] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:24.363 [2024-07-15 21:31:57.491186] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:24.363 [2024-07-15 21:31:57.491301] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:24.363 [2024-07-15 21:31:57.491336] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.363 [2024-07-15 21:31:57.491353] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:18:24.363 request: 00:18:24.363 { 00:18:24.363 "name": "raid_bdev1", 00:18:24.363 "raid_level": "raid1", 00:18:24.363 "base_bdevs": [ 00:18:24.363 "malloc1", 00:18:24.363 "malloc2" 00:18:24.363 ], 00:18:24.363 "superblock": false, 00:18:24.363 "method": "bdev_raid_create", 00:18:24.363 "req_id": 1 00:18:24.363 } 00:18:24.363 Got JSON-RPC error response 00:18:24.363 response: 00:18:24.363 { 00:18:24.363 "code": -17, 00:18:24.363 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:24.363 } 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:24.363 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:24.623 [2024-07-15 21:31:57.852673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.623 [2024-07-15 21:31:57.852797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.623 [2024-07-15 21:31:57.852851] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:24.623 [2024-07-15 21:31:57.852890] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.623 [2024-07-15 21:31:57.854780] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.623 [2024-07-15 21:31:57.854867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:24.623 [2024-07-15 21:31:57.855011] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:24.623 [2024-07-15 21:31:57.855067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:24.623 pt1 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.623 21:31:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.883 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.883 "name": "raid_bdev1", 00:18:24.883 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:24.883 "strip_size_kb": 0, 00:18:24.883 "state": "configuring", 00:18:24.883 "raid_level": "raid1", 00:18:24.883 "superblock": true, 00:18:24.883 "num_base_bdevs": 2, 00:18:24.883 "num_base_bdevs_discovered": 1, 00:18:24.883 "num_base_bdevs_operational": 2, 00:18:24.883 "base_bdevs_list": [ 00:18:24.883 { 00:18:24.883 "name": "pt1", 00:18:24.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.883 "is_configured": true, 00:18:24.883 "data_offset": 2048, 00:18:24.883 "data_size": 63488 00:18:24.883 }, 00:18:24.883 { 00:18:24.883 "name": null, 00:18:24.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.884 "is_configured": false, 00:18:24.884 "data_offset": 2048, 00:18:24.884 "data_size": 63488 00:18:24.884 } 00:18:24.884 ] 00:18:24.884 }' 00:18:24.884 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.884 21:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:25.453 [2024-07-15 21:31:58.755062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.453 [2024-07-15 21:31:58.755221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.453 [2024-07-15 21:31:58.755263] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:25.453 [2024-07-15 21:31:58.755297] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.453 [2024-07-15 21:31:58.755742] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.453 [2024-07-15 21:31:58.755814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.453 [2024-07-15 21:31:58.755952] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:25.453 [2024-07-15 21:31:58.755996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.453 [2024-07-15 21:31:58.756121] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:18:25.453 [2024-07-15 21:31:58.756151] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:25.453 [2024-07-15 21:31:58.756280] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:25.453 [2024-07-15 21:31:58.756568] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:18:25.453 [2024-07-15 21:31:58.756608] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:18:25.453 [2024-07-15 21:31:58.756760] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.453 pt2 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.453 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.712 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.712 "name": "raid_bdev1", 00:18:25.712 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:25.712 "strip_size_kb": 0, 00:18:25.712 "state": "online", 00:18:25.712 "raid_level": "raid1", 00:18:25.712 "superblock": true, 00:18:25.712 "num_base_bdevs": 2, 00:18:25.712 "num_base_bdevs_discovered": 2, 00:18:25.712 "num_base_bdevs_operational": 2, 00:18:25.712 "base_bdevs_list": [ 00:18:25.712 { 00:18:25.712 "name": "pt1", 00:18:25.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.712 "is_configured": true, 00:18:25.712 "data_offset": 2048, 00:18:25.712 "data_size": 63488 00:18:25.712 }, 00:18:25.712 { 
00:18:25.712 "name": "pt2", 00:18:25.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.712 "is_configured": true, 00:18:25.712 "data_offset": 2048, 00:18:25.712 "data_size": 63488 00:18:25.712 } 00:18:25.712 ] 00:18:25.712 }' 00:18:25.712 21:31:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.712 21:31:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.278 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:26.278 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:26.278 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:26.278 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:26.278 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:26.278 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:26.278 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:26.278 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:26.536 [2024-07-15 21:31:59.729607] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.537 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:26.537 "name": "raid_bdev1", 00:18:26.537 "aliases": [ 00:18:26.537 "dac09430-8780-43a6-aeaa-9cfb02fe3f21" 00:18:26.537 ], 00:18:26.537 "product_name": "Raid Volume", 00:18:26.537 "block_size": 512, 00:18:26.537 "num_blocks": 63488, 00:18:26.537 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:26.537 "assigned_rate_limits": { 00:18:26.537 "rw_ios_per_sec": 0, 00:18:26.537 "rw_mbytes_per_sec": 0, 00:18:26.537 "r_mbytes_per_sec": 0, 00:18:26.537 "w_mbytes_per_sec": 0 00:18:26.537 }, 00:18:26.537 "claimed": false, 00:18:26.537 "zoned": false, 00:18:26.537 "supported_io_types": { 00:18:26.537 "read": true, 00:18:26.537 "write": true, 00:18:26.537 "unmap": false, 00:18:26.537 "flush": false, 00:18:26.537 "reset": true, 00:18:26.537 "nvme_admin": false, 00:18:26.537 "nvme_io": false, 00:18:26.537 "nvme_io_md": false, 00:18:26.537 "write_zeroes": true, 00:18:26.537 "zcopy": false, 00:18:26.537 "get_zone_info": false, 00:18:26.537 "zone_management": false, 00:18:26.537 "zone_append": false, 00:18:26.537 "compare": false, 00:18:26.537 "compare_and_write": false, 00:18:26.537 "abort": false, 00:18:26.537 "seek_hole": false, 00:18:26.537 "seek_data": false, 00:18:26.537 "copy": false, 00:18:26.537 "nvme_iov_md": false 00:18:26.537 }, 00:18:26.537 "memory_domains": [ 00:18:26.537 { 00:18:26.537 "dma_device_id": "system", 00:18:26.537 "dma_device_type": 1 00:18:26.537 }, 00:18:26.537 { 00:18:26.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.537 "dma_device_type": 2 00:18:26.537 }, 00:18:26.537 { 00:18:26.537 "dma_device_id": "system", 00:18:26.537 "dma_device_type": 1 00:18:26.537 }, 00:18:26.537 { 00:18:26.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.537 "dma_device_type": 2 00:18:26.537 } 00:18:26.537 ], 00:18:26.537 "driver_specific": { 00:18:26.537 "raid": { 00:18:26.537 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:26.537 "strip_size_kb": 0, 00:18:26.537 "state": "online", 00:18:26.537 "raid_level": "raid1", 
00:18:26.537 "superblock": true, 00:18:26.537 "num_base_bdevs": 2, 00:18:26.537 "num_base_bdevs_discovered": 2, 00:18:26.537 "num_base_bdevs_operational": 2, 00:18:26.537 "base_bdevs_list": [ 00:18:26.537 { 00:18:26.537 "name": "pt1", 00:18:26.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.537 "is_configured": true, 00:18:26.537 "data_offset": 2048, 00:18:26.537 "data_size": 63488 00:18:26.537 }, 00:18:26.537 { 00:18:26.537 "name": "pt2", 00:18:26.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.537 "is_configured": true, 00:18:26.537 "data_offset": 2048, 00:18:26.537 "data_size": 63488 00:18:26.537 } 00:18:26.537 ] 00:18:26.537 } 00:18:26.537 } 00:18:26.537 }' 00:18:26.537 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.537 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:26.537 pt2' 00:18:26.537 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:26.537 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:26.537 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:26.795 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:26.795 "name": "pt1", 00:18:26.795 "aliases": [ 00:18:26.795 "00000000-0000-0000-0000-000000000001" 00:18:26.795 ], 00:18:26.795 "product_name": "passthru", 00:18:26.795 "block_size": 512, 00:18:26.795 "num_blocks": 65536, 00:18:26.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.795 "assigned_rate_limits": { 00:18:26.795 "rw_ios_per_sec": 0, 00:18:26.795 "rw_mbytes_per_sec": 0, 00:18:26.795 "r_mbytes_per_sec": 0, 00:18:26.795 "w_mbytes_per_sec": 0 00:18:26.795 }, 00:18:26.795 "claimed": true, 00:18:26.795 "claim_type": "exclusive_write", 00:18:26.795 "zoned": false, 00:18:26.795 "supported_io_types": { 00:18:26.795 "read": true, 00:18:26.795 "write": true, 00:18:26.795 "unmap": true, 00:18:26.795 "flush": true, 00:18:26.795 "reset": true, 00:18:26.795 "nvme_admin": false, 00:18:26.795 "nvme_io": false, 00:18:26.795 "nvme_io_md": false, 00:18:26.795 "write_zeroes": true, 00:18:26.795 "zcopy": true, 00:18:26.795 "get_zone_info": false, 00:18:26.795 "zone_management": false, 00:18:26.795 "zone_append": false, 00:18:26.795 "compare": false, 00:18:26.795 "compare_and_write": false, 00:18:26.795 "abort": true, 00:18:26.795 "seek_hole": false, 00:18:26.795 "seek_data": false, 00:18:26.795 "copy": true, 00:18:26.795 "nvme_iov_md": false 00:18:26.795 }, 00:18:26.795 "memory_domains": [ 00:18:26.795 { 00:18:26.795 "dma_device_id": "system", 00:18:26.795 "dma_device_type": 1 00:18:26.795 }, 00:18:26.795 { 00:18:26.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.795 "dma_device_type": 2 00:18:26.795 } 00:18:26.795 ], 00:18:26.795 "driver_specific": { 00:18:26.795 "passthru": { 00:18:26.795 "name": "pt1", 00:18:26.795 "base_bdev_name": "malloc1" 00:18:26.795 } 00:18:26.795 } 00:18:26.795 }' 00:18:26.795 21:31:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.795 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.795 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:26.795 21:32:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.795 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.053 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:27.053 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.053 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.054 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:27.054 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.054 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.054 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:27.054 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:27.054 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:27.054 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:27.311 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.311 "name": "pt2", 00:18:27.311 "aliases": [ 00:18:27.311 "00000000-0000-0000-0000-000000000002" 00:18:27.311 ], 00:18:27.311 "product_name": "passthru", 00:18:27.311 "block_size": 512, 00:18:27.311 "num_blocks": 65536, 00:18:27.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.311 "assigned_rate_limits": { 00:18:27.311 "rw_ios_per_sec": 0, 00:18:27.311 "rw_mbytes_per_sec": 0, 00:18:27.311 "r_mbytes_per_sec": 0, 00:18:27.311 "w_mbytes_per_sec": 0 00:18:27.311 }, 00:18:27.311 "claimed": true, 00:18:27.311 "claim_type": "exclusive_write", 00:18:27.311 "zoned": false, 00:18:27.311 "supported_io_types": { 00:18:27.311 "read": true, 00:18:27.311 "write": true, 00:18:27.311 "unmap": true, 00:18:27.311 "flush": true, 00:18:27.311 "reset": true, 00:18:27.311 "nvme_admin": false, 00:18:27.311 "nvme_io": false, 00:18:27.311 "nvme_io_md": false, 00:18:27.311 "write_zeroes": true, 00:18:27.311 "zcopy": true, 00:18:27.311 "get_zone_info": false, 00:18:27.311 "zone_management": false, 00:18:27.311 "zone_append": false, 00:18:27.311 "compare": false, 00:18:27.311 "compare_and_write": false, 00:18:27.311 "abort": true, 00:18:27.311 "seek_hole": false, 00:18:27.311 "seek_data": false, 00:18:27.311 "copy": true, 00:18:27.311 "nvme_iov_md": false 00:18:27.311 }, 00:18:27.311 "memory_domains": [ 00:18:27.311 { 00:18:27.311 "dma_device_id": "system", 00:18:27.311 "dma_device_type": 1 00:18:27.311 }, 00:18:27.311 { 00:18:27.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.311 "dma_device_type": 2 00:18:27.311 } 00:18:27.311 ], 00:18:27.311 "driver_specific": { 00:18:27.311 "passthru": { 00:18:27.311 "name": "pt2", 00:18:27.311 "base_bdev_name": "malloc2" 00:18:27.311 } 00:18:27.311 } 00:18:27.311 }' 00:18:27.311 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.311 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.568 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:27.568 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.568 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.568 
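The pairs of jq calls traced here (bdev_raid.sh lines 204-208) compare each passthru base bdev's layout fields against the raid bdev's: block_size matches (512 == 512) and md_size, md_interleave and dif_type come back null on both sides. A rough shell sketch of the same checks, using only the RPC calls visible in the trace (the actual helper in bdev_raid.sh may structure this differently):

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    raid_info=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq '.[]')
    for name in pt1 pt2; do
        base_info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
        for field in .block_size .md_size .md_interleave .dif_type; do
            # fields absent from the JSON print as "null", so null == null passes
            [[ $(jq -r "$field" <<< "$base_info") == $(jq -r "$field" <<< "$raid_info") ]]
        done
    done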
21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:27.568 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.568 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.568 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:27.568 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.826 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.826 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:27.826 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:27.826 21:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:27.826 [2024-07-15 21:32:01.147130] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.826 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' dac09430-8780-43a6-aeaa-9cfb02fe3f21 '!=' dac09430-8780-43a6-aeaa-9cfb02fe3f21 ']' 00:18:27.826 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:27.826 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:27.826 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:27.826 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:28.083 [2024-07-15 21:32:01.326643] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.083 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.341 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.341 "name": "raid_bdev1", 00:18:28.341 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:28.341 "strip_size_kb": 0, 00:18:28.341 "state": "online", 00:18:28.341 "raid_level": "raid1", 00:18:28.341 
"superblock": true, 00:18:28.341 "num_base_bdevs": 2, 00:18:28.341 "num_base_bdevs_discovered": 1, 00:18:28.341 "num_base_bdevs_operational": 1, 00:18:28.341 "base_bdevs_list": [ 00:18:28.341 { 00:18:28.341 "name": null, 00:18:28.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.342 "is_configured": false, 00:18:28.342 "data_offset": 2048, 00:18:28.342 "data_size": 63488 00:18:28.342 }, 00:18:28.342 { 00:18:28.342 "name": "pt2", 00:18:28.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.342 "is_configured": true, 00:18:28.342 "data_offset": 2048, 00:18:28.342 "data_size": 63488 00:18:28.342 } 00:18:28.342 ] 00:18:28.342 }' 00:18:28.342 21:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.342 21:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.908 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:28.908 [2024-07-15 21:32:02.284978] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.908 [2024-07-15 21:32:02.285073] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.908 [2024-07-15 21:32:02.285176] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.908 [2024-07-15 21:32:02.285234] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.908 [2024-07-15 21:32:02.285252] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:18:29.165 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.165 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:29.165 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:29.165 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:29.165 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:29.165 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:29.165 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:29.422 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:29.422 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:29.422 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:29.422 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:29.422 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:18:29.422 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:29.679 [2024-07-15 21:32:02.827948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:29.679 [2024-07-15 21:32:02.828089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.679 [2024-07-15 21:32:02.828124] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:29.679 [2024-07-15 21:32:02.828160] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.679 [2024-07-15 21:32:02.830016] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.679 [2024-07-15 21:32:02.830104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:29.679 [2024-07-15 21:32:02.830241] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:29.679 [2024-07-15 21:32:02.830318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.679 [2024-07-15 21:32:02.830466] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:18:29.679 [2024-07-15 21:32:02.830497] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:29.679 [2024-07-15 21:32:02.830589] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:29.679 [2024-07-15 21:32:02.830863] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:18:29.679 [2024-07-15 21:32:02.830916] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:18:29.679 [2024-07-15 21:32:02.831057] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.679 pt2 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.679 21:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.679 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.679 "name": "raid_bdev1", 00:18:29.679 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:29.679 "strip_size_kb": 0, 00:18:29.679 "state": "online", 00:18:29.679 "raid_level": "raid1", 00:18:29.679 "superblock": true, 00:18:29.679 "num_base_bdevs": 2, 00:18:29.679 "num_base_bdevs_discovered": 1, 00:18:29.679 "num_base_bdevs_operational": 1, 00:18:29.679 "base_bdevs_list": [ 00:18:29.679 { 00:18:29.679 "name": null, 00:18:29.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.680 "is_configured": false, 00:18:29.680 "data_offset": 
2048, 00:18:29.680 "data_size": 63488 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "name": "pt2", 00:18:29.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.680 "is_configured": true, 00:18:29.680 "data_offset": 2048, 00:18:29.680 "data_size": 63488 00:18:29.680 } 00:18:29.680 ] 00:18:29.680 }' 00:18:29.680 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.680 21:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.246 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:30.505 [2024-07-15 21:32:03.770239] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.505 [2024-07-15 21:32:03.770326] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.505 [2024-07-15 21:32:03.770404] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.505 [2024-07-15 21:32:03.770457] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.505 [2024-07-15 21:32:03.770473] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:18:30.505 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.505 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:30.763 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:30.763 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:30.763 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:30.764 21:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.764 [2024-07-15 21:32:04.133575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.764 [2024-07-15 21:32:04.133687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.764 [2024-07-15 21:32:04.133737] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:30.764 [2024-07-15 21:32:04.133776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.764 [2024-07-15 21:32:04.135626] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.764 [2024-07-15 21:32:04.135708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.764 [2024-07-15 21:32:04.135846] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:30.764 [2024-07-15 21:32:04.135906] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.764 [2024-07-15 21:32:04.136086] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:30.764 [2024-07-15 21:32:04.136120] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.764 [2024-07-15 21:32:04.136142] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, 
state configuring 00:18:30.764 [2024-07-15 21:32:04.136228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.764 [2024-07-15 21:32:04.136321] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:18:30.764 [2024-07-15 21:32:04.136351] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:30.764 [2024-07-15 21:32:04.136450] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:30.764 [2024-07-15 21:32:04.136724] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:18:30.764 [2024-07-15 21:32:04.136764] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:18:30.764 [2024-07-15 21:32:04.136919] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.764 pt1 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.028 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.028 "name": "raid_bdev1", 00:18:31.028 "uuid": "dac09430-8780-43a6-aeaa-9cfb02fe3f21", 00:18:31.028 "strip_size_kb": 0, 00:18:31.028 "state": "online", 00:18:31.028 "raid_level": "raid1", 00:18:31.029 "superblock": true, 00:18:31.029 "num_base_bdevs": 2, 00:18:31.029 "num_base_bdevs_discovered": 1, 00:18:31.029 "num_base_bdevs_operational": 1, 00:18:31.029 "base_bdevs_list": [ 00:18:31.029 { 00:18:31.029 "name": null, 00:18:31.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.029 "is_configured": false, 00:18:31.029 "data_offset": 2048, 00:18:31.029 "data_size": 63488 00:18:31.029 }, 00:18:31.029 { 00:18:31.029 "name": "pt2", 00:18:31.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.029 "is_configured": true, 00:18:31.029 "data_offset": 2048, 00:18:31.029 "data_size": 63488 00:18:31.029 } 00:18:31.029 ] 00:18:31.029 }' 00:18:31.029 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.029 21:32:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.596 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:31.596 21:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:31.856 21:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:31.856 21:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:31.856 21:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:32.115 [2024-07-15 21:32:05.287708] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.115 21:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' dac09430-8780-43a6-aeaa-9cfb02fe3f21 '!=' dac09430-8780-43a6-aeaa-9cfb02fe3f21 ']' 00:18:32.115 21:32:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 125301 00:18:32.115 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 125301 ']' 00:18:32.115 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 125301 00:18:32.115 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:32.115 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.115 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125301 00:18:32.115 killing process with pid 125301 00:18:32.116 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:32.116 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:32.116 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125301' 00:18:32.116 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 125301 00:18:32.116 21:32:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 125301 00:18:32.116 [2024-07-15 21:32:05.324284] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.116 [2024-07-15 21:32:05.324359] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.116 [2024-07-15 21:32:05.324401] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.116 [2024-07-15 21:32:05.324433] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:18:32.375 [2024-07-15 21:32:05.504590] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.312 ************************************ 00:18:33.312 END TEST raid_superblock_test 00:18:33.312 ************************************ 00:18:33.312 21:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:33.312 00:18:33.312 real 0m14.487s 00:18:33.312 user 0m26.249s 00:18:33.312 sys 0m1.755s 00:18:33.312 21:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:33.312 21:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.572 21:32:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:33.572 
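Before the read-error suite starts below, the superblock test above has verified two things: a fresh bdev_raid_create over malloc1/malloc2 is rejected with JSON-RPC error -17 (File exists) while those bdevs still carry the old raid_bdev1 superblock, and after the passthru bdevs are re-registered raid_bdev1 re-assembles, preferring the superblock with the higher sequence number (pt2's seq 4 over the stale seq 2). A minimal sketch of the negative-create check, equivalent in spirit to the NOT helper used in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # must fail with -17 "File exists": malloc1/malloc2 still hold a
    # superblock belonging to another raid bdev
    if $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
            -b 'malloc1 malloc2' -n raid_bdev1; then
        echo 'bdev_raid_create unexpectedly succeeded' >&2
        exit 1
    fi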
21:32:06 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:18:33.572 21:32:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:33.572 21:32:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.572 21:32:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.572 ************************************ 00:18:33.572 START TEST raid_read_error_test 00:18:33.572 ************************************ 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.m3vq2BI2x0 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=125836 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 125836 /var/tmp/spdk-raid.sock 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 125836 ']' 00:18:33.572 21:32:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:33.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.572 21:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.572 [2024-07-15 21:32:06.817391] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:18:33.572 [2024-07-15 21:32:06.817585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125836 ] 00:18:33.832 [2024-07-15 21:32:06.975274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.832 [2024-07-15 21:32:07.159603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.091 [2024-07-15 21:32:07.347657] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.350 21:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.350 21:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:34.350 21:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:34.350 21:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:34.610 BaseBdev1_malloc 00:18:34.610 21:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:34.610 true 00:18:34.610 21:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:34.868 [2024-07-15 21:32:08.144689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:34.868 [2024-07-15 21:32:08.144836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.868 [2024-07-15 21:32:08.144908] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:34.868 [2024-07-15 21:32:08.144942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.868 [2024-07-15 21:32:08.146778] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.868 [2024-07-15 21:32:08.146850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.868 BaseBdev1 00:18:34.868 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:34.868 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:35.127 BaseBdev2_malloc 
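Each base bdev used by the error tests is a three-layer stack, so that failures can later be injected underneath the raid without touching the raid bdev itself: a malloc bdev, an error bdev on top of it, and a passthru bdev (BaseBdevN) on top of that; the two passthru bdevs are then combined into raid_bdev1 with an on-disk superblock, as seen just below. The commands as they appear in the trace (32 MiB of 512-byte blocks per malloc bdev), collected into a loop here for brevity:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for n in 1 2; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${n}_malloc
        $rpc bdev_error_create BaseBdev${n}_malloc            # exposes EE_BaseBdev${n}_malloc
        $rpc bdev_passthru_create -b EE_BaseBdev${n}_malloc -p BaseBdev${n}
    done
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s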
00:18:35.127 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:35.387 true 00:18:35.387 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:35.646 [2024-07-15 21:32:08.774513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:35.646 [2024-07-15 21:32:08.774693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.646 [2024-07-15 21:32:08.774739] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:35.646 [2024-07-15 21:32:08.774790] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.646 [2024-07-15 21:32:08.776551] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.646 [2024-07-15 21:32:08.776622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:35.646 BaseBdev2 00:18:35.646 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:35.646 [2024-07-15 21:32:08.954230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.646 [2024-07-15 21:32:08.955972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.646 [2024-07-15 21:32:08.956220] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:35.646 [2024-07-15 21:32:08.956259] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:35.646 [2024-07-15 21:32:08.956403] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:35.646 [2024-07-15 21:32:08.956743] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:35.646 [2024-07-15 21:32:08.956783] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:35.646 [2024-07-15 21:32:08.956959] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.647 21:32:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.647 21:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.906 21:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.906 "name": "raid_bdev1", 00:18:35.906 "uuid": "264aae08-8bfe-4a92-aeca-f10994f04570", 00:18:35.906 "strip_size_kb": 0, 00:18:35.906 "state": "online", 00:18:35.906 "raid_level": "raid1", 00:18:35.906 "superblock": true, 00:18:35.906 "num_base_bdevs": 2, 00:18:35.906 "num_base_bdevs_discovered": 2, 00:18:35.906 "num_base_bdevs_operational": 2, 00:18:35.906 "base_bdevs_list": [ 00:18:35.906 { 00:18:35.906 "name": "BaseBdev1", 00:18:35.906 "uuid": "0c9d9638-cdfa-5b84-86a8-cb3f47b16f90", 00:18:35.906 "is_configured": true, 00:18:35.906 "data_offset": 2048, 00:18:35.906 "data_size": 63488 00:18:35.906 }, 00:18:35.906 { 00:18:35.906 "name": "BaseBdev2", 00:18:35.906 "uuid": "77235b24-2b93-51f4-a30a-a459696db4c8", 00:18:35.906 "is_configured": true, 00:18:35.906 "data_offset": 2048, 00:18:35.906 "data_size": 63488 00:18:35.906 } 00:18:35.906 ] 00:18:35.906 }' 00:18:35.906 21:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.906 21:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.474 21:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:36.474 21:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:36.734 [2024-07-15 21:32:09.854204] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.672 21:32:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.672 21:32:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.931 21:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.931 "name": "raid_bdev1", 00:18:37.931 "uuid": "264aae08-8bfe-4a92-aeca-f10994f04570", 00:18:37.931 "strip_size_kb": 0, 00:18:37.931 "state": "online", 00:18:37.931 "raid_level": "raid1", 00:18:37.931 "superblock": true, 00:18:37.931 "num_base_bdevs": 2, 00:18:37.931 "num_base_bdevs_discovered": 2, 00:18:37.931 "num_base_bdevs_operational": 2, 00:18:37.931 "base_bdevs_list": [ 00:18:37.931 { 00:18:37.931 "name": "BaseBdev1", 00:18:37.931 "uuid": "0c9d9638-cdfa-5b84-86a8-cb3f47b16f90", 00:18:37.931 "is_configured": true, 00:18:37.931 "data_offset": 2048, 00:18:37.931 "data_size": 63488 00:18:37.931 }, 00:18:37.931 { 00:18:37.931 "name": "BaseBdev2", 00:18:37.931 "uuid": "77235b24-2b93-51f4-a30a-a459696db4c8", 00:18:37.931 "is_configured": true, 00:18:37.931 "data_offset": 2048, 00:18:37.931 "data_size": 63488 00:18:37.931 } 00:18:37.931 ] 00:18:37.931 }' 00:18:37.931 21:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.931 21:32:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.500 21:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.759 [2024-07-15 21:32:12.011813] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.759 [2024-07-15 21:32:12.011914] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.759 [2024-07-15 21:32:12.014312] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.759 [2024-07-15 21:32:12.014381] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.759 [2024-07-15 21:32:12.014458] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.759 [2024-07-15 21:32:12.014481] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:38.759 0 00:18:38.759 21:32:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 125836 00:18:38.759 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 125836 ']' 00:18:38.759 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 125836 00:18:38.759 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:38.759 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.760 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125836 00:18:38.760 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:38.760 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:38.760 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125836' 00:18:38.760 
killing process with pid 125836 00:18:38.760 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 125836 00:18:38.760 [2024-07-15 21:32:12.070348] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.760 21:32:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 125836 00:18:39.019 [2024-07-15 21:32:12.189901] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.m3vq2BI2x0 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:40.401 ************************************ 00:18:40.401 END TEST raid_read_error_test 00:18:40.401 ************************************ 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:40.401 00:18:40.401 real 0m6.653s 00:18:40.401 user 0m9.767s 00:18:40.401 sys 0m0.716s 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:40.401 21:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.401 21:32:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:40.401 21:32:13 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:18:40.401 21:32:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:40.401 21:32:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.401 21:32:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.401 ************************************ 00:18:40.401 START TEST raid_write_error_test 00:18:40.401 ************************************ 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:40.401 21:32:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.15Zpbugnr5 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=126037 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 126037 /var/tmp/spdk-raid.sock 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 126037 ']' 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:40.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.401 21:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.401 [2024-07-15 21:32:13.539926] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
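The bdevperf trace that follows drives the whole write-error scenario over the /var/tmp/spdk-raid.sock RPC socket. As a rough sketch only (condensing the RPC calls visible in the trace below, and reusing the rpc.py and bdevperf.py paths shown there; not part of the recorded run), the sequence amounts to:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Build an error-injectable stack for one leg (the trace repeats this for BaseBdev2)
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc          # 32 MiB malloc disk, 512-byte blocks
  $RPC bdev_error_create BaseBdev1_malloc                     # exposes EE_BaseBdev1_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # Assemble the raid1 volume with superblock (-s) over the two passthru bdevs
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  # Start I/O, then fail writes on the first leg
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # raid1 should stay online with a single operational base bdev, then be torn down
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  $RPC bdev_raid_delete raid_bdev1

The jq check above is the same filter verify_raid_bdev_state applies in the trace; the test passes when the reported state stays "online" while num_base_bdevs_operational drops to 1 and bdevperf records zero failed IO/s.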
00:18:40.401 [2024-07-15 21:32:13.540126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126037 ] 00:18:40.401 [2024-07-15 21:32:13.693422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.660 [2024-07-15 21:32:13.871678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.919 [2024-07-15 21:32:14.060379] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.179 21:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.179 21:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:41.179 21:32:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:41.179 21:32:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:41.179 BaseBdev1_malloc 00:18:41.179 21:32:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:41.437 true 00:18:41.437 21:32:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:41.696 [2024-07-15 21:32:14.925934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:41.696 [2024-07-15 21:32:14.926103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.696 [2024-07-15 21:32:14.926155] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:41.696 [2024-07-15 21:32:14.926215] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.696 [2024-07-15 21:32:14.928219] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.696 [2024-07-15 21:32:14.928311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:41.696 BaseBdev1 00:18:41.696 21:32:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:41.697 21:32:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:41.956 BaseBdev2_malloc 00:18:41.956 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:42.215 true 00:18:42.215 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:42.215 [2024-07-15 21:32:15.513912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:42.215 [2024-07-15 21:32:15.514059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.215 [2024-07-15 21:32:15.514108] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:42.215 [2024-07-15 
21:32:15.514141] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.215 [2024-07-15 21:32:15.515917] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.215 [2024-07-15 21:32:15.515987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:42.215 BaseBdev2 00:18:42.215 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:42.475 [2024-07-15 21:32:15.693613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.475 [2024-07-15 21:32:15.695201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.475 [2024-07-15 21:32:15.695469] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:42.475 [2024-07-15 21:32:15.695514] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:42.475 [2024-07-15 21:32:15.695639] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:42.475 [2024-07-15 21:32:15.695947] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:42.475 [2024-07-15 21:32:15.695986] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:42.475 [2024-07-15 21:32:15.696148] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.475 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.734 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.734 "name": "raid_bdev1", 00:18:42.734 "uuid": "7145e65d-d064-4a6c-a38b-f861bab0326d", 00:18:42.734 "strip_size_kb": 0, 00:18:42.734 "state": "online", 00:18:42.734 "raid_level": "raid1", 00:18:42.734 "superblock": true, 00:18:42.734 "num_base_bdevs": 2, 00:18:42.734 "num_base_bdevs_discovered": 2, 00:18:42.734 "num_base_bdevs_operational": 2, 00:18:42.734 "base_bdevs_list": [ 00:18:42.734 { 00:18:42.734 "name": 
"BaseBdev1", 00:18:42.734 "uuid": "60557512-593f-51e7-b4b4-b25076f1fc07", 00:18:42.734 "is_configured": true, 00:18:42.734 "data_offset": 2048, 00:18:42.734 "data_size": 63488 00:18:42.734 }, 00:18:42.734 { 00:18:42.734 "name": "BaseBdev2", 00:18:42.734 "uuid": "54a8dc67-fcd9-568f-bc2e-c364322efd19", 00:18:42.734 "is_configured": true, 00:18:42.734 "data_offset": 2048, 00:18:42.734 "data_size": 63488 00:18:42.734 } 00:18:42.734 ] 00:18:42.734 }' 00:18:42.734 21:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.734 21:32:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.303 21:32:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:43.303 21:32:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:43.303 [2024-07-15 21:32:16.541293] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:44.241 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:44.501 [2024-07-15 21:32:17.634492] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:44.501 [2024-07-15 21:32:17.634688] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:44.501 [2024-07-15 21:32:17.634918] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.501 
21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.501 "name": "raid_bdev1", 00:18:44.501 "uuid": "7145e65d-d064-4a6c-a38b-f861bab0326d", 00:18:44.501 "strip_size_kb": 0, 00:18:44.501 "state": "online", 00:18:44.501 "raid_level": "raid1", 00:18:44.501 "superblock": true, 00:18:44.501 "num_base_bdevs": 2, 00:18:44.501 "num_base_bdevs_discovered": 1, 00:18:44.501 "num_base_bdevs_operational": 1, 00:18:44.501 "base_bdevs_list": [ 00:18:44.501 { 00:18:44.501 "name": null, 00:18:44.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.501 "is_configured": false, 00:18:44.501 "data_offset": 2048, 00:18:44.501 "data_size": 63488 00:18:44.501 }, 00:18:44.501 { 00:18:44.501 "name": "BaseBdev2", 00:18:44.501 "uuid": "54a8dc67-fcd9-568f-bc2e-c364322efd19", 00:18:44.501 "is_configured": true, 00:18:44.501 "data_offset": 2048, 00:18:44.501 "data_size": 63488 00:18:44.501 } 00:18:44.501 ] 00:18:44.501 }' 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.501 21:32:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.070 21:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:45.330 [2024-07-15 21:32:18.564293] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.330 [2024-07-15 21:32:18.564407] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.330 [2024-07-15 21:32:18.566788] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.330 [2024-07-15 21:32:18.566856] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.330 [2024-07-15 21:32:18.566911] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.330 [2024-07-15 21:32:18.566933] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:45.330 0 00:18:45.330 21:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 126037 00:18:45.330 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 126037 ']' 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 126037 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126037 00:18:45.331 killing process with pid 126037 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126037' 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 126037 00:18:45.331 21:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 126037 00:18:45.331 [2024-07-15 21:32:18.594696] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:45.590 [2024-07-15 
21:32:18.716256] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.15Zpbugnr5 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:46.971 ************************************ 00:18:46.971 END TEST raid_write_error_test 00:18:46.971 ************************************ 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:46.971 00:18:46.971 real 0m6.465s 00:18:46.971 user 0m9.372s 00:18:46.971 sys 0m0.738s 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.971 21:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.971 21:32:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:46.971 21:32:19 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:18:46.971 21:32:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:46.971 21:32:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:18:46.971 21:32:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:46.971 21:32:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.971 21:32:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.971 ************************************ 00:18:46.971 START TEST raid_state_function_test 00:18:46.971 ************************************ 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:46.971 21:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=126233 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 126233' 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:46.971 Process raid pid: 126233 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 126233 /var/tmp/spdk-raid.sock 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 126233 ']' 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:46.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.971 21:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.971 [2024-07-15 21:32:20.067790] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
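Unlike the error tests, raid_state_function_test runs no I/O; it only watches the raid state machine through the RPC socket of the bdev_svc app started above. A condensed, illustrative version of that loop (assuming the same rpc.py path and /var/tmp/spdk-raid.sock socket as in the trace, and simplifying away the intermediate delete/re-create steps the script actually performs) would be:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Creating the array before its base bdevs exist leaves it in "configuring"
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # Add the base bdevs one by one; the array should flip to "online" after the third
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"
      $RPC bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
  done

The trace below records exactly this progression in the "Existed_Raid" JSON dumps: num_base_bdevs_discovered climbs 0, 1, 2, 3 while state moves from "configuring" to "online".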
00:18:46.972 [2024-07-15 21:32:20.067995] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.972 [2024-07-15 21:32:20.225630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.231 [2024-07-15 21:32:20.405703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.231 [2024-07-15 21:32:20.589781] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.490 21:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.490 21:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:47.490 21:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:47.749 [2024-07-15 21:32:21.033452] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.749 [2024-07-15 21:32:21.033579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.749 [2024-07-15 21:32:21.033624] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.749 [2024-07-15 21:32:21.033654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.749 [2024-07-15 21:32:21.033669] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.749 [2024-07-15 21:32:21.033688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.749 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.008 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.008 "name": "Existed_Raid", 00:18:48.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.008 
"strip_size_kb": 64, 00:18:48.008 "state": "configuring", 00:18:48.008 "raid_level": "raid0", 00:18:48.008 "superblock": false, 00:18:48.008 "num_base_bdevs": 3, 00:18:48.008 "num_base_bdevs_discovered": 0, 00:18:48.008 "num_base_bdevs_operational": 3, 00:18:48.008 "base_bdevs_list": [ 00:18:48.008 { 00:18:48.008 "name": "BaseBdev1", 00:18:48.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.008 "is_configured": false, 00:18:48.008 "data_offset": 0, 00:18:48.008 "data_size": 0 00:18:48.008 }, 00:18:48.008 { 00:18:48.008 "name": "BaseBdev2", 00:18:48.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.008 "is_configured": false, 00:18:48.008 "data_offset": 0, 00:18:48.008 "data_size": 0 00:18:48.008 }, 00:18:48.008 { 00:18:48.008 "name": "BaseBdev3", 00:18:48.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.008 "is_configured": false, 00:18:48.008 "data_offset": 0, 00:18:48.008 "data_size": 0 00:18:48.008 } 00:18:48.008 ] 00:18:48.008 }' 00:18:48.008 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.008 21:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.575 21:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:48.834 [2024-07-15 21:32:21.999845] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:48.834 [2024-07-15 21:32:21.999975] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:48.834 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:48.834 [2024-07-15 21:32:22.183494] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:48.834 [2024-07-15 21:32:22.183642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:48.834 [2024-07-15 21:32:22.183670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:48.834 [2024-07-15 21:32:22.183696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:48.834 [2024-07-15 21:32:22.183709] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:48.834 [2024-07-15 21:32:22.183738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:48.834 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:49.092 [2024-07-15 21:32:22.398263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.092 BaseBdev1 00:18:49.092 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:49.092 21:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:49.092 21:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:49.092 21:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:49.092 21:32:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:49.092 21:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:49.092 21:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.351 21:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:49.610 [ 00:18:49.610 { 00:18:49.610 "name": "BaseBdev1", 00:18:49.610 "aliases": [ 00:18:49.610 "be8731ec-379f-4aab-ac75-eb5d219d0a72" 00:18:49.610 ], 00:18:49.610 "product_name": "Malloc disk", 00:18:49.610 "block_size": 512, 00:18:49.610 "num_blocks": 65536, 00:18:49.610 "uuid": "be8731ec-379f-4aab-ac75-eb5d219d0a72", 00:18:49.610 "assigned_rate_limits": { 00:18:49.610 "rw_ios_per_sec": 0, 00:18:49.610 "rw_mbytes_per_sec": 0, 00:18:49.610 "r_mbytes_per_sec": 0, 00:18:49.611 "w_mbytes_per_sec": 0 00:18:49.611 }, 00:18:49.611 "claimed": true, 00:18:49.611 "claim_type": "exclusive_write", 00:18:49.611 "zoned": false, 00:18:49.611 "supported_io_types": { 00:18:49.611 "read": true, 00:18:49.611 "write": true, 00:18:49.611 "unmap": true, 00:18:49.611 "flush": true, 00:18:49.611 "reset": true, 00:18:49.611 "nvme_admin": false, 00:18:49.611 "nvme_io": false, 00:18:49.611 "nvme_io_md": false, 00:18:49.611 "write_zeroes": true, 00:18:49.611 "zcopy": true, 00:18:49.611 "get_zone_info": false, 00:18:49.611 "zone_management": false, 00:18:49.611 "zone_append": false, 00:18:49.611 "compare": false, 00:18:49.611 "compare_and_write": false, 00:18:49.611 "abort": true, 00:18:49.611 "seek_hole": false, 00:18:49.611 "seek_data": false, 00:18:49.611 "copy": true, 00:18:49.611 "nvme_iov_md": false 00:18:49.611 }, 00:18:49.611 "memory_domains": [ 00:18:49.611 { 00:18:49.611 "dma_device_id": "system", 00:18:49.611 "dma_device_type": 1 00:18:49.611 }, 00:18:49.611 { 00:18:49.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.611 "dma_device_type": 2 00:18:49.611 } 00:18:49.611 ], 00:18:49.611 "driver_specific": {} 00:18:49.611 } 00:18:49.611 ] 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.611 "name": "Existed_Raid", 00:18:49.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.611 "strip_size_kb": 64, 00:18:49.611 "state": "configuring", 00:18:49.611 "raid_level": "raid0", 00:18:49.611 "superblock": false, 00:18:49.611 "num_base_bdevs": 3, 00:18:49.611 "num_base_bdevs_discovered": 1, 00:18:49.611 "num_base_bdevs_operational": 3, 00:18:49.611 "base_bdevs_list": [ 00:18:49.611 { 00:18:49.611 "name": "BaseBdev1", 00:18:49.611 "uuid": "be8731ec-379f-4aab-ac75-eb5d219d0a72", 00:18:49.611 "is_configured": true, 00:18:49.611 "data_offset": 0, 00:18:49.611 "data_size": 65536 00:18:49.611 }, 00:18:49.611 { 00:18:49.611 "name": "BaseBdev2", 00:18:49.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.611 "is_configured": false, 00:18:49.611 "data_offset": 0, 00:18:49.611 "data_size": 0 00:18:49.611 }, 00:18:49.611 { 00:18:49.611 "name": "BaseBdev3", 00:18:49.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.611 "is_configured": false, 00:18:49.611 "data_offset": 0, 00:18:49.611 "data_size": 0 00:18:49.611 } 00:18:49.611 ] 00:18:49.611 }' 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.611 21:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.177 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:50.440 [2024-07-15 21:32:23.700048] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.440 [2024-07-15 21:32:23.700234] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:18:50.440 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:50.705 [2024-07-15 21:32:23.879797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.705 [2024-07-15 21:32:23.881883] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.705 [2024-07-15 21:32:23.882001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.705 [2024-07-15 21:32:23.882029] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.705 [2024-07-15 21:32:23.882076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.705 21:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.705 21:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:50.705 "name": "Existed_Raid", 00:18:50.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.705 "strip_size_kb": 64, 00:18:50.705 "state": "configuring", 00:18:50.705 "raid_level": "raid0", 00:18:50.705 "superblock": false, 00:18:50.705 "num_base_bdevs": 3, 00:18:50.705 "num_base_bdevs_discovered": 1, 00:18:50.705 "num_base_bdevs_operational": 3, 00:18:50.705 "base_bdevs_list": [ 00:18:50.705 { 00:18:50.705 "name": "BaseBdev1", 00:18:50.705 "uuid": "be8731ec-379f-4aab-ac75-eb5d219d0a72", 00:18:50.705 "is_configured": true, 00:18:50.705 "data_offset": 0, 00:18:50.705 "data_size": 65536 00:18:50.705 }, 00:18:50.705 { 00:18:50.705 "name": "BaseBdev2", 00:18:50.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.705 "is_configured": false, 00:18:50.705 "data_offset": 0, 00:18:50.705 "data_size": 0 00:18:50.705 }, 00:18:50.705 { 00:18:50.705 "name": "BaseBdev3", 00:18:50.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.705 "is_configured": false, 00:18:50.705 "data_offset": 0, 00:18:50.705 "data_size": 0 00:18:50.705 } 00:18:50.705 ] 00:18:50.705 }' 00:18:50.705 21:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:50.705 21:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.642 21:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.643 [2024-07-15 21:32:24.916569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.643 BaseBdev2 00:18:51.643 21:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:51.643 21:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:51.643 21:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:51.643 21:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:51.643 21:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:51.643 21:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:51.643 
21:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:51.903 [ 00:18:51.903 { 00:18:51.903 "name": "BaseBdev2", 00:18:51.903 "aliases": [ 00:18:51.903 "383376da-148b-40cb-b9f3-a71efef46f65" 00:18:51.903 ], 00:18:51.903 "product_name": "Malloc disk", 00:18:51.903 "block_size": 512, 00:18:51.903 "num_blocks": 65536, 00:18:51.903 "uuid": "383376da-148b-40cb-b9f3-a71efef46f65", 00:18:51.903 "assigned_rate_limits": { 00:18:51.903 "rw_ios_per_sec": 0, 00:18:51.903 "rw_mbytes_per_sec": 0, 00:18:51.903 "r_mbytes_per_sec": 0, 00:18:51.903 "w_mbytes_per_sec": 0 00:18:51.903 }, 00:18:51.903 "claimed": true, 00:18:51.903 "claim_type": "exclusive_write", 00:18:51.903 "zoned": false, 00:18:51.903 "supported_io_types": { 00:18:51.903 "read": true, 00:18:51.903 "write": true, 00:18:51.903 "unmap": true, 00:18:51.903 "flush": true, 00:18:51.903 "reset": true, 00:18:51.903 "nvme_admin": false, 00:18:51.903 "nvme_io": false, 00:18:51.903 "nvme_io_md": false, 00:18:51.903 "write_zeroes": true, 00:18:51.903 "zcopy": true, 00:18:51.903 "get_zone_info": false, 00:18:51.903 "zone_management": false, 00:18:51.903 "zone_append": false, 00:18:51.903 "compare": false, 00:18:51.903 "compare_and_write": false, 00:18:51.903 "abort": true, 00:18:51.903 "seek_hole": false, 00:18:51.903 "seek_data": false, 00:18:51.903 "copy": true, 00:18:51.903 "nvme_iov_md": false 00:18:51.903 }, 00:18:51.903 "memory_domains": [ 00:18:51.903 { 00:18:51.903 "dma_device_id": "system", 00:18:51.903 "dma_device_type": 1 00:18:51.903 }, 00:18:51.903 { 00:18:51.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.903 "dma_device_type": 2 00:18:51.903 } 00:18:51.903 ], 00:18:51.903 "driver_specific": {} 00:18:51.903 } 00:18:51.903 ] 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.903 21:32:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.903 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.163 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.163 "name": "Existed_Raid", 00:18:52.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.163 "strip_size_kb": 64, 00:18:52.163 "state": "configuring", 00:18:52.163 "raid_level": "raid0", 00:18:52.163 "superblock": false, 00:18:52.163 "num_base_bdevs": 3, 00:18:52.163 "num_base_bdevs_discovered": 2, 00:18:52.163 "num_base_bdevs_operational": 3, 00:18:52.163 "base_bdevs_list": [ 00:18:52.163 { 00:18:52.163 "name": "BaseBdev1", 00:18:52.163 "uuid": "be8731ec-379f-4aab-ac75-eb5d219d0a72", 00:18:52.163 "is_configured": true, 00:18:52.163 "data_offset": 0, 00:18:52.163 "data_size": 65536 00:18:52.163 }, 00:18:52.163 { 00:18:52.163 "name": "BaseBdev2", 00:18:52.163 "uuid": "383376da-148b-40cb-b9f3-a71efef46f65", 00:18:52.163 "is_configured": true, 00:18:52.163 "data_offset": 0, 00:18:52.163 "data_size": 65536 00:18:52.163 }, 00:18:52.163 { 00:18:52.163 "name": "BaseBdev3", 00:18:52.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.163 "is_configured": false, 00:18:52.163 "data_offset": 0, 00:18:52.163 "data_size": 0 00:18:52.163 } 00:18:52.163 ] 00:18:52.163 }' 00:18:52.163 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.163 21:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.733 21:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.992 [2024-07-15 21:32:26.206936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.992 [2024-07-15 21:32:26.207094] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:52.992 [2024-07-15 21:32:26.207114] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:52.992 [2024-07-15 21:32:26.207282] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:52.992 [2024-07-15 21:32:26.207627] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:52.992 [2024-07-15 21:32:26.207668] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:52.992 [2024-07-15 21:32:26.207942] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.992 BaseBdev3 00:18:52.992 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:52.992 21:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:52.992 21:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:52.992 21:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:52.992 21:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:52.992 21:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:52.992 21:32:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:53.251 [ 00:18:53.251 { 00:18:53.251 "name": "BaseBdev3", 00:18:53.251 "aliases": [ 00:18:53.251 "ddae511a-9603-473e-8a32-e51b4a5fa549" 00:18:53.251 ], 00:18:53.251 "product_name": "Malloc disk", 00:18:53.251 "block_size": 512, 00:18:53.251 "num_blocks": 65536, 00:18:53.251 "uuid": "ddae511a-9603-473e-8a32-e51b4a5fa549", 00:18:53.251 "assigned_rate_limits": { 00:18:53.251 "rw_ios_per_sec": 0, 00:18:53.251 "rw_mbytes_per_sec": 0, 00:18:53.251 "r_mbytes_per_sec": 0, 00:18:53.251 "w_mbytes_per_sec": 0 00:18:53.251 }, 00:18:53.251 "claimed": true, 00:18:53.251 "claim_type": "exclusive_write", 00:18:53.251 "zoned": false, 00:18:53.251 "supported_io_types": { 00:18:53.251 "read": true, 00:18:53.251 "write": true, 00:18:53.251 "unmap": true, 00:18:53.251 "flush": true, 00:18:53.251 "reset": true, 00:18:53.251 "nvme_admin": false, 00:18:53.251 "nvme_io": false, 00:18:53.251 "nvme_io_md": false, 00:18:53.251 "write_zeroes": true, 00:18:53.251 "zcopy": true, 00:18:53.251 "get_zone_info": false, 00:18:53.251 "zone_management": false, 00:18:53.251 "zone_append": false, 00:18:53.251 "compare": false, 00:18:53.251 "compare_and_write": false, 00:18:53.251 "abort": true, 00:18:53.251 "seek_hole": false, 00:18:53.251 "seek_data": false, 00:18:53.251 "copy": true, 00:18:53.251 "nvme_iov_md": false 00:18:53.251 }, 00:18:53.251 "memory_domains": [ 00:18:53.251 { 00:18:53.251 "dma_device_id": "system", 00:18:53.251 "dma_device_type": 1 00:18:53.251 }, 00:18:53.251 { 00:18:53.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.251 "dma_device_type": 2 00:18:53.251 } 00:18:53.251 ], 00:18:53.251 "driver_specific": {} 00:18:53.251 } 00:18:53.251 ] 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.251 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.510 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.510 "name": "Existed_Raid", 00:18:53.510 "uuid": "a0c78885-ebe3-4ef5-b389-df1e500d8876", 00:18:53.510 "strip_size_kb": 64, 00:18:53.510 "state": "online", 00:18:53.510 "raid_level": "raid0", 00:18:53.510 "superblock": false, 00:18:53.510 "num_base_bdevs": 3, 00:18:53.510 "num_base_bdevs_discovered": 3, 00:18:53.510 "num_base_bdevs_operational": 3, 00:18:53.510 "base_bdevs_list": [ 00:18:53.510 { 00:18:53.510 "name": "BaseBdev1", 00:18:53.510 "uuid": "be8731ec-379f-4aab-ac75-eb5d219d0a72", 00:18:53.510 "is_configured": true, 00:18:53.510 "data_offset": 0, 00:18:53.510 "data_size": 65536 00:18:53.510 }, 00:18:53.510 { 00:18:53.510 "name": "BaseBdev2", 00:18:53.510 "uuid": "383376da-148b-40cb-b9f3-a71efef46f65", 00:18:53.510 "is_configured": true, 00:18:53.510 "data_offset": 0, 00:18:53.510 "data_size": 65536 00:18:53.510 }, 00:18:53.510 { 00:18:53.510 "name": "BaseBdev3", 00:18:53.510 "uuid": "ddae511a-9603-473e-8a32-e51b4a5fa549", 00:18:53.510 "is_configured": true, 00:18:53.510 "data_offset": 0, 00:18:53.510 "data_size": 65536 00:18:53.510 } 00:18:53.510 ] 00:18:53.510 }' 00:18:53.510 21:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.510 21:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.078 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:54.078 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:54.078 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:54.078 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:54.078 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:54.078 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:54.078 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:54.078 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:54.337 [2024-07-15 21:32:27.516996] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.337 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:54.337 "name": "Existed_Raid", 00:18:54.337 "aliases": [ 00:18:54.337 "a0c78885-ebe3-4ef5-b389-df1e500d8876" 00:18:54.337 ], 00:18:54.337 "product_name": "Raid Volume", 00:18:54.337 "block_size": 512, 00:18:54.337 "num_blocks": 196608, 00:18:54.337 "uuid": "a0c78885-ebe3-4ef5-b389-df1e500d8876", 00:18:54.337 "assigned_rate_limits": { 00:18:54.337 "rw_ios_per_sec": 0, 00:18:54.337 "rw_mbytes_per_sec": 0, 00:18:54.337 "r_mbytes_per_sec": 0, 00:18:54.337 "w_mbytes_per_sec": 0 00:18:54.337 }, 00:18:54.337 "claimed": false, 00:18:54.337 "zoned": false, 00:18:54.337 "supported_io_types": { 00:18:54.337 "read": true, 00:18:54.337 "write": true, 00:18:54.337 "unmap": true, 00:18:54.337 "flush": true, 00:18:54.337 "reset": true, 
00:18:54.337 "nvme_admin": false, 00:18:54.337 "nvme_io": false, 00:18:54.337 "nvme_io_md": false, 00:18:54.337 "write_zeroes": true, 00:18:54.337 "zcopy": false, 00:18:54.337 "get_zone_info": false, 00:18:54.337 "zone_management": false, 00:18:54.337 "zone_append": false, 00:18:54.337 "compare": false, 00:18:54.337 "compare_and_write": false, 00:18:54.337 "abort": false, 00:18:54.337 "seek_hole": false, 00:18:54.337 "seek_data": false, 00:18:54.337 "copy": false, 00:18:54.337 "nvme_iov_md": false 00:18:54.337 }, 00:18:54.337 "memory_domains": [ 00:18:54.337 { 00:18:54.337 "dma_device_id": "system", 00:18:54.337 "dma_device_type": 1 00:18:54.337 }, 00:18:54.337 { 00:18:54.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.337 "dma_device_type": 2 00:18:54.337 }, 00:18:54.337 { 00:18:54.337 "dma_device_id": "system", 00:18:54.337 "dma_device_type": 1 00:18:54.337 }, 00:18:54.337 { 00:18:54.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.337 "dma_device_type": 2 00:18:54.337 }, 00:18:54.337 { 00:18:54.337 "dma_device_id": "system", 00:18:54.337 "dma_device_type": 1 00:18:54.337 }, 00:18:54.337 { 00:18:54.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.337 "dma_device_type": 2 00:18:54.337 } 00:18:54.337 ], 00:18:54.337 "driver_specific": { 00:18:54.337 "raid": { 00:18:54.337 "uuid": "a0c78885-ebe3-4ef5-b389-df1e500d8876", 00:18:54.337 "strip_size_kb": 64, 00:18:54.337 "state": "online", 00:18:54.337 "raid_level": "raid0", 00:18:54.337 "superblock": false, 00:18:54.337 "num_base_bdevs": 3, 00:18:54.337 "num_base_bdevs_discovered": 3, 00:18:54.338 "num_base_bdevs_operational": 3, 00:18:54.338 "base_bdevs_list": [ 00:18:54.338 { 00:18:54.338 "name": "BaseBdev1", 00:18:54.338 "uuid": "be8731ec-379f-4aab-ac75-eb5d219d0a72", 00:18:54.338 "is_configured": true, 00:18:54.338 "data_offset": 0, 00:18:54.338 "data_size": 65536 00:18:54.338 }, 00:18:54.338 { 00:18:54.338 "name": "BaseBdev2", 00:18:54.338 "uuid": "383376da-148b-40cb-b9f3-a71efef46f65", 00:18:54.338 "is_configured": true, 00:18:54.338 "data_offset": 0, 00:18:54.338 "data_size": 65536 00:18:54.338 }, 00:18:54.338 { 00:18:54.338 "name": "BaseBdev3", 00:18:54.338 "uuid": "ddae511a-9603-473e-8a32-e51b4a5fa549", 00:18:54.338 "is_configured": true, 00:18:54.338 "data_offset": 0, 00:18:54.338 "data_size": 65536 00:18:54.338 } 00:18:54.338 ] 00:18:54.338 } 00:18:54.338 } 00:18:54.338 }' 00:18:54.338 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:54.338 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:54.338 BaseBdev2 00:18:54.338 BaseBdev3' 00:18:54.338 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.338 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:54.338 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.596 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:54.596 "name": "BaseBdev1", 00:18:54.596 "aliases": [ 00:18:54.596 "be8731ec-379f-4aab-ac75-eb5d219d0a72" 00:18:54.596 ], 00:18:54.596 "product_name": "Malloc disk", 00:18:54.596 "block_size": 512, 00:18:54.596 "num_blocks": 65536, 00:18:54.596 "uuid": "be8731ec-379f-4aab-ac75-eb5d219d0a72", 00:18:54.596 
"assigned_rate_limits": { 00:18:54.596 "rw_ios_per_sec": 0, 00:18:54.596 "rw_mbytes_per_sec": 0, 00:18:54.596 "r_mbytes_per_sec": 0, 00:18:54.596 "w_mbytes_per_sec": 0 00:18:54.596 }, 00:18:54.596 "claimed": true, 00:18:54.596 "claim_type": "exclusive_write", 00:18:54.596 "zoned": false, 00:18:54.596 "supported_io_types": { 00:18:54.596 "read": true, 00:18:54.596 "write": true, 00:18:54.596 "unmap": true, 00:18:54.596 "flush": true, 00:18:54.596 "reset": true, 00:18:54.596 "nvme_admin": false, 00:18:54.596 "nvme_io": false, 00:18:54.596 "nvme_io_md": false, 00:18:54.596 "write_zeroes": true, 00:18:54.596 "zcopy": true, 00:18:54.596 "get_zone_info": false, 00:18:54.596 "zone_management": false, 00:18:54.596 "zone_append": false, 00:18:54.596 "compare": false, 00:18:54.596 "compare_and_write": false, 00:18:54.596 "abort": true, 00:18:54.596 "seek_hole": false, 00:18:54.596 "seek_data": false, 00:18:54.596 "copy": true, 00:18:54.596 "nvme_iov_md": false 00:18:54.596 }, 00:18:54.596 "memory_domains": [ 00:18:54.596 { 00:18:54.596 "dma_device_id": "system", 00:18:54.596 "dma_device_type": 1 00:18:54.596 }, 00:18:54.596 { 00:18:54.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.596 "dma_device_type": 2 00:18:54.596 } 00:18:54.596 ], 00:18:54.596 "driver_specific": {} 00:18:54.596 }' 00:18:54.596 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.596 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.596 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:54.596 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.596 21:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:54.855 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:55.114 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:55.114 "name": "BaseBdev2", 00:18:55.114 "aliases": [ 00:18:55.114 "383376da-148b-40cb-b9f3-a71efef46f65" 00:18:55.114 ], 00:18:55.114 "product_name": "Malloc disk", 00:18:55.114 "block_size": 512, 00:18:55.114 "num_blocks": 65536, 00:18:55.114 "uuid": "383376da-148b-40cb-b9f3-a71efef46f65", 00:18:55.114 "assigned_rate_limits": { 00:18:55.114 "rw_ios_per_sec": 0, 00:18:55.114 "rw_mbytes_per_sec": 0, 00:18:55.114 "r_mbytes_per_sec": 0, 00:18:55.114 "w_mbytes_per_sec": 0 00:18:55.114 }, 00:18:55.114 
"claimed": true, 00:18:55.114 "claim_type": "exclusive_write", 00:18:55.114 "zoned": false, 00:18:55.114 "supported_io_types": { 00:18:55.114 "read": true, 00:18:55.114 "write": true, 00:18:55.114 "unmap": true, 00:18:55.114 "flush": true, 00:18:55.114 "reset": true, 00:18:55.114 "nvme_admin": false, 00:18:55.114 "nvme_io": false, 00:18:55.114 "nvme_io_md": false, 00:18:55.114 "write_zeroes": true, 00:18:55.114 "zcopy": true, 00:18:55.114 "get_zone_info": false, 00:18:55.114 "zone_management": false, 00:18:55.114 "zone_append": false, 00:18:55.114 "compare": false, 00:18:55.114 "compare_and_write": false, 00:18:55.114 "abort": true, 00:18:55.114 "seek_hole": false, 00:18:55.114 "seek_data": false, 00:18:55.114 "copy": true, 00:18:55.114 "nvme_iov_md": false 00:18:55.114 }, 00:18:55.114 "memory_domains": [ 00:18:55.114 { 00:18:55.114 "dma_device_id": "system", 00:18:55.114 "dma_device_type": 1 00:18:55.114 }, 00:18:55.114 { 00:18:55.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.114 "dma_device_type": 2 00:18:55.114 } 00:18:55.114 ], 00:18:55.114 "driver_specific": {} 00:18:55.114 }' 00:18:55.114 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.114 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.114 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:55.114 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.114 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.372 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:55.372 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.372 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.372 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:55.372 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.372 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.372 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:55.372 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:55.631 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:55.631 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:55.631 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:55.631 "name": "BaseBdev3", 00:18:55.631 "aliases": [ 00:18:55.631 "ddae511a-9603-473e-8a32-e51b4a5fa549" 00:18:55.631 ], 00:18:55.631 "product_name": "Malloc disk", 00:18:55.631 "block_size": 512, 00:18:55.631 "num_blocks": 65536, 00:18:55.631 "uuid": "ddae511a-9603-473e-8a32-e51b4a5fa549", 00:18:55.631 "assigned_rate_limits": { 00:18:55.631 "rw_ios_per_sec": 0, 00:18:55.631 "rw_mbytes_per_sec": 0, 00:18:55.631 "r_mbytes_per_sec": 0, 00:18:55.631 "w_mbytes_per_sec": 0 00:18:55.631 }, 00:18:55.631 "claimed": true, 00:18:55.631 "claim_type": "exclusive_write", 00:18:55.631 "zoned": false, 00:18:55.631 "supported_io_types": { 00:18:55.631 "read": true, 00:18:55.631 "write": true, 00:18:55.631 
"unmap": true, 00:18:55.631 "flush": true, 00:18:55.631 "reset": true, 00:18:55.631 "nvme_admin": false, 00:18:55.631 "nvme_io": false, 00:18:55.631 "nvme_io_md": false, 00:18:55.631 "write_zeroes": true, 00:18:55.631 "zcopy": true, 00:18:55.631 "get_zone_info": false, 00:18:55.631 "zone_management": false, 00:18:55.631 "zone_append": false, 00:18:55.631 "compare": false, 00:18:55.631 "compare_and_write": false, 00:18:55.631 "abort": true, 00:18:55.631 "seek_hole": false, 00:18:55.631 "seek_data": false, 00:18:55.631 "copy": true, 00:18:55.631 "nvme_iov_md": false 00:18:55.631 }, 00:18:55.631 "memory_domains": [ 00:18:55.631 { 00:18:55.631 "dma_device_id": "system", 00:18:55.631 "dma_device_type": 1 00:18:55.631 }, 00:18:55.631 { 00:18:55.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.631 "dma_device_type": 2 00:18:55.631 } 00:18:55.631 ], 00:18:55.631 "driver_specific": {} 00:18:55.631 }' 00:18:55.631 21:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.889 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.889 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:55.889 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.889 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.889 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:55.889 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.889 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.147 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:56.147 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.147 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.147 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:56.147 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:56.406 [2024-07-15 21:32:29.569185] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:56.406 [2024-07-15 21:32:29.569348] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.406 [2024-07-15 21:32:29.569430] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:56.406 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:56.407 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.407 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.407 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.407 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.407 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.407 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.667 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.667 "name": "Existed_Raid", 00:18:56.667 "uuid": "a0c78885-ebe3-4ef5-b389-df1e500d8876", 00:18:56.667 "strip_size_kb": 64, 00:18:56.667 "state": "offline", 00:18:56.667 "raid_level": "raid0", 00:18:56.667 "superblock": false, 00:18:56.667 "num_base_bdevs": 3, 00:18:56.667 "num_base_bdevs_discovered": 2, 00:18:56.667 "num_base_bdevs_operational": 2, 00:18:56.667 "base_bdevs_list": [ 00:18:56.667 { 00:18:56.667 "name": null, 00:18:56.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.667 "is_configured": false, 00:18:56.667 "data_offset": 0, 00:18:56.667 "data_size": 65536 00:18:56.667 }, 00:18:56.667 { 00:18:56.667 "name": "BaseBdev2", 00:18:56.667 "uuid": "383376da-148b-40cb-b9f3-a71efef46f65", 00:18:56.667 "is_configured": true, 00:18:56.667 "data_offset": 0, 00:18:56.667 "data_size": 65536 00:18:56.667 }, 00:18:56.667 { 00:18:56.667 "name": "BaseBdev3", 00:18:56.667 "uuid": "ddae511a-9603-473e-8a32-e51b4a5fa549", 00:18:56.667 "is_configured": true, 00:18:56.667 "data_offset": 0, 00:18:56.667 "data_size": 65536 00:18:56.667 } 00:18:56.667 ] 00:18:56.667 }' 00:18:56.667 21:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.667 21:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.233 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:57.233 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:57.233 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.233 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:57.491 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:57.491 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.491 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:57.491 [2024-07-15 21:32:30.808394] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
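The checks running through this part of the trace follow a single pattern: each time a base bdev is deleted, the test fetches the raid bdev list over the test RPC socket and jq-selects Existed_Raid to confirm its state and base-bdev counters. Below is a minimal sketch of that pattern, reusing only the socket path, RPC calls, and jq filter that appear in the trace above; it assumes an SPDK target is already running and serving /var/tmp/spdk-raid.sock, and the ".state" extraction is just one illustrative way to read the field shown in the dumps.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Dump the raid bdev under test, as verify_raid_bdev_state does in the trace.
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# Pull just the state field from the same output (raid0 has no redundancy,
# so deleting a base bdev is expected to leave the raid "offline").
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

# Delete another base bdev, as the test does here with BaseBdev2.
$rpc -s $sock bdev_malloc_delete BaseBdev2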
00:18:57.749 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:57.749 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:57.749 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.749 21:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:57.750 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:57.750 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.750 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:58.007 [2024-07-15 21:32:31.245049] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:58.007 [2024-07-15 21:32:31.245153] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:58.007 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:58.007 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:58.007 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.007 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.265 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:58.265 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:58.265 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:58.265 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:58.265 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:58.265 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:58.522 BaseBdev2 00:18:58.522 21:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:58.522 21:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:58.522 21:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:58.522 21:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:58.522 21:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:58.522 21:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:58.522 21:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.781 21:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:58.781 [ 00:18:58.781 { 00:18:58.781 "name": "BaseBdev2", 
00:18:58.781 "aliases": [ 00:18:58.781 "60e6a2ac-15f3-4087-a143-8f8b3ab30a62" 00:18:58.781 ], 00:18:58.781 "product_name": "Malloc disk", 00:18:58.781 "block_size": 512, 00:18:58.781 "num_blocks": 65536, 00:18:58.781 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:18:58.781 "assigned_rate_limits": { 00:18:58.781 "rw_ios_per_sec": 0, 00:18:58.781 "rw_mbytes_per_sec": 0, 00:18:58.781 "r_mbytes_per_sec": 0, 00:18:58.781 "w_mbytes_per_sec": 0 00:18:58.781 }, 00:18:58.781 "claimed": false, 00:18:58.781 "zoned": false, 00:18:58.781 "supported_io_types": { 00:18:58.781 "read": true, 00:18:58.781 "write": true, 00:18:58.781 "unmap": true, 00:18:58.781 "flush": true, 00:18:58.781 "reset": true, 00:18:58.781 "nvme_admin": false, 00:18:58.781 "nvme_io": false, 00:18:58.781 "nvme_io_md": false, 00:18:58.781 "write_zeroes": true, 00:18:58.781 "zcopy": true, 00:18:58.781 "get_zone_info": false, 00:18:58.781 "zone_management": false, 00:18:58.781 "zone_append": false, 00:18:58.781 "compare": false, 00:18:58.781 "compare_and_write": false, 00:18:58.781 "abort": true, 00:18:58.781 "seek_hole": false, 00:18:58.781 "seek_data": false, 00:18:58.781 "copy": true, 00:18:58.781 "nvme_iov_md": false 00:18:58.781 }, 00:18:58.781 "memory_domains": [ 00:18:58.781 { 00:18:58.781 "dma_device_id": "system", 00:18:58.781 "dma_device_type": 1 00:18:58.781 }, 00:18:58.781 { 00:18:58.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.781 "dma_device_type": 2 00:18:58.781 } 00:18:58.781 ], 00:18:58.781 "driver_specific": {} 00:18:58.781 } 00:18:58.781 ] 00:18:58.781 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:58.781 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:58.781 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:58.781 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:59.038 BaseBdev3 00:18:59.038 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:59.038 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:59.038 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:59.038 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:59.038 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:59.038 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:59.038 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:59.296 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:59.555 [ 00:18:59.555 { 00:18:59.555 "name": "BaseBdev3", 00:18:59.555 "aliases": [ 00:18:59.555 "16b1c8f3-78b8-4b7a-96d7-684ab03530c8" 00:18:59.555 ], 00:18:59.555 "product_name": "Malloc disk", 00:18:59.555 "block_size": 512, 00:18:59.555 "num_blocks": 65536, 00:18:59.555 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:18:59.555 "assigned_rate_limits": { 00:18:59.555 "rw_ios_per_sec": 0, 
00:18:59.555 "rw_mbytes_per_sec": 0, 00:18:59.555 "r_mbytes_per_sec": 0, 00:18:59.555 "w_mbytes_per_sec": 0 00:18:59.555 }, 00:18:59.555 "claimed": false, 00:18:59.555 "zoned": false, 00:18:59.555 "supported_io_types": { 00:18:59.555 "read": true, 00:18:59.555 "write": true, 00:18:59.555 "unmap": true, 00:18:59.555 "flush": true, 00:18:59.555 "reset": true, 00:18:59.555 "nvme_admin": false, 00:18:59.555 "nvme_io": false, 00:18:59.555 "nvme_io_md": false, 00:18:59.555 "write_zeroes": true, 00:18:59.555 "zcopy": true, 00:18:59.555 "get_zone_info": false, 00:18:59.555 "zone_management": false, 00:18:59.555 "zone_append": false, 00:18:59.555 "compare": false, 00:18:59.555 "compare_and_write": false, 00:18:59.555 "abort": true, 00:18:59.555 "seek_hole": false, 00:18:59.555 "seek_data": false, 00:18:59.555 "copy": true, 00:18:59.555 "nvme_iov_md": false 00:18:59.555 }, 00:18:59.555 "memory_domains": [ 00:18:59.555 { 00:18:59.555 "dma_device_id": "system", 00:18:59.555 "dma_device_type": 1 00:18:59.555 }, 00:18:59.555 { 00:18:59.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.555 "dma_device_type": 2 00:18:59.555 } 00:18:59.555 ], 00:18:59.555 "driver_specific": {} 00:18:59.555 } 00:18:59.555 ] 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:59.555 [2024-07-15 21:32:32.868192] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.555 [2024-07-15 21:32:32.868314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.555 [2024-07-15 21:32:32.868370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.555 [2024-07-15 21:32:32.869967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.555 21:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.813 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.813 "name": "Existed_Raid", 00:18:59.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.813 "strip_size_kb": 64, 00:18:59.813 "state": "configuring", 00:18:59.813 "raid_level": "raid0", 00:18:59.813 "superblock": false, 00:18:59.813 "num_base_bdevs": 3, 00:18:59.813 "num_base_bdevs_discovered": 2, 00:18:59.813 "num_base_bdevs_operational": 3, 00:18:59.813 "base_bdevs_list": [ 00:18:59.813 { 00:18:59.813 "name": "BaseBdev1", 00:18:59.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.813 "is_configured": false, 00:18:59.813 "data_offset": 0, 00:18:59.813 "data_size": 0 00:18:59.813 }, 00:18:59.813 { 00:18:59.813 "name": "BaseBdev2", 00:18:59.813 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:18:59.813 "is_configured": true, 00:18:59.813 "data_offset": 0, 00:18:59.813 "data_size": 65536 00:18:59.813 }, 00:18:59.813 { 00:18:59.813 "name": "BaseBdev3", 00:18:59.813 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:18:59.813 "is_configured": true, 00:18:59.813 "data_offset": 0, 00:18:59.813 "data_size": 65536 00:18:59.813 } 00:18:59.813 ] 00:18:59.813 }' 00:18:59.813 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.813 21:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.379 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:00.635 [2024-07-15 21:32:33.818496] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:00.635 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.636 21:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.892 21:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.892 "name": "Existed_Raid", 
00:19:00.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.892 "strip_size_kb": 64, 00:19:00.892 "state": "configuring", 00:19:00.892 "raid_level": "raid0", 00:19:00.892 "superblock": false, 00:19:00.892 "num_base_bdevs": 3, 00:19:00.892 "num_base_bdevs_discovered": 1, 00:19:00.892 "num_base_bdevs_operational": 3, 00:19:00.892 "base_bdevs_list": [ 00:19:00.892 { 00:19:00.892 "name": "BaseBdev1", 00:19:00.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.892 "is_configured": false, 00:19:00.892 "data_offset": 0, 00:19:00.892 "data_size": 0 00:19:00.892 }, 00:19:00.892 { 00:19:00.892 "name": null, 00:19:00.892 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:00.892 "is_configured": false, 00:19:00.892 "data_offset": 0, 00:19:00.892 "data_size": 65536 00:19:00.892 }, 00:19:00.892 { 00:19:00.892 "name": "BaseBdev3", 00:19:00.892 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:00.892 "is_configured": true, 00:19:00.892 "data_offset": 0, 00:19:00.892 "data_size": 65536 00:19:00.892 } 00:19:00.892 ] 00:19:00.892 }' 00:19:00.892 21:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.892 21:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.457 21:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.457 21:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:01.457 21:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:01.457 21:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:01.715 [2024-07-15 21:32:34.975297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.715 BaseBdev1 00:19:01.715 21:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:01.715 21:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:01.715 21:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:01.715 21:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:01.715 21:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:01.715 21:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:01.715 21:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.973 21:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:01.973 [ 00:19:01.973 { 00:19:01.973 "name": "BaseBdev1", 00:19:01.973 "aliases": [ 00:19:01.973 "410baf8e-326c-40b4-a9e2-32f7ca6f5a08" 00:19:01.973 ], 00:19:01.973 "product_name": "Malloc disk", 00:19:01.973 "block_size": 512, 00:19:01.973 "num_blocks": 65536, 00:19:01.973 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:01.973 "assigned_rate_limits": { 00:19:01.973 "rw_ios_per_sec": 0, 00:19:01.973 "rw_mbytes_per_sec": 0, 00:19:01.973 
"r_mbytes_per_sec": 0, 00:19:01.973 "w_mbytes_per_sec": 0 00:19:01.973 }, 00:19:01.973 "claimed": true, 00:19:01.973 "claim_type": "exclusive_write", 00:19:01.973 "zoned": false, 00:19:01.973 "supported_io_types": { 00:19:01.973 "read": true, 00:19:01.973 "write": true, 00:19:01.973 "unmap": true, 00:19:01.973 "flush": true, 00:19:01.973 "reset": true, 00:19:01.973 "nvme_admin": false, 00:19:01.973 "nvme_io": false, 00:19:01.973 "nvme_io_md": false, 00:19:01.973 "write_zeroes": true, 00:19:01.973 "zcopy": true, 00:19:01.973 "get_zone_info": false, 00:19:01.973 "zone_management": false, 00:19:01.973 "zone_append": false, 00:19:01.973 "compare": false, 00:19:01.973 "compare_and_write": false, 00:19:01.973 "abort": true, 00:19:01.973 "seek_hole": false, 00:19:01.973 "seek_data": false, 00:19:01.973 "copy": true, 00:19:01.973 "nvme_iov_md": false 00:19:01.973 }, 00:19:01.973 "memory_domains": [ 00:19:01.973 { 00:19:01.973 "dma_device_id": "system", 00:19:01.973 "dma_device_type": 1 00:19:01.973 }, 00:19:01.973 { 00:19:01.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.973 "dma_device_type": 2 00:19:01.973 } 00:19:01.973 ], 00:19:01.973 "driver_specific": {} 00:19:01.973 } 00:19:01.973 ] 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.232 "name": "Existed_Raid", 00:19:02.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.232 "strip_size_kb": 64, 00:19:02.232 "state": "configuring", 00:19:02.232 "raid_level": "raid0", 00:19:02.232 "superblock": false, 00:19:02.232 "num_base_bdevs": 3, 00:19:02.232 "num_base_bdevs_discovered": 2, 00:19:02.232 "num_base_bdevs_operational": 3, 00:19:02.232 "base_bdevs_list": [ 00:19:02.232 { 00:19:02.232 "name": "BaseBdev1", 00:19:02.232 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:02.232 "is_configured": true, 00:19:02.232 "data_offset": 0, 00:19:02.232 "data_size": 65536 00:19:02.232 }, 00:19:02.232 { 00:19:02.232 "name": 
null, 00:19:02.232 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:02.232 "is_configured": false, 00:19:02.232 "data_offset": 0, 00:19:02.232 "data_size": 65536 00:19:02.232 }, 00:19:02.232 { 00:19:02.232 "name": "BaseBdev3", 00:19:02.232 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:02.232 "is_configured": true, 00:19:02.232 "data_offset": 0, 00:19:02.232 "data_size": 65536 00:19:02.232 } 00:19:02.232 ] 00:19:02.232 }' 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.232 21:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.797 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.797 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:03.055 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:03.055 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:03.312 [2024-07-15 21:32:36.508606] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.312 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.569 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.569 "name": "Existed_Raid", 00:19:03.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.569 "strip_size_kb": 64, 00:19:03.569 "state": "configuring", 00:19:03.569 "raid_level": "raid0", 00:19:03.569 "superblock": false, 00:19:03.569 "num_base_bdevs": 3, 00:19:03.569 "num_base_bdevs_discovered": 1, 00:19:03.569 "num_base_bdevs_operational": 3, 00:19:03.569 "base_bdevs_list": [ 00:19:03.569 { 00:19:03.569 "name": "BaseBdev1", 00:19:03.569 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:03.569 "is_configured": true, 00:19:03.569 "data_offset": 0, 00:19:03.569 "data_size": 65536 
00:19:03.569 }, 00:19:03.569 { 00:19:03.569 "name": null, 00:19:03.569 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:03.569 "is_configured": false, 00:19:03.569 "data_offset": 0, 00:19:03.569 "data_size": 65536 00:19:03.569 }, 00:19:03.569 { 00:19:03.569 "name": null, 00:19:03.569 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:03.569 "is_configured": false, 00:19:03.569 "data_offset": 0, 00:19:03.569 "data_size": 65536 00:19:03.569 } 00:19:03.569 ] 00:19:03.569 }' 00:19:03.569 21:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.569 21:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.133 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.133 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:04.391 [2024-07-15 21:32:37.678754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.391 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.648 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.648 "name": "Existed_Raid", 00:19:04.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.648 "strip_size_kb": 64, 00:19:04.648 "state": "configuring", 00:19:04.648 "raid_level": "raid0", 00:19:04.648 "superblock": false, 00:19:04.648 "num_base_bdevs": 3, 00:19:04.648 "num_base_bdevs_discovered": 2, 00:19:04.648 "num_base_bdevs_operational": 3, 00:19:04.648 "base_bdevs_list": [ 00:19:04.648 { 00:19:04.648 "name": "BaseBdev1", 00:19:04.648 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:04.648 
"is_configured": true, 00:19:04.648 "data_offset": 0, 00:19:04.648 "data_size": 65536 00:19:04.648 }, 00:19:04.648 { 00:19:04.648 "name": null, 00:19:04.648 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:04.648 "is_configured": false, 00:19:04.648 "data_offset": 0, 00:19:04.648 "data_size": 65536 00:19:04.648 }, 00:19:04.648 { 00:19:04.648 "name": "BaseBdev3", 00:19:04.648 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:04.648 "is_configured": true, 00:19:04.648 "data_offset": 0, 00:19:04.648 "data_size": 65536 00:19:04.648 } 00:19:04.648 ] 00:19:04.648 }' 00:19:04.649 21:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.649 21:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.214 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.214 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:05.472 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:05.472 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:05.472 [2024-07-15 21:32:38.760829] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.730 21:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.730 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.730 "name": "Existed_Raid", 00:19:05.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.730 "strip_size_kb": 64, 00:19:05.730 "state": "configuring", 00:19:05.730 "raid_level": "raid0", 00:19:05.730 "superblock": false, 00:19:05.730 "num_base_bdevs": 3, 00:19:05.730 "num_base_bdevs_discovered": 1, 00:19:05.730 "num_base_bdevs_operational": 3, 00:19:05.730 "base_bdevs_list": [ 00:19:05.730 { 00:19:05.730 "name": null, 00:19:05.730 "uuid": 
"410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:05.730 "is_configured": false, 00:19:05.730 "data_offset": 0, 00:19:05.730 "data_size": 65536 00:19:05.730 }, 00:19:05.730 { 00:19:05.730 "name": null, 00:19:05.730 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:05.730 "is_configured": false, 00:19:05.730 "data_offset": 0, 00:19:05.730 "data_size": 65536 00:19:05.730 }, 00:19:05.730 { 00:19:05.730 "name": "BaseBdev3", 00:19:05.730 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:05.730 "is_configured": true, 00:19:05.730 "data_offset": 0, 00:19:05.730 "data_size": 65536 00:19:05.730 } 00:19:05.730 ] 00:19:05.730 }' 00:19:05.730 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.730 21:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.296 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.296 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:06.553 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:06.553 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:06.811 [2024-07-15 21:32:39.960576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.811 21:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.811 21:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.811 "name": "Existed_Raid", 00:19:06.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.811 "strip_size_kb": 64, 00:19:06.811 "state": "configuring", 00:19:06.811 "raid_level": "raid0", 00:19:06.811 "superblock": false, 00:19:06.811 "num_base_bdevs": 3, 00:19:06.811 "num_base_bdevs_discovered": 2, 00:19:06.811 "num_base_bdevs_operational": 3, 00:19:06.811 
"base_bdevs_list": [ 00:19:06.811 { 00:19:06.811 "name": null, 00:19:06.811 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:06.811 "is_configured": false, 00:19:06.811 "data_offset": 0, 00:19:06.811 "data_size": 65536 00:19:06.811 }, 00:19:06.811 { 00:19:06.811 "name": "BaseBdev2", 00:19:06.811 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:06.811 "is_configured": true, 00:19:06.811 "data_offset": 0, 00:19:06.811 "data_size": 65536 00:19:06.811 }, 00:19:06.811 { 00:19:06.811 "name": "BaseBdev3", 00:19:06.811 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:06.811 "is_configured": true, 00:19:06.811 "data_offset": 0, 00:19:06.811 "data_size": 65536 00:19:06.811 } 00:19:06.811 ] 00:19:06.811 }' 00:19:06.811 21:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.811 21:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.376 21:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.376 21:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:07.634 21:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:07.635 21:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:07.635 21:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.893 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 410baf8e-326c-40b4-a9e2-32f7ca6f5a08 00:19:07.893 [2024-07-15 21:32:41.257826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:07.893 [2024-07-15 21:32:41.257946] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:07.893 [2024-07-15 21:32:41.257981] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:07.893 [2024-07-15 21:32:41.258127] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:07.893 [2024-07-15 21:32:41.258409] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:07.893 [2024-07-15 21:32:41.258448] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:19:07.893 [2024-07-15 21:32:41.258685] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.893 NewBaseBdev 00:19:08.150 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:08.150 21:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:08.150 21:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:08.150 21:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:08.150 21:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:08.150 21:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:08.151 21:32:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.151 21:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:08.409 [ 00:19:08.409 { 00:19:08.409 "name": "NewBaseBdev", 00:19:08.409 "aliases": [ 00:19:08.409 "410baf8e-326c-40b4-a9e2-32f7ca6f5a08" 00:19:08.409 ], 00:19:08.409 "product_name": "Malloc disk", 00:19:08.409 "block_size": 512, 00:19:08.409 "num_blocks": 65536, 00:19:08.409 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:08.409 "assigned_rate_limits": { 00:19:08.409 "rw_ios_per_sec": 0, 00:19:08.409 "rw_mbytes_per_sec": 0, 00:19:08.409 "r_mbytes_per_sec": 0, 00:19:08.409 "w_mbytes_per_sec": 0 00:19:08.409 }, 00:19:08.409 "claimed": true, 00:19:08.409 "claim_type": "exclusive_write", 00:19:08.409 "zoned": false, 00:19:08.409 "supported_io_types": { 00:19:08.409 "read": true, 00:19:08.409 "write": true, 00:19:08.409 "unmap": true, 00:19:08.409 "flush": true, 00:19:08.409 "reset": true, 00:19:08.409 "nvme_admin": false, 00:19:08.409 "nvme_io": false, 00:19:08.409 "nvme_io_md": false, 00:19:08.409 "write_zeroes": true, 00:19:08.409 "zcopy": true, 00:19:08.409 "get_zone_info": false, 00:19:08.409 "zone_management": false, 00:19:08.409 "zone_append": false, 00:19:08.409 "compare": false, 00:19:08.409 "compare_and_write": false, 00:19:08.409 "abort": true, 00:19:08.409 "seek_hole": false, 00:19:08.409 "seek_data": false, 00:19:08.409 "copy": true, 00:19:08.409 "nvme_iov_md": false 00:19:08.409 }, 00:19:08.409 "memory_domains": [ 00:19:08.409 { 00:19:08.409 "dma_device_id": "system", 00:19:08.409 "dma_device_type": 1 00:19:08.409 }, 00:19:08.409 { 00:19:08.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.409 "dma_device_type": 2 00:19:08.409 } 00:19:08.409 ], 00:19:08.409 "driver_specific": {} 00:19:08.409 } 00:19:08.409 ] 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.409 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.410 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.410 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.410 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.410 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:08.410 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.410 "name": "Existed_Raid", 00:19:08.410 "uuid": "1ad19402-4e08-4039-9506-b08975189130", 00:19:08.410 "strip_size_kb": 64, 00:19:08.410 "state": "online", 00:19:08.410 "raid_level": "raid0", 00:19:08.410 "superblock": false, 00:19:08.410 "num_base_bdevs": 3, 00:19:08.410 "num_base_bdevs_discovered": 3, 00:19:08.410 "num_base_bdevs_operational": 3, 00:19:08.410 "base_bdevs_list": [ 00:19:08.410 { 00:19:08.410 "name": "NewBaseBdev", 00:19:08.410 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:08.410 "is_configured": true, 00:19:08.410 "data_offset": 0, 00:19:08.410 "data_size": 65536 00:19:08.410 }, 00:19:08.410 { 00:19:08.410 "name": "BaseBdev2", 00:19:08.410 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:08.410 "is_configured": true, 00:19:08.410 "data_offset": 0, 00:19:08.410 "data_size": 65536 00:19:08.410 }, 00:19:08.410 { 00:19:08.410 "name": "BaseBdev3", 00:19:08.410 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:08.410 "is_configured": true, 00:19:08.410 "data_offset": 0, 00:19:08.410 "data_size": 65536 00:19:08.410 } 00:19:08.410 ] 00:19:08.410 }' 00:19:08.410 21:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.410 21:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.019 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:09.020 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:09.020 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:09.020 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:09.020 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:09.020 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:09.020 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:09.020 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:09.279 [2024-07-15 21:32:42.535844] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.279 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:09.279 "name": "Existed_Raid", 00:19:09.279 "aliases": [ 00:19:09.279 "1ad19402-4e08-4039-9506-b08975189130" 00:19:09.279 ], 00:19:09.279 "product_name": "Raid Volume", 00:19:09.279 "block_size": 512, 00:19:09.279 "num_blocks": 196608, 00:19:09.279 "uuid": "1ad19402-4e08-4039-9506-b08975189130", 00:19:09.279 "assigned_rate_limits": { 00:19:09.279 "rw_ios_per_sec": 0, 00:19:09.279 "rw_mbytes_per_sec": 0, 00:19:09.279 "r_mbytes_per_sec": 0, 00:19:09.279 "w_mbytes_per_sec": 0 00:19:09.279 }, 00:19:09.279 "claimed": false, 00:19:09.279 "zoned": false, 00:19:09.279 "supported_io_types": { 00:19:09.279 "read": true, 00:19:09.279 "write": true, 00:19:09.279 "unmap": true, 00:19:09.279 "flush": true, 00:19:09.279 "reset": true, 00:19:09.279 "nvme_admin": false, 00:19:09.279 "nvme_io": false, 00:19:09.279 "nvme_io_md": false, 00:19:09.279 "write_zeroes": true, 00:19:09.279 "zcopy": false, 00:19:09.279 "get_zone_info": false, 
00:19:09.279 "zone_management": false, 00:19:09.279 "zone_append": false, 00:19:09.279 "compare": false, 00:19:09.279 "compare_and_write": false, 00:19:09.279 "abort": false, 00:19:09.279 "seek_hole": false, 00:19:09.279 "seek_data": false, 00:19:09.279 "copy": false, 00:19:09.279 "nvme_iov_md": false 00:19:09.279 }, 00:19:09.279 "memory_domains": [ 00:19:09.279 { 00:19:09.279 "dma_device_id": "system", 00:19:09.279 "dma_device_type": 1 00:19:09.279 }, 00:19:09.279 { 00:19:09.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.279 "dma_device_type": 2 00:19:09.279 }, 00:19:09.279 { 00:19:09.279 "dma_device_id": "system", 00:19:09.279 "dma_device_type": 1 00:19:09.279 }, 00:19:09.279 { 00:19:09.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.279 "dma_device_type": 2 00:19:09.279 }, 00:19:09.279 { 00:19:09.279 "dma_device_id": "system", 00:19:09.279 "dma_device_type": 1 00:19:09.279 }, 00:19:09.279 { 00:19:09.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.279 "dma_device_type": 2 00:19:09.279 } 00:19:09.279 ], 00:19:09.279 "driver_specific": { 00:19:09.279 "raid": { 00:19:09.279 "uuid": "1ad19402-4e08-4039-9506-b08975189130", 00:19:09.279 "strip_size_kb": 64, 00:19:09.279 "state": "online", 00:19:09.279 "raid_level": "raid0", 00:19:09.279 "superblock": false, 00:19:09.279 "num_base_bdevs": 3, 00:19:09.279 "num_base_bdevs_discovered": 3, 00:19:09.279 "num_base_bdevs_operational": 3, 00:19:09.279 "base_bdevs_list": [ 00:19:09.279 { 00:19:09.279 "name": "NewBaseBdev", 00:19:09.279 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:09.279 "is_configured": true, 00:19:09.279 "data_offset": 0, 00:19:09.279 "data_size": 65536 00:19:09.279 }, 00:19:09.279 { 00:19:09.279 "name": "BaseBdev2", 00:19:09.279 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:09.279 "is_configured": true, 00:19:09.279 "data_offset": 0, 00:19:09.279 "data_size": 65536 00:19:09.279 }, 00:19:09.279 { 00:19:09.279 "name": "BaseBdev3", 00:19:09.279 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:09.279 "is_configured": true, 00:19:09.279 "data_offset": 0, 00:19:09.279 "data_size": 65536 00:19:09.279 } 00:19:09.279 ] 00:19:09.279 } 00:19:09.279 } 00:19:09.279 }' 00:19:09.279 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:09.279 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:09.279 BaseBdev2 00:19:09.279 BaseBdev3' 00:19:09.279 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:09.279 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:09.279 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:09.538 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:09.538 "name": "NewBaseBdev", 00:19:09.538 "aliases": [ 00:19:09.538 "410baf8e-326c-40b4-a9e2-32f7ca6f5a08" 00:19:09.538 ], 00:19:09.538 "product_name": "Malloc disk", 00:19:09.538 "block_size": 512, 00:19:09.538 "num_blocks": 65536, 00:19:09.538 "uuid": "410baf8e-326c-40b4-a9e2-32f7ca6f5a08", 00:19:09.538 "assigned_rate_limits": { 00:19:09.538 "rw_ios_per_sec": 0, 00:19:09.538 "rw_mbytes_per_sec": 0, 00:19:09.538 "r_mbytes_per_sec": 0, 00:19:09.538 "w_mbytes_per_sec": 0 00:19:09.538 }, 00:19:09.538 "claimed": 
true, 00:19:09.538 "claim_type": "exclusive_write", 00:19:09.538 "zoned": false, 00:19:09.538 "supported_io_types": { 00:19:09.538 "read": true, 00:19:09.538 "write": true, 00:19:09.538 "unmap": true, 00:19:09.538 "flush": true, 00:19:09.538 "reset": true, 00:19:09.538 "nvme_admin": false, 00:19:09.538 "nvme_io": false, 00:19:09.538 "nvme_io_md": false, 00:19:09.538 "write_zeroes": true, 00:19:09.538 "zcopy": true, 00:19:09.538 "get_zone_info": false, 00:19:09.538 "zone_management": false, 00:19:09.538 "zone_append": false, 00:19:09.538 "compare": false, 00:19:09.538 "compare_and_write": false, 00:19:09.538 "abort": true, 00:19:09.538 "seek_hole": false, 00:19:09.538 "seek_data": false, 00:19:09.538 "copy": true, 00:19:09.538 "nvme_iov_md": false 00:19:09.538 }, 00:19:09.538 "memory_domains": [ 00:19:09.538 { 00:19:09.539 "dma_device_id": "system", 00:19:09.539 "dma_device_type": 1 00:19:09.539 }, 00:19:09.539 { 00:19:09.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.539 "dma_device_type": 2 00:19:09.539 } 00:19:09.539 ], 00:19:09.539 "driver_specific": {} 00:19:09.539 }' 00:19:09.539 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:09.539 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:09.539 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:09.539 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:09.797 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:09.797 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:09.797 21:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.797 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.797 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:09.797 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.797 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.098 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.098 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.098 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:10.098 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:10.098 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:10.098 "name": "BaseBdev2", 00:19:10.098 "aliases": [ 00:19:10.098 "60e6a2ac-15f3-4087-a143-8f8b3ab30a62" 00:19:10.098 ], 00:19:10.098 "product_name": "Malloc disk", 00:19:10.098 "block_size": 512, 00:19:10.098 "num_blocks": 65536, 00:19:10.098 "uuid": "60e6a2ac-15f3-4087-a143-8f8b3ab30a62", 00:19:10.098 "assigned_rate_limits": { 00:19:10.098 "rw_ios_per_sec": 0, 00:19:10.098 "rw_mbytes_per_sec": 0, 00:19:10.098 "r_mbytes_per_sec": 0, 00:19:10.098 "w_mbytes_per_sec": 0 00:19:10.098 }, 00:19:10.098 "claimed": true, 00:19:10.098 "claim_type": "exclusive_write", 00:19:10.098 "zoned": false, 00:19:10.098 "supported_io_types": { 00:19:10.098 "read": true, 00:19:10.098 "write": true, 00:19:10.098 "unmap": true, 
00:19:10.098 "flush": true, 00:19:10.098 "reset": true, 00:19:10.098 "nvme_admin": false, 00:19:10.098 "nvme_io": false, 00:19:10.098 "nvme_io_md": false, 00:19:10.098 "write_zeroes": true, 00:19:10.098 "zcopy": true, 00:19:10.098 "get_zone_info": false, 00:19:10.098 "zone_management": false, 00:19:10.098 "zone_append": false, 00:19:10.098 "compare": false, 00:19:10.098 "compare_and_write": false, 00:19:10.098 "abort": true, 00:19:10.098 "seek_hole": false, 00:19:10.098 "seek_data": false, 00:19:10.098 "copy": true, 00:19:10.098 "nvme_iov_md": false 00:19:10.098 }, 00:19:10.098 "memory_domains": [ 00:19:10.098 { 00:19:10.098 "dma_device_id": "system", 00:19:10.098 "dma_device_type": 1 00:19:10.098 }, 00:19:10.098 { 00:19:10.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.098 "dma_device_type": 2 00:19:10.098 } 00:19:10.098 ], 00:19:10.098 "driver_specific": {} 00:19:10.098 }' 00:19:10.098 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.098 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.356 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:10.356 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.356 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.356 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:10.356 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.356 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.356 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:10.356 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.613 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.613 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.613 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.613 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:10.613 21:32:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:10.870 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:10.870 "name": "BaseBdev3", 00:19:10.870 "aliases": [ 00:19:10.870 "16b1c8f3-78b8-4b7a-96d7-684ab03530c8" 00:19:10.870 ], 00:19:10.870 "product_name": "Malloc disk", 00:19:10.870 "block_size": 512, 00:19:10.870 "num_blocks": 65536, 00:19:10.870 "uuid": "16b1c8f3-78b8-4b7a-96d7-684ab03530c8", 00:19:10.870 "assigned_rate_limits": { 00:19:10.870 "rw_ios_per_sec": 0, 00:19:10.870 "rw_mbytes_per_sec": 0, 00:19:10.870 "r_mbytes_per_sec": 0, 00:19:10.870 "w_mbytes_per_sec": 0 00:19:10.870 }, 00:19:10.870 "claimed": true, 00:19:10.870 "claim_type": "exclusive_write", 00:19:10.870 "zoned": false, 00:19:10.870 "supported_io_types": { 00:19:10.870 "read": true, 00:19:10.870 "write": true, 00:19:10.870 "unmap": true, 00:19:10.870 "flush": true, 00:19:10.870 "reset": true, 00:19:10.870 "nvme_admin": false, 00:19:10.870 "nvme_io": false, 00:19:10.870 "nvme_io_md": false, 00:19:10.870 "write_zeroes": true, 
00:19:10.870 "zcopy": true, 00:19:10.870 "get_zone_info": false, 00:19:10.870 "zone_management": false, 00:19:10.870 "zone_append": false, 00:19:10.870 "compare": false, 00:19:10.870 "compare_and_write": false, 00:19:10.870 "abort": true, 00:19:10.870 "seek_hole": false, 00:19:10.870 "seek_data": false, 00:19:10.870 "copy": true, 00:19:10.870 "nvme_iov_md": false 00:19:10.870 }, 00:19:10.870 "memory_domains": [ 00:19:10.870 { 00:19:10.870 "dma_device_id": "system", 00:19:10.870 "dma_device_type": 1 00:19:10.870 }, 00:19:10.870 { 00:19:10.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.870 "dma_device_type": 2 00:19:10.870 } 00:19:10.870 ], 00:19:10.870 "driver_specific": {} 00:19:10.870 }' 00:19:10.870 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.870 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.870 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:10.870 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.870 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.870 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:10.870 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.127 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.127 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:11.127 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.127 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.127 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:11.127 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:11.385 [2024-07-15 21:32:44.603997] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:11.385 [2024-07-15 21:32:44.604096] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:11.385 [2024-07-15 21:32:44.604189] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.385 [2024-07-15 21:32:44.604256] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.385 [2024-07-15 21:32:44.604279] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 126233 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 126233 ']' 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 126233 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126233 00:19:11.385 killing process with pid 126233 00:19:11.385 21:32:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126233' 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 126233 00:19:11.385 21:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 126233 00:19:11.385 [2024-07-15 21:32:44.645571] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:11.642 [2024-07-15 21:32:44.919070] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:13.019 00:19:13.019 real 0m26.085s 00:19:13.019 user 0m48.257s 00:19:13.019 sys 0m3.167s 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.019 ************************************ 00:19:13.019 END TEST raid_state_function_test 00:19:13.019 ************************************ 00:19:13.019 21:32:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:13.019 21:32:46 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:19:13.019 21:32:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:13.019 21:32:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.019 21:32:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.019 ************************************ 00:19:13.019 START TEST raid_state_function_test_sb 00:19:13.019 ************************************ 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=127206 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 127206' 00:19:13.019 Process raid pid: 127206 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 127206 /var/tmp/spdk-raid.sock 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 127206 ']' 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:13.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.019 21:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.019 [2024-07-15 21:32:46.228417] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
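A condensed, standalone sketch of the setup the xtrace above walks through: the superblock variant of the test builds its base-bdev name list, starts a dedicated bdev_svc app on a private RPC socket, and then drives it with rpc.py. This is reconstructed from the trace output, not from the bdev_raid.sh source; the waitforlisten polling is elided and the "$!" pid capture is an assumption made for the sketch.

    # Build the base bdev name list exactly as the trace expands it.
    num_base_bdevs=3
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))

    # Start a per-test bdev_svc instance on its own RPC socket with raid debug logging.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!   # the real script records the pid (127206 here) and waits for the socket

    # Every RPC in this test names that socket explicitly instead of rpc.py's default one.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 -b "${base_bdevs[*]}" -n Existed_Raid

At this point none of the named base bdevs exist yet, which is why the create call that follows in the log only registers them as missing and leaves Existed_Raid in the "configuring" state.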
00:19:13.019 [2024-07-15 21:32:46.228630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.019 [2024-07-15 21:32:46.383680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.278 [2024-07-15 21:32:46.582342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.536 [2024-07-15 21:32:46.768525] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.797 21:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.797 21:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:13.797 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:14.060 [2024-07-15 21:32:47.220130] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.060 [2024-07-15 21:32:47.220268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.060 [2024-07-15 21:32:47.220299] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.060 [2024-07-15 21:32:47.220336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.060 [2024-07-15 21:32:47.220369] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:14.060 [2024-07-15 21:32:47.220390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.060 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.060 "name": "Existed_Raid", 00:19:14.060 "uuid": 
"2eabcf34-aedf-4070-80aa-7f1512dbfb11", 00:19:14.060 "strip_size_kb": 64, 00:19:14.060 "state": "configuring", 00:19:14.060 "raid_level": "raid0", 00:19:14.060 "superblock": true, 00:19:14.060 "num_base_bdevs": 3, 00:19:14.060 "num_base_bdevs_discovered": 0, 00:19:14.060 "num_base_bdevs_operational": 3, 00:19:14.060 "base_bdevs_list": [ 00:19:14.060 { 00:19:14.060 "name": "BaseBdev1", 00:19:14.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.060 "is_configured": false, 00:19:14.060 "data_offset": 0, 00:19:14.060 "data_size": 0 00:19:14.060 }, 00:19:14.060 { 00:19:14.060 "name": "BaseBdev2", 00:19:14.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.060 "is_configured": false, 00:19:14.060 "data_offset": 0, 00:19:14.060 "data_size": 0 00:19:14.060 }, 00:19:14.060 { 00:19:14.060 "name": "BaseBdev3", 00:19:14.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.060 "is_configured": false, 00:19:14.061 "data_offset": 0, 00:19:14.061 "data_size": 0 00:19:14.061 } 00:19:14.061 ] 00:19:14.061 }' 00:19:14.061 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.061 21:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.638 21:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:14.896 [2024-07-15 21:32:48.138449] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.896 [2024-07-15 21:32:48.138539] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:14.896 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:15.153 [2024-07-15 21:32:48.322156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:15.153 [2024-07-15 21:32:48.322277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:15.153 [2024-07-15 21:32:48.322302] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.153 [2024-07-15 21:32:48.322325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.153 [2024-07-15 21:32:48.322339] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:15.153 [2024-07-15 21:32:48.322364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:15.153 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:15.410 [2024-07-15 21:32:48.536029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.410 BaseBdev1 00:19:15.410 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:15.410 21:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:15.410 21:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:15.410 21:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:19:15.410 21:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:15.410 21:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:15.410 21:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.410 21:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:15.668 [ 00:19:15.668 { 00:19:15.668 "name": "BaseBdev1", 00:19:15.668 "aliases": [ 00:19:15.668 "6b1cf739-469f-4777-a714-c9843f039953" 00:19:15.668 ], 00:19:15.668 "product_name": "Malloc disk", 00:19:15.668 "block_size": 512, 00:19:15.668 "num_blocks": 65536, 00:19:15.668 "uuid": "6b1cf739-469f-4777-a714-c9843f039953", 00:19:15.668 "assigned_rate_limits": { 00:19:15.668 "rw_ios_per_sec": 0, 00:19:15.668 "rw_mbytes_per_sec": 0, 00:19:15.668 "r_mbytes_per_sec": 0, 00:19:15.668 "w_mbytes_per_sec": 0 00:19:15.668 }, 00:19:15.668 "claimed": true, 00:19:15.668 "claim_type": "exclusive_write", 00:19:15.668 "zoned": false, 00:19:15.668 "supported_io_types": { 00:19:15.668 "read": true, 00:19:15.668 "write": true, 00:19:15.668 "unmap": true, 00:19:15.668 "flush": true, 00:19:15.668 "reset": true, 00:19:15.668 "nvme_admin": false, 00:19:15.668 "nvme_io": false, 00:19:15.668 "nvme_io_md": false, 00:19:15.668 "write_zeroes": true, 00:19:15.668 "zcopy": true, 00:19:15.668 "get_zone_info": false, 00:19:15.668 "zone_management": false, 00:19:15.668 "zone_append": false, 00:19:15.668 "compare": false, 00:19:15.668 "compare_and_write": false, 00:19:15.668 "abort": true, 00:19:15.668 "seek_hole": false, 00:19:15.668 "seek_data": false, 00:19:15.668 "copy": true, 00:19:15.668 "nvme_iov_md": false 00:19:15.669 }, 00:19:15.669 "memory_domains": [ 00:19:15.669 { 00:19:15.669 "dma_device_id": "system", 00:19:15.669 "dma_device_type": 1 00:19:15.669 }, 00:19:15.669 { 00:19:15.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.669 "dma_device_type": 2 00:19:15.669 } 00:19:15.669 ], 00:19:15.669 "driver_specific": {} 00:19:15.669 } 00:19:15.669 ] 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.669 21:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.926 21:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.926 "name": "Existed_Raid", 00:19:15.926 "uuid": "b47d3a57-c6f4-450e-be02-4cf68c3351ac", 00:19:15.926 "strip_size_kb": 64, 00:19:15.926 "state": "configuring", 00:19:15.926 "raid_level": "raid0", 00:19:15.926 "superblock": true, 00:19:15.926 "num_base_bdevs": 3, 00:19:15.926 "num_base_bdevs_discovered": 1, 00:19:15.926 "num_base_bdevs_operational": 3, 00:19:15.926 "base_bdevs_list": [ 00:19:15.926 { 00:19:15.926 "name": "BaseBdev1", 00:19:15.926 "uuid": "6b1cf739-469f-4777-a714-c9843f039953", 00:19:15.926 "is_configured": true, 00:19:15.926 "data_offset": 2048, 00:19:15.926 "data_size": 63488 00:19:15.926 }, 00:19:15.926 { 00:19:15.926 "name": "BaseBdev2", 00:19:15.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.926 "is_configured": false, 00:19:15.926 "data_offset": 0, 00:19:15.926 "data_size": 0 00:19:15.926 }, 00:19:15.926 { 00:19:15.926 "name": "BaseBdev3", 00:19:15.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.926 "is_configured": false, 00:19:15.926 "data_offset": 0, 00:19:15.926 "data_size": 0 00:19:15.926 } 00:19:15.926 ] 00:19:15.927 }' 00:19:15.927 21:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.927 21:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.491 21:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:16.491 [2024-07-15 21:32:49.837806] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:16.491 [2024-07-15 21:32:49.837908] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:19:16.491 21:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:16.748 [2024-07-15 21:32:50.017533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.748 [2024-07-15 21:32:50.019081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:16.748 [2024-07-15 21:32:50.019171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:16.748 [2024-07-15 21:32:50.019209] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:16.748 [2024-07-15 21:32:50.019250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:16.748 21:32:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.748 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.005 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.005 "name": "Existed_Raid", 00:19:17.005 "uuid": "428ed0a3-9ec5-4b3c-91aa-e045a78dae79", 00:19:17.005 "strip_size_kb": 64, 00:19:17.005 "state": "configuring", 00:19:17.005 "raid_level": "raid0", 00:19:17.005 "superblock": true, 00:19:17.005 "num_base_bdevs": 3, 00:19:17.005 "num_base_bdevs_discovered": 1, 00:19:17.005 "num_base_bdevs_operational": 3, 00:19:17.005 "base_bdevs_list": [ 00:19:17.005 { 00:19:17.005 "name": "BaseBdev1", 00:19:17.005 "uuid": "6b1cf739-469f-4777-a714-c9843f039953", 00:19:17.005 "is_configured": true, 00:19:17.005 "data_offset": 2048, 00:19:17.005 "data_size": 63488 00:19:17.005 }, 00:19:17.005 { 00:19:17.005 "name": "BaseBdev2", 00:19:17.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.005 "is_configured": false, 00:19:17.006 "data_offset": 0, 00:19:17.006 "data_size": 0 00:19:17.006 }, 00:19:17.006 { 00:19:17.006 "name": "BaseBdev3", 00:19:17.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.006 "is_configured": false, 00:19:17.006 "data_offset": 0, 00:19:17.006 "data_size": 0 00:19:17.006 } 00:19:17.006 ] 00:19:17.006 }' 00:19:17.006 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.006 21:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.570 21:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:17.828 [2024-07-15 21:32:51.040573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.828 BaseBdev2 00:19:17.828 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:17.828 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:17.828 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:17.828 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local i 00:19:17.828 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:17.828 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:17.828 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:18.086 [ 00:19:18.086 { 00:19:18.086 "name": "BaseBdev2", 00:19:18.086 "aliases": [ 00:19:18.086 "1cf60f3a-b37c-4de1-a2de-025a0bdc8b59" 00:19:18.086 ], 00:19:18.086 "product_name": "Malloc disk", 00:19:18.086 "block_size": 512, 00:19:18.086 "num_blocks": 65536, 00:19:18.086 "uuid": "1cf60f3a-b37c-4de1-a2de-025a0bdc8b59", 00:19:18.086 "assigned_rate_limits": { 00:19:18.086 "rw_ios_per_sec": 0, 00:19:18.086 "rw_mbytes_per_sec": 0, 00:19:18.086 "r_mbytes_per_sec": 0, 00:19:18.086 "w_mbytes_per_sec": 0 00:19:18.086 }, 00:19:18.086 "claimed": true, 00:19:18.086 "claim_type": "exclusive_write", 00:19:18.086 "zoned": false, 00:19:18.086 "supported_io_types": { 00:19:18.086 "read": true, 00:19:18.086 "write": true, 00:19:18.086 "unmap": true, 00:19:18.086 "flush": true, 00:19:18.086 "reset": true, 00:19:18.086 "nvme_admin": false, 00:19:18.086 "nvme_io": false, 00:19:18.086 "nvme_io_md": false, 00:19:18.086 "write_zeroes": true, 00:19:18.086 "zcopy": true, 00:19:18.086 "get_zone_info": false, 00:19:18.086 "zone_management": false, 00:19:18.086 "zone_append": false, 00:19:18.086 "compare": false, 00:19:18.086 "compare_and_write": false, 00:19:18.086 "abort": true, 00:19:18.086 "seek_hole": false, 00:19:18.086 "seek_data": false, 00:19:18.086 "copy": true, 00:19:18.086 "nvme_iov_md": false 00:19:18.086 }, 00:19:18.086 "memory_domains": [ 00:19:18.086 { 00:19:18.086 "dma_device_id": "system", 00:19:18.086 "dma_device_type": 1 00:19:18.086 }, 00:19:18.086 { 00:19:18.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.086 "dma_device_type": 2 00:19:18.086 } 00:19:18.086 ], 00:19:18.086 "driver_specific": {} 00:19:18.086 } 00:19:18.086 ] 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.086 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.347 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.347 "name": "Existed_Raid", 00:19:18.347 "uuid": "428ed0a3-9ec5-4b3c-91aa-e045a78dae79", 00:19:18.347 "strip_size_kb": 64, 00:19:18.347 "state": "configuring", 00:19:18.347 "raid_level": "raid0", 00:19:18.347 "superblock": true, 00:19:18.347 "num_base_bdevs": 3, 00:19:18.347 "num_base_bdevs_discovered": 2, 00:19:18.347 "num_base_bdevs_operational": 3, 00:19:18.347 "base_bdevs_list": [ 00:19:18.347 { 00:19:18.347 "name": "BaseBdev1", 00:19:18.347 "uuid": "6b1cf739-469f-4777-a714-c9843f039953", 00:19:18.347 "is_configured": true, 00:19:18.347 "data_offset": 2048, 00:19:18.347 "data_size": 63488 00:19:18.347 }, 00:19:18.347 { 00:19:18.347 "name": "BaseBdev2", 00:19:18.347 "uuid": "1cf60f3a-b37c-4de1-a2de-025a0bdc8b59", 00:19:18.347 "is_configured": true, 00:19:18.347 "data_offset": 2048, 00:19:18.347 "data_size": 63488 00:19:18.347 }, 00:19:18.347 { 00:19:18.347 "name": "BaseBdev3", 00:19:18.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.347 "is_configured": false, 00:19:18.347 "data_offset": 0, 00:19:18.347 "data_size": 0 00:19:18.347 } 00:19:18.347 ] 00:19:18.347 }' 00:19:18.347 21:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.347 21:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.910 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:19.168 [2024-07-15 21:32:52.406593] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.168 [2024-07-15 21:32:52.406903] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:19.168 [2024-07-15 21:32:52.406937] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:19.168 [2024-07-15 21:32:52.407085] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:19.168 BaseBdev3 00:19:19.168 [2024-07-15 21:32:52.407382] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:19.168 [2024-07-15 21:32:52.407425] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:19.168 [2024-07-15 21:32:52.407586] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.168 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:19.168 21:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:19.168 21:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:19.168 21:32:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:19:19.168 21:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:19.168 21:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:19.168 21:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.430 21:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:19.430 [ 00:19:19.430 { 00:19:19.430 "name": "BaseBdev3", 00:19:19.430 "aliases": [ 00:19:19.430 "8075c3f5-3a59-4e9d-afba-d63d1b9d0956" 00:19:19.430 ], 00:19:19.430 "product_name": "Malloc disk", 00:19:19.430 "block_size": 512, 00:19:19.430 "num_blocks": 65536, 00:19:19.430 "uuid": "8075c3f5-3a59-4e9d-afba-d63d1b9d0956", 00:19:19.430 "assigned_rate_limits": { 00:19:19.430 "rw_ios_per_sec": 0, 00:19:19.430 "rw_mbytes_per_sec": 0, 00:19:19.430 "r_mbytes_per_sec": 0, 00:19:19.430 "w_mbytes_per_sec": 0 00:19:19.430 }, 00:19:19.430 "claimed": true, 00:19:19.430 "claim_type": "exclusive_write", 00:19:19.430 "zoned": false, 00:19:19.430 "supported_io_types": { 00:19:19.430 "read": true, 00:19:19.430 "write": true, 00:19:19.430 "unmap": true, 00:19:19.430 "flush": true, 00:19:19.430 "reset": true, 00:19:19.430 "nvme_admin": false, 00:19:19.430 "nvme_io": false, 00:19:19.430 "nvme_io_md": false, 00:19:19.430 "write_zeroes": true, 00:19:19.430 "zcopy": true, 00:19:19.430 "get_zone_info": false, 00:19:19.430 "zone_management": false, 00:19:19.430 "zone_append": false, 00:19:19.430 "compare": false, 00:19:19.430 "compare_and_write": false, 00:19:19.430 "abort": true, 00:19:19.430 "seek_hole": false, 00:19:19.430 "seek_data": false, 00:19:19.430 "copy": true, 00:19:19.430 "nvme_iov_md": false 00:19:19.430 }, 00:19:19.430 "memory_domains": [ 00:19:19.430 { 00:19:19.430 "dma_device_id": "system", 00:19:19.430 "dma_device_type": 1 00:19:19.430 }, 00:19:19.430 { 00:19:19.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.430 "dma_device_type": 2 00:19:19.430 } 00:19:19.430 ], 00:19:19.431 "driver_specific": {} 00:19:19.431 } 00:19:19.431 ] 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.431 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.699 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.699 "name": "Existed_Raid", 00:19:19.699 "uuid": "428ed0a3-9ec5-4b3c-91aa-e045a78dae79", 00:19:19.699 "strip_size_kb": 64, 00:19:19.699 "state": "online", 00:19:19.699 "raid_level": "raid0", 00:19:19.699 "superblock": true, 00:19:19.699 "num_base_bdevs": 3, 00:19:19.699 "num_base_bdevs_discovered": 3, 00:19:19.699 "num_base_bdevs_operational": 3, 00:19:19.699 "base_bdevs_list": [ 00:19:19.699 { 00:19:19.699 "name": "BaseBdev1", 00:19:19.699 "uuid": "6b1cf739-469f-4777-a714-c9843f039953", 00:19:19.699 "is_configured": true, 00:19:19.699 "data_offset": 2048, 00:19:19.699 "data_size": 63488 00:19:19.699 }, 00:19:19.699 { 00:19:19.699 "name": "BaseBdev2", 00:19:19.699 "uuid": "1cf60f3a-b37c-4de1-a2de-025a0bdc8b59", 00:19:19.699 "is_configured": true, 00:19:19.699 "data_offset": 2048, 00:19:19.699 "data_size": 63488 00:19:19.699 }, 00:19:19.699 { 00:19:19.699 "name": "BaseBdev3", 00:19:19.699 "uuid": "8075c3f5-3a59-4e9d-afba-d63d1b9d0956", 00:19:19.699 "is_configured": true, 00:19:19.699 "data_offset": 2048, 00:19:19.699 "data_size": 63488 00:19:19.699 } 00:19:19.699 ] 00:19:19.699 }' 00:19:19.699 21:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.699 21:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.277 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:20.277 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:20.277 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:20.277 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:20.277 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:20.277 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:20.277 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:20.277 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:20.534 [2024-07-15 21:32:53.664574] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.534 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:20.534 "name": "Existed_Raid", 00:19:20.534 "aliases": [ 00:19:20.534 "428ed0a3-9ec5-4b3c-91aa-e045a78dae79" 00:19:20.534 ], 00:19:20.534 "product_name": "Raid Volume", 00:19:20.534 "block_size": 512, 00:19:20.534 "num_blocks": 190464, 00:19:20.534 "uuid": "428ed0a3-9ec5-4b3c-91aa-e045a78dae79", 00:19:20.534 
"assigned_rate_limits": { 00:19:20.534 "rw_ios_per_sec": 0, 00:19:20.534 "rw_mbytes_per_sec": 0, 00:19:20.534 "r_mbytes_per_sec": 0, 00:19:20.534 "w_mbytes_per_sec": 0 00:19:20.534 }, 00:19:20.534 "claimed": false, 00:19:20.534 "zoned": false, 00:19:20.534 "supported_io_types": { 00:19:20.534 "read": true, 00:19:20.534 "write": true, 00:19:20.534 "unmap": true, 00:19:20.534 "flush": true, 00:19:20.534 "reset": true, 00:19:20.534 "nvme_admin": false, 00:19:20.534 "nvme_io": false, 00:19:20.534 "nvme_io_md": false, 00:19:20.534 "write_zeroes": true, 00:19:20.534 "zcopy": false, 00:19:20.534 "get_zone_info": false, 00:19:20.534 "zone_management": false, 00:19:20.534 "zone_append": false, 00:19:20.534 "compare": false, 00:19:20.534 "compare_and_write": false, 00:19:20.534 "abort": false, 00:19:20.534 "seek_hole": false, 00:19:20.534 "seek_data": false, 00:19:20.534 "copy": false, 00:19:20.534 "nvme_iov_md": false 00:19:20.534 }, 00:19:20.534 "memory_domains": [ 00:19:20.534 { 00:19:20.534 "dma_device_id": "system", 00:19:20.534 "dma_device_type": 1 00:19:20.534 }, 00:19:20.534 { 00:19:20.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.534 "dma_device_type": 2 00:19:20.534 }, 00:19:20.534 { 00:19:20.534 "dma_device_id": "system", 00:19:20.534 "dma_device_type": 1 00:19:20.534 }, 00:19:20.534 { 00:19:20.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.534 "dma_device_type": 2 00:19:20.534 }, 00:19:20.534 { 00:19:20.534 "dma_device_id": "system", 00:19:20.534 "dma_device_type": 1 00:19:20.534 }, 00:19:20.534 { 00:19:20.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.534 "dma_device_type": 2 00:19:20.534 } 00:19:20.534 ], 00:19:20.534 "driver_specific": { 00:19:20.534 "raid": { 00:19:20.534 "uuid": "428ed0a3-9ec5-4b3c-91aa-e045a78dae79", 00:19:20.534 "strip_size_kb": 64, 00:19:20.534 "state": "online", 00:19:20.534 "raid_level": "raid0", 00:19:20.534 "superblock": true, 00:19:20.534 "num_base_bdevs": 3, 00:19:20.534 "num_base_bdevs_discovered": 3, 00:19:20.534 "num_base_bdevs_operational": 3, 00:19:20.534 "base_bdevs_list": [ 00:19:20.534 { 00:19:20.534 "name": "BaseBdev1", 00:19:20.534 "uuid": "6b1cf739-469f-4777-a714-c9843f039953", 00:19:20.534 "is_configured": true, 00:19:20.534 "data_offset": 2048, 00:19:20.534 "data_size": 63488 00:19:20.534 }, 00:19:20.534 { 00:19:20.534 "name": "BaseBdev2", 00:19:20.534 "uuid": "1cf60f3a-b37c-4de1-a2de-025a0bdc8b59", 00:19:20.534 "is_configured": true, 00:19:20.534 "data_offset": 2048, 00:19:20.534 "data_size": 63488 00:19:20.534 }, 00:19:20.534 { 00:19:20.534 "name": "BaseBdev3", 00:19:20.534 "uuid": "8075c3f5-3a59-4e9d-afba-d63d1b9d0956", 00:19:20.534 "is_configured": true, 00:19:20.534 "data_offset": 2048, 00:19:20.534 "data_size": 63488 00:19:20.534 } 00:19:20.534 ] 00:19:20.534 } 00:19:20.534 } 00:19:20.534 }' 00:19:20.534 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:20.534 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:20.534 BaseBdev2 00:19:20.534 BaseBdev3' 00:19:20.534 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:20.534 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:20.534 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:19:20.534 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:20.534 "name": "BaseBdev1", 00:19:20.534 "aliases": [ 00:19:20.534 "6b1cf739-469f-4777-a714-c9843f039953" 00:19:20.534 ], 00:19:20.535 "product_name": "Malloc disk", 00:19:20.535 "block_size": 512, 00:19:20.535 "num_blocks": 65536, 00:19:20.535 "uuid": "6b1cf739-469f-4777-a714-c9843f039953", 00:19:20.535 "assigned_rate_limits": { 00:19:20.535 "rw_ios_per_sec": 0, 00:19:20.535 "rw_mbytes_per_sec": 0, 00:19:20.535 "r_mbytes_per_sec": 0, 00:19:20.535 "w_mbytes_per_sec": 0 00:19:20.535 }, 00:19:20.535 "claimed": true, 00:19:20.535 "claim_type": "exclusive_write", 00:19:20.535 "zoned": false, 00:19:20.535 "supported_io_types": { 00:19:20.535 "read": true, 00:19:20.535 "write": true, 00:19:20.535 "unmap": true, 00:19:20.535 "flush": true, 00:19:20.535 "reset": true, 00:19:20.535 "nvme_admin": false, 00:19:20.535 "nvme_io": false, 00:19:20.535 "nvme_io_md": false, 00:19:20.535 "write_zeroes": true, 00:19:20.535 "zcopy": true, 00:19:20.535 "get_zone_info": false, 00:19:20.535 "zone_management": false, 00:19:20.535 "zone_append": false, 00:19:20.535 "compare": false, 00:19:20.535 "compare_and_write": false, 00:19:20.535 "abort": true, 00:19:20.535 "seek_hole": false, 00:19:20.535 "seek_data": false, 00:19:20.535 "copy": true, 00:19:20.535 "nvme_iov_md": false 00:19:20.535 }, 00:19:20.535 "memory_domains": [ 00:19:20.535 { 00:19:20.535 "dma_device_id": "system", 00:19:20.535 "dma_device_type": 1 00:19:20.535 }, 00:19:20.535 { 00:19:20.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.535 "dma_device_type": 2 00:19:20.535 } 00:19:20.535 ], 00:19:20.535 "driver_specific": {} 00:19:20.535 }' 00:19:20.535 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:20.792 21:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:20.792 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:20.792 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:20.792 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:20.792 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:20.792 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:20.792 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:21.049 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:21.049 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:21.049 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:21.049 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:21.049 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:21.049 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:21.049 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:21.307 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:21.307 "name": "BaseBdev2", 
00:19:21.307 "aliases": [ 00:19:21.307 "1cf60f3a-b37c-4de1-a2de-025a0bdc8b59" 00:19:21.307 ], 00:19:21.307 "product_name": "Malloc disk", 00:19:21.307 "block_size": 512, 00:19:21.307 "num_blocks": 65536, 00:19:21.307 "uuid": "1cf60f3a-b37c-4de1-a2de-025a0bdc8b59", 00:19:21.307 "assigned_rate_limits": { 00:19:21.307 "rw_ios_per_sec": 0, 00:19:21.307 "rw_mbytes_per_sec": 0, 00:19:21.307 "r_mbytes_per_sec": 0, 00:19:21.307 "w_mbytes_per_sec": 0 00:19:21.307 }, 00:19:21.307 "claimed": true, 00:19:21.307 "claim_type": "exclusive_write", 00:19:21.307 "zoned": false, 00:19:21.307 "supported_io_types": { 00:19:21.307 "read": true, 00:19:21.307 "write": true, 00:19:21.307 "unmap": true, 00:19:21.307 "flush": true, 00:19:21.307 "reset": true, 00:19:21.307 "nvme_admin": false, 00:19:21.307 "nvme_io": false, 00:19:21.307 "nvme_io_md": false, 00:19:21.307 "write_zeroes": true, 00:19:21.307 "zcopy": true, 00:19:21.307 "get_zone_info": false, 00:19:21.307 "zone_management": false, 00:19:21.307 "zone_append": false, 00:19:21.307 "compare": false, 00:19:21.307 "compare_and_write": false, 00:19:21.307 "abort": true, 00:19:21.307 "seek_hole": false, 00:19:21.307 "seek_data": false, 00:19:21.307 "copy": true, 00:19:21.307 "nvme_iov_md": false 00:19:21.307 }, 00:19:21.307 "memory_domains": [ 00:19:21.307 { 00:19:21.307 "dma_device_id": "system", 00:19:21.307 "dma_device_type": 1 00:19:21.307 }, 00:19:21.307 { 00:19:21.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.307 "dma_device_type": 2 00:19:21.307 } 00:19:21.307 ], 00:19:21.307 "driver_specific": {} 00:19:21.307 }' 00:19:21.307 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:21.307 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:21.307 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:21.307 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:21.307 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:21.564 21:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:21.822 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:21.822 "name": "BaseBdev3", 00:19:21.822 "aliases": [ 00:19:21.822 "8075c3f5-3a59-4e9d-afba-d63d1b9d0956" 00:19:21.822 ], 00:19:21.822 "product_name": "Malloc disk", 00:19:21.822 
"block_size": 512, 00:19:21.822 "num_blocks": 65536, 00:19:21.822 "uuid": "8075c3f5-3a59-4e9d-afba-d63d1b9d0956", 00:19:21.822 "assigned_rate_limits": { 00:19:21.822 "rw_ios_per_sec": 0, 00:19:21.822 "rw_mbytes_per_sec": 0, 00:19:21.822 "r_mbytes_per_sec": 0, 00:19:21.822 "w_mbytes_per_sec": 0 00:19:21.822 }, 00:19:21.822 "claimed": true, 00:19:21.822 "claim_type": "exclusive_write", 00:19:21.822 "zoned": false, 00:19:21.822 "supported_io_types": { 00:19:21.822 "read": true, 00:19:21.822 "write": true, 00:19:21.822 "unmap": true, 00:19:21.822 "flush": true, 00:19:21.822 "reset": true, 00:19:21.822 "nvme_admin": false, 00:19:21.822 "nvme_io": false, 00:19:21.822 "nvme_io_md": false, 00:19:21.822 "write_zeroes": true, 00:19:21.822 "zcopy": true, 00:19:21.822 "get_zone_info": false, 00:19:21.822 "zone_management": false, 00:19:21.822 "zone_append": false, 00:19:21.822 "compare": false, 00:19:21.822 "compare_and_write": false, 00:19:21.822 "abort": true, 00:19:21.822 "seek_hole": false, 00:19:21.822 "seek_data": false, 00:19:21.822 "copy": true, 00:19:21.822 "nvme_iov_md": false 00:19:21.822 }, 00:19:21.822 "memory_domains": [ 00:19:21.822 { 00:19:21.822 "dma_device_id": "system", 00:19:21.822 "dma_device_type": 1 00:19:21.822 }, 00:19:21.822 { 00:19:21.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.822 "dma_device_type": 2 00:19:21.822 } 00:19:21.822 ], 00:19:21.822 "driver_specific": {} 00:19:21.822 }' 00:19:21.822 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:21.822 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.080 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.338 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.338 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:22.338 [2024-07-15 21:32:55.680901] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:22.338 [2024-07-15 21:32:55.680988] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.338 [2024-07-15 21:32:55.681067] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.595 "name": "Existed_Raid", 00:19:22.595 "uuid": "428ed0a3-9ec5-4b3c-91aa-e045a78dae79", 00:19:22.595 "strip_size_kb": 64, 00:19:22.595 "state": "offline", 00:19:22.595 "raid_level": "raid0", 00:19:22.595 "superblock": true, 00:19:22.595 "num_base_bdevs": 3, 00:19:22.595 "num_base_bdevs_discovered": 2, 00:19:22.595 "num_base_bdevs_operational": 2, 00:19:22.595 "base_bdevs_list": [ 00:19:22.595 { 00:19:22.595 "name": null, 00:19:22.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.595 "is_configured": false, 00:19:22.595 "data_offset": 2048, 00:19:22.595 "data_size": 63488 00:19:22.595 }, 00:19:22.595 { 00:19:22.595 "name": "BaseBdev2", 00:19:22.595 "uuid": "1cf60f3a-b37c-4de1-a2de-025a0bdc8b59", 00:19:22.595 "is_configured": true, 00:19:22.595 "data_offset": 2048, 00:19:22.595 "data_size": 63488 00:19:22.595 }, 00:19:22.595 { 00:19:22.595 "name": "BaseBdev3", 00:19:22.595 "uuid": "8075c3f5-3a59-4e9d-afba-d63d1b9d0956", 00:19:22.595 "is_configured": true, 00:19:22.595 "data_offset": 2048, 00:19:22.595 "data_size": 63488 00:19:22.595 } 00:19:22.595 ] 00:19:22.595 }' 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.595 21:32:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.526 21:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:23.526 21:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:23.526 21:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
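Every state check in this stretch of the log follows the same pattern: dump the raid bdevs over the test's RPC socket, filter out Existed_Raid with jq, and compare individual fields against the expected values. A minimal standalone sketch of that pattern, using only the rpc.py subcommands and jq filters visible in the trace above and assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock with a raid bdev named Existed_Raid (both true at this point in the test), would look like:

# Query all raid bdevs over the test's RPC socket and keep only Existed_Raid.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# Compare the fields the harness cares about; at this point in the log the
# expected values are state "offline" with 2 operational base bdevs.
state=$(jq -r '.state' <<< "$info")
operational=$(jq -r '.num_base_bdevs_operational' <<< "$info")
[[ $state == offline && $operational == 2 ]] || echo "unexpected raid state: $state/$operational"

The harness itself wraps this in verify_raid_bdev_state, as the surrounding trace shows; the sketch only restates the check outside that helper.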
00:19:23.526 21:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:23.526 21:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:23.526 21:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.527 21:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:23.784 [2024-07-15 21:32:56.915451] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:23.784 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:23.784 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:23.784 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:23.784 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.041 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:24.041 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:24.041 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:24.041 [2024-07-15 21:32:57.349451] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:24.041 [2024-07-15 21:32:57.349551] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:24.298 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:24.565 BaseBdev2 00:19:24.565 21:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:24.565 21:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:24.565 21:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:24.565 21:32:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:19:24.565 21:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:24.565 21:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:24.565 21:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:24.839 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:24.839 [ 00:19:24.839 { 00:19:24.839 "name": "BaseBdev2", 00:19:24.839 "aliases": [ 00:19:24.839 "738035ef-03b8-4fb4-bed5-51295906202a" 00:19:24.839 ], 00:19:24.839 "product_name": "Malloc disk", 00:19:24.839 "block_size": 512, 00:19:24.839 "num_blocks": 65536, 00:19:24.839 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:24.839 "assigned_rate_limits": { 00:19:24.839 "rw_ios_per_sec": 0, 00:19:24.839 "rw_mbytes_per_sec": 0, 00:19:24.839 "r_mbytes_per_sec": 0, 00:19:24.839 "w_mbytes_per_sec": 0 00:19:24.839 }, 00:19:24.839 "claimed": false, 00:19:24.839 "zoned": false, 00:19:24.839 "supported_io_types": { 00:19:24.839 "read": true, 00:19:24.839 "write": true, 00:19:24.839 "unmap": true, 00:19:24.839 "flush": true, 00:19:24.839 "reset": true, 00:19:24.839 "nvme_admin": false, 00:19:24.839 "nvme_io": false, 00:19:24.839 "nvme_io_md": false, 00:19:24.839 "write_zeroes": true, 00:19:24.839 "zcopy": true, 00:19:24.839 "get_zone_info": false, 00:19:24.839 "zone_management": false, 00:19:24.839 "zone_append": false, 00:19:24.839 "compare": false, 00:19:24.839 "compare_and_write": false, 00:19:24.839 "abort": true, 00:19:24.839 "seek_hole": false, 00:19:24.839 "seek_data": false, 00:19:24.839 "copy": true, 00:19:24.839 "nvme_iov_md": false 00:19:24.839 }, 00:19:24.839 "memory_domains": [ 00:19:24.839 { 00:19:24.839 "dma_device_id": "system", 00:19:24.839 "dma_device_type": 1 00:19:24.839 }, 00:19:24.839 { 00:19:24.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.839 "dma_device_type": 2 00:19:24.839 } 00:19:24.839 ], 00:19:24.839 "driver_specific": {} 00:19:24.839 } 00:19:24.839 ] 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:25.096 BaseBdev3 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:25.096 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:25.096 21:32:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.354 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:25.611 [ 00:19:25.611 { 00:19:25.611 "name": "BaseBdev3", 00:19:25.611 "aliases": [ 00:19:25.611 "87c6221f-7675-4fa7-b890-bef0aa07fb8b" 00:19:25.611 ], 00:19:25.611 "product_name": "Malloc disk", 00:19:25.611 "block_size": 512, 00:19:25.611 "num_blocks": 65536, 00:19:25.611 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:25.611 "assigned_rate_limits": { 00:19:25.611 "rw_ios_per_sec": 0, 00:19:25.611 "rw_mbytes_per_sec": 0, 00:19:25.611 "r_mbytes_per_sec": 0, 00:19:25.611 "w_mbytes_per_sec": 0 00:19:25.611 }, 00:19:25.611 "claimed": false, 00:19:25.611 "zoned": false, 00:19:25.611 "supported_io_types": { 00:19:25.611 "read": true, 00:19:25.611 "write": true, 00:19:25.611 "unmap": true, 00:19:25.611 "flush": true, 00:19:25.611 "reset": true, 00:19:25.611 "nvme_admin": false, 00:19:25.611 "nvme_io": false, 00:19:25.611 "nvme_io_md": false, 00:19:25.611 "write_zeroes": true, 00:19:25.611 "zcopy": true, 00:19:25.611 "get_zone_info": false, 00:19:25.611 "zone_management": false, 00:19:25.611 "zone_append": false, 00:19:25.611 "compare": false, 00:19:25.611 "compare_and_write": false, 00:19:25.611 "abort": true, 00:19:25.611 "seek_hole": false, 00:19:25.611 "seek_data": false, 00:19:25.611 "copy": true, 00:19:25.611 "nvme_iov_md": false 00:19:25.611 }, 00:19:25.611 "memory_domains": [ 00:19:25.611 { 00:19:25.611 "dma_device_id": "system", 00:19:25.611 "dma_device_type": 1 00:19:25.611 }, 00:19:25.611 { 00:19:25.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.611 "dma_device_type": 2 00:19:25.611 } 00:19:25.611 ], 00:19:25.611 "driver_specific": {} 00:19:25.611 } 00:19:25.611 ] 00:19:25.611 21:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:25.611 21:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:25.611 21:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:25.611 21:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:25.870 [2024-07-15 21:32:58.992750] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:25.870 [2024-07-15 21:32:58.993235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:25.870 [2024-07-15 21:32:58.993363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.870 [2024-07-15 21:32:58.994996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:25.870 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:25.870 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:25.870 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:25.870 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:19:25.870 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:25.870 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:25.870 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.870 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.871 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.871 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.871 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.871 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.871 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.871 "name": "Existed_Raid", 00:19:25.871 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:25.871 "strip_size_kb": 64, 00:19:25.871 "state": "configuring", 00:19:25.871 "raid_level": "raid0", 00:19:25.871 "superblock": true, 00:19:25.871 "num_base_bdevs": 3, 00:19:25.871 "num_base_bdevs_discovered": 2, 00:19:25.871 "num_base_bdevs_operational": 3, 00:19:25.871 "base_bdevs_list": [ 00:19:25.871 { 00:19:25.871 "name": "BaseBdev1", 00:19:25.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.871 "is_configured": false, 00:19:25.871 "data_offset": 0, 00:19:25.871 "data_size": 0 00:19:25.871 }, 00:19:25.871 { 00:19:25.871 "name": "BaseBdev2", 00:19:25.871 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:25.871 "is_configured": true, 00:19:25.871 "data_offset": 2048, 00:19:25.871 "data_size": 63488 00:19:25.871 }, 00:19:25.871 { 00:19:25.871 "name": "BaseBdev3", 00:19:25.871 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:25.871 "is_configured": true, 00:19:25.871 "data_offset": 2048, 00:19:25.871 "data_size": 63488 00:19:25.871 } 00:19:25.871 ] 00:19:25.871 }' 00:19:25.871 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.871 21:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.435 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:26.692 [2024-07-15 21:32:59.958986] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.692 21:32:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.692 21:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.948 21:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.948 "name": "Existed_Raid", 00:19:26.948 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:26.948 "strip_size_kb": 64, 00:19:26.948 "state": "configuring", 00:19:26.948 "raid_level": "raid0", 00:19:26.948 "superblock": true, 00:19:26.948 "num_base_bdevs": 3, 00:19:26.948 "num_base_bdevs_discovered": 1, 00:19:26.948 "num_base_bdevs_operational": 3, 00:19:26.948 "base_bdevs_list": [ 00:19:26.948 { 00:19:26.948 "name": "BaseBdev1", 00:19:26.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.948 "is_configured": false, 00:19:26.948 "data_offset": 0, 00:19:26.948 "data_size": 0 00:19:26.948 }, 00:19:26.948 { 00:19:26.948 "name": null, 00:19:26.948 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:26.948 "is_configured": false, 00:19:26.948 "data_offset": 2048, 00:19:26.948 "data_size": 63488 00:19:26.948 }, 00:19:26.948 { 00:19:26.948 "name": "BaseBdev3", 00:19:26.948 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:26.948 "is_configured": true, 00:19:26.948 "data_offset": 2048, 00:19:26.948 "data_size": 63488 00:19:26.948 } 00:19:26.948 ] 00:19:26.948 }' 00:19:26.948 21:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.948 21:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.511 21:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.511 21:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:27.767 21:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:27.767 21:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:27.767 [2024-07-15 21:33:01.135612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.767 BaseBdev1 00:19:28.023 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:28.023 21:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:28.023 21:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:28.023 21:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:28.023 21:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:28.023 21:33:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:28.023 21:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.023 21:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:28.280 [ 00:19:28.280 { 00:19:28.280 "name": "BaseBdev1", 00:19:28.280 "aliases": [ 00:19:28.280 "258219a9-91f5-4c3a-8ff5-abf6d5b40184" 00:19:28.280 ], 00:19:28.280 "product_name": "Malloc disk", 00:19:28.280 "block_size": 512, 00:19:28.280 "num_blocks": 65536, 00:19:28.280 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:28.280 "assigned_rate_limits": { 00:19:28.280 "rw_ios_per_sec": 0, 00:19:28.280 "rw_mbytes_per_sec": 0, 00:19:28.280 "r_mbytes_per_sec": 0, 00:19:28.280 "w_mbytes_per_sec": 0 00:19:28.280 }, 00:19:28.280 "claimed": true, 00:19:28.280 "claim_type": "exclusive_write", 00:19:28.280 "zoned": false, 00:19:28.280 "supported_io_types": { 00:19:28.280 "read": true, 00:19:28.280 "write": true, 00:19:28.280 "unmap": true, 00:19:28.280 "flush": true, 00:19:28.280 "reset": true, 00:19:28.280 "nvme_admin": false, 00:19:28.280 "nvme_io": false, 00:19:28.280 "nvme_io_md": false, 00:19:28.280 "write_zeroes": true, 00:19:28.280 "zcopy": true, 00:19:28.280 "get_zone_info": false, 00:19:28.280 "zone_management": false, 00:19:28.280 "zone_append": false, 00:19:28.280 "compare": false, 00:19:28.280 "compare_and_write": false, 00:19:28.280 "abort": true, 00:19:28.280 "seek_hole": false, 00:19:28.280 "seek_data": false, 00:19:28.280 "copy": true, 00:19:28.280 "nvme_iov_md": false 00:19:28.280 }, 00:19:28.280 "memory_domains": [ 00:19:28.280 { 00:19:28.280 "dma_device_id": "system", 00:19:28.280 "dma_device_type": 1 00:19:28.280 }, 00:19:28.280 { 00:19:28.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.280 "dma_device_type": 2 00:19:28.280 } 00:19:28.280 ], 00:19:28.280 "driver_specific": {} 00:19:28.280 } 00:19:28.280 ] 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.280 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.536 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.536 "name": "Existed_Raid", 00:19:28.536 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:28.536 "strip_size_kb": 64, 00:19:28.536 "state": "configuring", 00:19:28.536 "raid_level": "raid0", 00:19:28.536 "superblock": true, 00:19:28.537 "num_base_bdevs": 3, 00:19:28.537 "num_base_bdevs_discovered": 2, 00:19:28.537 "num_base_bdevs_operational": 3, 00:19:28.537 "base_bdevs_list": [ 00:19:28.537 { 00:19:28.537 "name": "BaseBdev1", 00:19:28.537 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:28.537 "is_configured": true, 00:19:28.537 "data_offset": 2048, 00:19:28.537 "data_size": 63488 00:19:28.537 }, 00:19:28.537 { 00:19:28.537 "name": null, 00:19:28.537 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:28.537 "is_configured": false, 00:19:28.537 "data_offset": 2048, 00:19:28.537 "data_size": 63488 00:19:28.537 }, 00:19:28.537 { 00:19:28.537 "name": "BaseBdev3", 00:19:28.537 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:28.537 "is_configured": true, 00:19:28.537 "data_offset": 2048, 00:19:28.537 "data_size": 63488 00:19:28.537 } 00:19:28.537 ] 00:19:28.537 }' 00:19:28.537 21:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.537 21:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.100 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.100 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:29.357 [2024-07-15 21:33:02.644969] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.357 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.615 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.615 "name": "Existed_Raid", 00:19:29.615 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:29.615 "strip_size_kb": 64, 00:19:29.615 "state": "configuring", 00:19:29.615 "raid_level": "raid0", 00:19:29.615 "superblock": true, 00:19:29.615 "num_base_bdevs": 3, 00:19:29.615 "num_base_bdevs_discovered": 1, 00:19:29.615 "num_base_bdevs_operational": 3, 00:19:29.615 "base_bdevs_list": [ 00:19:29.615 { 00:19:29.615 "name": "BaseBdev1", 00:19:29.615 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:29.615 "is_configured": true, 00:19:29.615 "data_offset": 2048, 00:19:29.615 "data_size": 63488 00:19:29.615 }, 00:19:29.615 { 00:19:29.615 "name": null, 00:19:29.615 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:29.615 "is_configured": false, 00:19:29.615 "data_offset": 2048, 00:19:29.615 "data_size": 63488 00:19:29.615 }, 00:19:29.615 { 00:19:29.615 "name": null, 00:19:29.615 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:29.615 "is_configured": false, 00:19:29.615 "data_offset": 2048, 00:19:29.615 "data_size": 63488 00:19:29.615 } 00:19:29.615 ] 00:19:29.615 }' 00:19:29.615 21:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.615 21:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.180 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:30.180 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.437 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:30.437 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:30.718 [2024-07-15 21:33:03.822889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.718 21:33:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.718 21:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.718 21:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.718 "name": "Existed_Raid", 00:19:30.718 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:30.718 "strip_size_kb": 64, 00:19:30.718 "state": "configuring", 00:19:30.718 "raid_level": "raid0", 00:19:30.718 "superblock": true, 00:19:30.718 "num_base_bdevs": 3, 00:19:30.718 "num_base_bdevs_discovered": 2, 00:19:30.718 "num_base_bdevs_operational": 3, 00:19:30.718 "base_bdevs_list": [ 00:19:30.718 { 00:19:30.718 "name": "BaseBdev1", 00:19:30.718 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:30.718 "is_configured": true, 00:19:30.718 "data_offset": 2048, 00:19:30.718 "data_size": 63488 00:19:30.718 }, 00:19:30.718 { 00:19:30.718 "name": null, 00:19:30.718 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:30.718 "is_configured": false, 00:19:30.718 "data_offset": 2048, 00:19:30.718 "data_size": 63488 00:19:30.718 }, 00:19:30.718 { 00:19:30.718 "name": "BaseBdev3", 00:19:30.718 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:30.718 "is_configured": true, 00:19:30.718 "data_offset": 2048, 00:19:30.718 "data_size": 63488 00:19:30.718 } 00:19:30.718 ] 00:19:30.718 }' 00:19:30.718 21:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.718 21:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.289 21:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.289 21:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:31.547 21:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:31.547 21:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:31.806 [2024-07-15 21:33:05.004888] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.806 21:33:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.806 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.064 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.064 "name": "Existed_Raid", 00:19:32.064 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:32.064 "strip_size_kb": 64, 00:19:32.064 "state": "configuring", 00:19:32.064 "raid_level": "raid0", 00:19:32.064 "superblock": true, 00:19:32.064 "num_base_bdevs": 3, 00:19:32.064 "num_base_bdevs_discovered": 1, 00:19:32.064 "num_base_bdevs_operational": 3, 00:19:32.064 "base_bdevs_list": [ 00:19:32.064 { 00:19:32.064 "name": null, 00:19:32.064 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:32.064 "is_configured": false, 00:19:32.064 "data_offset": 2048, 00:19:32.064 "data_size": 63488 00:19:32.064 }, 00:19:32.064 { 00:19:32.064 "name": null, 00:19:32.064 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:32.064 "is_configured": false, 00:19:32.064 "data_offset": 2048, 00:19:32.064 "data_size": 63488 00:19:32.064 }, 00:19:32.064 { 00:19:32.064 "name": "BaseBdev3", 00:19:32.064 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:32.064 "is_configured": true, 00:19:32.064 "data_offset": 2048, 00:19:32.064 "data_size": 63488 00:19:32.064 } 00:19:32.064 ] 00:19:32.064 }' 00:19:32.064 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.064 21:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.632 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.632 21:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:32.890 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:32.890 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:33.150 [2024-07-15 21:33:06.299680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.150 "name": "Existed_Raid", 00:19:33.150 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:33.150 "strip_size_kb": 64, 00:19:33.150 "state": "configuring", 00:19:33.150 "raid_level": "raid0", 00:19:33.150 "superblock": true, 00:19:33.150 "num_base_bdevs": 3, 00:19:33.150 "num_base_bdevs_discovered": 2, 00:19:33.150 "num_base_bdevs_operational": 3, 00:19:33.150 "base_bdevs_list": [ 00:19:33.150 { 00:19:33.150 "name": null, 00:19:33.150 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:33.150 "is_configured": false, 00:19:33.150 "data_offset": 2048, 00:19:33.150 "data_size": 63488 00:19:33.150 }, 00:19:33.150 { 00:19:33.150 "name": "BaseBdev2", 00:19:33.150 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:33.150 "is_configured": true, 00:19:33.150 "data_offset": 2048, 00:19:33.150 "data_size": 63488 00:19:33.150 }, 00:19:33.150 { 00:19:33.150 "name": "BaseBdev3", 00:19:33.150 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:33.150 "is_configured": true, 00:19:33.150 "data_offset": 2048, 00:19:33.150 "data_size": 63488 00:19:33.150 } 00:19:33.150 ] 00:19:33.150 }' 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.150 21:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.086 21:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.086 21:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:34.086 21:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:34.086 21:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:34.086 21:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.345 21:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 258219a9-91f5-4c3a-8ff5-abf6d5b40184 00:19:34.345 [2024-07-15 21:33:07.684769] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:34.345 [2024-07-15 21:33:07.685041] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:34.345 [2024-07-15 21:33:07.685071] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:19:34.345 [2024-07-15 21:33:07.685204] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:34.345 NewBaseBdev 00:19:34.345 [2024-07-15 21:33:07.685547] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:34.345 [2024-07-15 21:33:07.685592] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:19:34.345 [2024-07-15 21:33:07.685743] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.345 21:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:34.345 21:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:34.345 21:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:34.345 21:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:34.345 21:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:34.345 21:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:34.345 21:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:34.603 21:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:34.861 [ 00:19:34.861 { 00:19:34.861 "name": "NewBaseBdev", 00:19:34.861 "aliases": [ 00:19:34.861 "258219a9-91f5-4c3a-8ff5-abf6d5b40184" 00:19:34.861 ], 00:19:34.861 "product_name": "Malloc disk", 00:19:34.861 "block_size": 512, 00:19:34.861 "num_blocks": 65536, 00:19:34.861 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:34.861 "assigned_rate_limits": { 00:19:34.861 "rw_ios_per_sec": 0, 00:19:34.861 "rw_mbytes_per_sec": 0, 00:19:34.861 "r_mbytes_per_sec": 0, 00:19:34.861 "w_mbytes_per_sec": 0 00:19:34.861 }, 00:19:34.861 "claimed": true, 00:19:34.861 "claim_type": "exclusive_write", 00:19:34.861 "zoned": false, 00:19:34.861 "supported_io_types": { 00:19:34.861 "read": true, 00:19:34.861 "write": true, 00:19:34.861 "unmap": true, 00:19:34.861 "flush": true, 00:19:34.861 "reset": true, 00:19:34.861 "nvme_admin": false, 00:19:34.861 "nvme_io": false, 00:19:34.861 "nvme_io_md": false, 00:19:34.861 "write_zeroes": true, 00:19:34.861 "zcopy": true, 00:19:34.861 "get_zone_info": false, 00:19:34.861 "zone_management": false, 00:19:34.861 "zone_append": false, 00:19:34.861 "compare": false, 00:19:34.861 "compare_and_write": false, 00:19:34.861 "abort": true, 00:19:34.861 "seek_hole": false, 00:19:34.861 "seek_data": false, 00:19:34.861 "copy": true, 00:19:34.861 "nvme_iov_md": false 00:19:34.861 }, 00:19:34.861 "memory_domains": [ 00:19:34.861 { 00:19:34.861 "dma_device_id": "system", 00:19:34.861 "dma_device_type": 1 00:19:34.861 }, 00:19:34.861 { 00:19:34.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.861 "dma_device_type": 2 00:19:34.861 } 00:19:34.861 ], 00:19:34.861 "driver_specific": {} 00:19:34.861 } 00:19:34.861 ] 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:34.861 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.862 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.862 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.120 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.120 "name": "Existed_Raid", 00:19:35.120 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:35.120 "strip_size_kb": 64, 00:19:35.120 "state": "online", 00:19:35.120 "raid_level": "raid0", 00:19:35.120 "superblock": true, 00:19:35.120 "num_base_bdevs": 3, 00:19:35.120 "num_base_bdevs_discovered": 3, 00:19:35.120 "num_base_bdevs_operational": 3, 00:19:35.120 "base_bdevs_list": [ 00:19:35.120 { 00:19:35.120 "name": "NewBaseBdev", 00:19:35.120 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:35.120 "is_configured": true, 00:19:35.120 "data_offset": 2048, 00:19:35.120 "data_size": 63488 00:19:35.120 }, 00:19:35.120 { 00:19:35.120 "name": "BaseBdev2", 00:19:35.120 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:35.120 "is_configured": true, 00:19:35.120 "data_offset": 2048, 00:19:35.120 "data_size": 63488 00:19:35.120 }, 00:19:35.120 { 00:19:35.120 "name": "BaseBdev3", 00:19:35.120 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:35.120 "is_configured": true, 00:19:35.120 "data_offset": 2048, 00:19:35.120 "data_size": 63488 00:19:35.120 } 00:19:35.120 ] 00:19:35.120 }' 00:19:35.120 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.120 21:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.687 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:35.687 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:35.687 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:35.687 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:35.687 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:35.687 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:35.687 21:33:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:35.687 21:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:35.945 [2024-07-15 21:33:09.062654] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.945 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:35.945 "name": "Existed_Raid", 00:19:35.945 "aliases": [ 00:19:35.945 "a16acbf1-fdb3-4336-9a60-2054379dd5da" 00:19:35.945 ], 00:19:35.945 "product_name": "Raid Volume", 00:19:35.945 "block_size": 512, 00:19:35.945 "num_blocks": 190464, 00:19:35.945 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:35.945 "assigned_rate_limits": { 00:19:35.945 "rw_ios_per_sec": 0, 00:19:35.945 "rw_mbytes_per_sec": 0, 00:19:35.945 "r_mbytes_per_sec": 0, 00:19:35.945 "w_mbytes_per_sec": 0 00:19:35.945 }, 00:19:35.945 "claimed": false, 00:19:35.945 "zoned": false, 00:19:35.945 "supported_io_types": { 00:19:35.945 "read": true, 00:19:35.945 "write": true, 00:19:35.945 "unmap": true, 00:19:35.945 "flush": true, 00:19:35.945 "reset": true, 00:19:35.945 "nvme_admin": false, 00:19:35.945 "nvme_io": false, 00:19:35.945 "nvme_io_md": false, 00:19:35.945 "write_zeroes": true, 00:19:35.945 "zcopy": false, 00:19:35.945 "get_zone_info": false, 00:19:35.945 "zone_management": false, 00:19:35.945 "zone_append": false, 00:19:35.945 "compare": false, 00:19:35.945 "compare_and_write": false, 00:19:35.945 "abort": false, 00:19:35.945 "seek_hole": false, 00:19:35.945 "seek_data": false, 00:19:35.945 "copy": false, 00:19:35.945 "nvme_iov_md": false 00:19:35.945 }, 00:19:35.945 "memory_domains": [ 00:19:35.945 { 00:19:35.945 "dma_device_id": "system", 00:19:35.945 "dma_device_type": 1 00:19:35.945 }, 00:19:35.945 { 00:19:35.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.945 "dma_device_type": 2 00:19:35.945 }, 00:19:35.945 { 00:19:35.945 "dma_device_id": "system", 00:19:35.945 "dma_device_type": 1 00:19:35.945 }, 00:19:35.945 { 00:19:35.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.945 "dma_device_type": 2 00:19:35.945 }, 00:19:35.945 { 00:19:35.945 "dma_device_id": "system", 00:19:35.945 "dma_device_type": 1 00:19:35.945 }, 00:19:35.945 { 00:19:35.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.945 "dma_device_type": 2 00:19:35.945 } 00:19:35.945 ], 00:19:35.945 "driver_specific": { 00:19:35.945 "raid": { 00:19:35.945 "uuid": "a16acbf1-fdb3-4336-9a60-2054379dd5da", 00:19:35.945 "strip_size_kb": 64, 00:19:35.945 "state": "online", 00:19:35.945 "raid_level": "raid0", 00:19:35.945 "superblock": true, 00:19:35.945 "num_base_bdevs": 3, 00:19:35.945 "num_base_bdevs_discovered": 3, 00:19:35.945 "num_base_bdevs_operational": 3, 00:19:35.945 "base_bdevs_list": [ 00:19:35.946 { 00:19:35.946 "name": "NewBaseBdev", 00:19:35.946 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:35.946 "is_configured": true, 00:19:35.946 "data_offset": 2048, 00:19:35.946 "data_size": 63488 00:19:35.946 }, 00:19:35.946 { 00:19:35.946 "name": "BaseBdev2", 00:19:35.946 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:35.946 "is_configured": true, 00:19:35.946 "data_offset": 2048, 00:19:35.946 "data_size": 63488 00:19:35.946 }, 00:19:35.946 { 00:19:35.946 "name": "BaseBdev3", 00:19:35.946 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:35.946 "is_configured": true, 00:19:35.946 "data_offset": 2048, 00:19:35.946 "data_size": 
63488 00:19:35.946 } 00:19:35.946 ] 00:19:35.946 } 00:19:35.946 } 00:19:35.946 }' 00:19:35.946 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.946 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:35.946 BaseBdev2 00:19:35.946 BaseBdev3' 00:19:35.946 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:35.946 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:35.946 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:35.946 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:35.946 "name": "NewBaseBdev", 00:19:35.946 "aliases": [ 00:19:35.946 "258219a9-91f5-4c3a-8ff5-abf6d5b40184" 00:19:35.946 ], 00:19:35.946 "product_name": "Malloc disk", 00:19:35.946 "block_size": 512, 00:19:35.946 "num_blocks": 65536, 00:19:35.946 "uuid": "258219a9-91f5-4c3a-8ff5-abf6d5b40184", 00:19:35.946 "assigned_rate_limits": { 00:19:35.946 "rw_ios_per_sec": 0, 00:19:35.946 "rw_mbytes_per_sec": 0, 00:19:35.946 "r_mbytes_per_sec": 0, 00:19:35.946 "w_mbytes_per_sec": 0 00:19:35.946 }, 00:19:35.946 "claimed": true, 00:19:35.946 "claim_type": "exclusive_write", 00:19:35.946 "zoned": false, 00:19:35.946 "supported_io_types": { 00:19:35.946 "read": true, 00:19:35.946 "write": true, 00:19:35.946 "unmap": true, 00:19:35.946 "flush": true, 00:19:35.946 "reset": true, 00:19:35.946 "nvme_admin": false, 00:19:35.946 "nvme_io": false, 00:19:35.946 "nvme_io_md": false, 00:19:35.946 "write_zeroes": true, 00:19:35.946 "zcopy": true, 00:19:35.946 "get_zone_info": false, 00:19:35.946 "zone_management": false, 00:19:35.946 "zone_append": false, 00:19:35.946 "compare": false, 00:19:35.946 "compare_and_write": false, 00:19:35.946 "abort": true, 00:19:35.946 "seek_hole": false, 00:19:35.946 "seek_data": false, 00:19:35.946 "copy": true, 00:19:35.946 "nvme_iov_md": false 00:19:35.946 }, 00:19:35.946 "memory_domains": [ 00:19:35.946 { 00:19:35.946 "dma_device_id": "system", 00:19:35.946 "dma_device_type": 1 00:19:35.946 }, 00:19:35.946 { 00:19:35.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.946 "dma_device_type": 2 00:19:35.946 } 00:19:35.946 ], 00:19:35.946 "driver_specific": {} 00:19:35.946 }' 00:19:35.946 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.204 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.204 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:36.204 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.204 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.204 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.204 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.204 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.463 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.463 21:33:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.463 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.463 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.463 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.463 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.463 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:36.721 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.721 "name": "BaseBdev2", 00:19:36.721 "aliases": [ 00:19:36.721 "738035ef-03b8-4fb4-bed5-51295906202a" 00:19:36.721 ], 00:19:36.721 "product_name": "Malloc disk", 00:19:36.721 "block_size": 512, 00:19:36.721 "num_blocks": 65536, 00:19:36.721 "uuid": "738035ef-03b8-4fb4-bed5-51295906202a", 00:19:36.721 "assigned_rate_limits": { 00:19:36.721 "rw_ios_per_sec": 0, 00:19:36.721 "rw_mbytes_per_sec": 0, 00:19:36.721 "r_mbytes_per_sec": 0, 00:19:36.721 "w_mbytes_per_sec": 0 00:19:36.721 }, 00:19:36.721 "claimed": true, 00:19:36.721 "claim_type": "exclusive_write", 00:19:36.721 "zoned": false, 00:19:36.721 "supported_io_types": { 00:19:36.721 "read": true, 00:19:36.721 "write": true, 00:19:36.721 "unmap": true, 00:19:36.721 "flush": true, 00:19:36.721 "reset": true, 00:19:36.721 "nvme_admin": false, 00:19:36.721 "nvme_io": false, 00:19:36.721 "nvme_io_md": false, 00:19:36.721 "write_zeroes": true, 00:19:36.721 "zcopy": true, 00:19:36.721 "get_zone_info": false, 00:19:36.721 "zone_management": false, 00:19:36.721 "zone_append": false, 00:19:36.721 "compare": false, 00:19:36.721 "compare_and_write": false, 00:19:36.721 "abort": true, 00:19:36.721 "seek_hole": false, 00:19:36.721 "seek_data": false, 00:19:36.721 "copy": true, 00:19:36.721 "nvme_iov_md": false 00:19:36.721 }, 00:19:36.721 "memory_domains": [ 00:19:36.721 { 00:19:36.721 "dma_device_id": "system", 00:19:36.721 "dma_device_type": 1 00:19:36.721 }, 00:19:36.721 { 00:19:36.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.721 "dma_device_type": 2 00:19:36.721 } 00:19:36.721 ], 00:19:36.721 "driver_specific": {} 00:19:36.721 }' 00:19:36.721 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.721 21:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.721 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:36.721 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.980 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.980 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.980 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.980 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.980 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.980 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.980 21:33:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.278 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.278 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.278 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:37.278 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.278 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.278 "name": "BaseBdev3", 00:19:37.278 "aliases": [ 00:19:37.278 "87c6221f-7675-4fa7-b890-bef0aa07fb8b" 00:19:37.278 ], 00:19:37.278 "product_name": "Malloc disk", 00:19:37.278 "block_size": 512, 00:19:37.278 "num_blocks": 65536, 00:19:37.278 "uuid": "87c6221f-7675-4fa7-b890-bef0aa07fb8b", 00:19:37.278 "assigned_rate_limits": { 00:19:37.278 "rw_ios_per_sec": 0, 00:19:37.278 "rw_mbytes_per_sec": 0, 00:19:37.278 "r_mbytes_per_sec": 0, 00:19:37.278 "w_mbytes_per_sec": 0 00:19:37.278 }, 00:19:37.278 "claimed": true, 00:19:37.278 "claim_type": "exclusive_write", 00:19:37.278 "zoned": false, 00:19:37.278 "supported_io_types": { 00:19:37.278 "read": true, 00:19:37.278 "write": true, 00:19:37.278 "unmap": true, 00:19:37.278 "flush": true, 00:19:37.278 "reset": true, 00:19:37.278 "nvme_admin": false, 00:19:37.278 "nvme_io": false, 00:19:37.278 "nvme_io_md": false, 00:19:37.278 "write_zeroes": true, 00:19:37.278 "zcopy": true, 00:19:37.278 "get_zone_info": false, 00:19:37.278 "zone_management": false, 00:19:37.278 "zone_append": false, 00:19:37.278 "compare": false, 00:19:37.278 "compare_and_write": false, 00:19:37.278 "abort": true, 00:19:37.278 "seek_hole": false, 00:19:37.278 "seek_data": false, 00:19:37.278 "copy": true, 00:19:37.278 "nvme_iov_md": false 00:19:37.278 }, 00:19:37.278 "memory_domains": [ 00:19:37.278 { 00:19:37.278 "dma_device_id": "system", 00:19:37.278 "dma_device_type": 1 00:19:37.278 }, 00:19:37.278 { 00:19:37.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.278 "dma_device_type": 2 00:19:37.278 } 00:19:37.278 ], 00:19:37.278 "driver_specific": {} 00:19:37.278 }' 00:19:37.278 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.278 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.537 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.537 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.537 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.537 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.537 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.537 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.537 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.537 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.795 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.795 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
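(Editorial aside, not part of the captured trace: the entries above show bdev_raid.sh validating the Existed_Raid volume over the RPC socket — state/level/strip size via bdev_raid_get_bdevs, then block_size, md_size, md_interleave and dif_type for each configured base bdev via bdev_get_bdevs. A minimal standalone sketch of the same checks is given below; the rpc.py path, socket path and bdev name Existed_Raid are taken from this log, while the shell variables and the exact jq filters are illustrative assumptions, not the test script itself.)

# Sketch only: reproduce the property checks traced above against a running SPDK target.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this log
sock=/var/tmp/spdk-raid.sock                      # RPC socket as used in this log

# State check: expect state "online", level raid0, 64 KiB strip size.
$rpc -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.raid_level) \(.strip_size_kb)"'

# Property check: every configured base bdev should use 512-byte blocks and carry no metadata/DIF.
for name in $($rpc -s "$sock" bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[0].driver_specific.raid.base_bdevs_list[] | select(.is_configured).name'); do
    info=$($rpc -s "$sock" bdev_get_bdevs -b "$name" | jq '.[0]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]
    [[ $(jq .md_size       <<< "$info") == null ]]
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]
done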
00:19:37.795 21:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:38.052 [2024-07-15 21:33:11.170599] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:38.052 [2024-07-15 21:33:11.170696] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.052 [2024-07-15 21:33:11.170779] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.052 [2024-07-15 21:33:11.170843] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.052 [2024-07-15 21:33:11.170863] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 127206 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 127206 ']' 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 127206 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127206 00:19:38.052 killing process with pid 127206 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127206' 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 127206 00:19:38.052 21:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 127206 00:19:38.052 [2024-07-15 21:33:11.211104] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.309 [2024-07-15 21:33:11.481887] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:39.685 ************************************ 00:19:39.685 END TEST raid_state_function_test_sb 00:19:39.685 ************************************ 00:19:39.685 21:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:39.685 00:19:39.685 real 0m26.483s 00:19:39.685 user 0m48.764s 00:19:39.685 sys 0m3.462s 00:19:39.685 21:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.685 21:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.685 21:33:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:39.685 21:33:12 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:19:39.685 21:33:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:39.685 21:33:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.685 21:33:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.685 ************************************ 00:19:39.685 START TEST raid_superblock_test 00:19:39.685 
************************************ 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=128199 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 128199 /var/tmp/spdk-raid.sock 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 128199 ']' 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:39.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.685 21:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.685 [2024-07-15 21:33:12.776664] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
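(Editorial aside, not part of the captured trace: the raid_superblock_test entries that follow drive a bare bdev_svc app over /var/tmp/spdk-raid.sock, create three malloc bdevs, wrap each in a passthru bdev, and assemble them into a raid0 volume with an on-disk superblock. The sketch below condenses that setup; the app path, rpc.py path and RPC commands mirror the trace below, while the polling loop and shell variables are simplifying assumptions — the real script uses waitforlisten from autotest_common.sh.)

# Sketch only: the setup flow that the following trace performs.
app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

$app -r "$sock" -L bdev_raid &          # start the target with raid debug logging
svc_pid=$!
until [ -S "$sock" ]; do sleep 0.1; done   # crude stand-in for waitforlisten

base_bdevs=""
for i in 1 2 3; do
    $rpc -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
    $rpc -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
    base_bdevs="$base_bdevs pt$i"
done

# -z 64: 64 KiB strip size; -s: write a raid superblock onto each base bdev.
$rpc -s "$sock" bdev_raid_create -z 64 -r raid0 -b "${base_bdevs# }" -n raid_bdev1 -s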
00:19:39.685 [2024-07-15 21:33:12.776880] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128199 ] 00:19:39.685 [2024-07-15 21:33:12.936148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.944 [2024-07-15 21:33:13.118425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.202 [2024-07-15 21:33:13.320340] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:40.460 malloc1 00:19:40.460 21:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:40.718 [2024-07-15 21:33:14.004495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:40.718 [2024-07-15 21:33:14.004695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.718 [2024-07-15 21:33:14.004757] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:40.718 [2024-07-15 21:33:14.004793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.718 [2024-07-15 21:33:14.006852] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.718 [2024-07-15 21:33:14.006929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:40.718 pt1 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:40.718 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:40.976 malloc2 00:19:40.976 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:41.235 [2024-07-15 21:33:14.429098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:41.235 [2024-07-15 21:33:14.429332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.235 [2024-07-15 21:33:14.429386] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:41.235 [2024-07-15 21:33:14.429428] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.235 [2024-07-15 21:33:14.431668] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.235 [2024-07-15 21:33:14.431755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:41.235 pt2 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:41.235 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:41.494 malloc3 00:19:41.494 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:41.494 [2024-07-15 21:33:14.785493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:41.494 [2024-07-15 21:33:14.785682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.494 [2024-07-15 21:33:14.785731] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:19:41.494 [2024-07-15 21:33:14.785774] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.494 [2024-07-15 21:33:14.787993] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.494 [2024-07-15 21:33:14.788081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:41.494 pt3 00:19:41.494 
21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:41.494 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:41.494 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:41.846 [2024-07-15 21:33:14.961277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:41.846 [2024-07-15 21:33:14.963420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.846 [2024-07-15 21:33:14.963539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:41.846 [2024-07-15 21:33:14.963733] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:41.846 [2024-07-15 21:33:14.963768] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:41.846 [2024-07-15 21:33:14.963936] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:41.846 [2024-07-15 21:33:14.964327] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:41.846 [2024-07-15 21:33:14.964367] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:19:41.846 [2024-07-15 21:33:14.964540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.846 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.847 21:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.847 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.847 "name": "raid_bdev1", 00:19:41.847 "uuid": "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b", 00:19:41.847 "strip_size_kb": 64, 00:19:41.847 "state": "online", 00:19:41.847 "raid_level": "raid0", 00:19:41.847 "superblock": true, 00:19:41.847 "num_base_bdevs": 3, 00:19:41.847 "num_base_bdevs_discovered": 3, 00:19:41.847 "num_base_bdevs_operational": 3, 00:19:41.847 "base_bdevs_list": [ 00:19:41.847 { 00:19:41.847 "name": "pt1", 00:19:41.847 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:41.847 "is_configured": true, 00:19:41.847 "data_offset": 2048, 00:19:41.847 "data_size": 63488 00:19:41.847 }, 00:19:41.847 { 00:19:41.847 "name": "pt2", 00:19:41.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.847 "is_configured": true, 00:19:41.847 "data_offset": 2048, 00:19:41.847 "data_size": 63488 00:19:41.847 }, 00:19:41.847 { 00:19:41.847 "name": "pt3", 00:19:41.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.847 "is_configured": true, 00:19:41.847 "data_offset": 2048, 00:19:41.847 "data_size": 63488 00:19:41.847 } 00:19:41.847 ] 00:19:41.847 }' 00:19:41.847 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.847 21:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.428 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:42.428 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:42.428 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:42.428 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:42.429 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:42.429 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:42.429 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:42.429 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:42.686 [2024-07-15 21:33:15.907823] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.686 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:42.686 "name": "raid_bdev1", 00:19:42.686 "aliases": [ 00:19:42.686 "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b" 00:19:42.686 ], 00:19:42.686 "product_name": "Raid Volume", 00:19:42.686 "block_size": 512, 00:19:42.686 "num_blocks": 190464, 00:19:42.686 "uuid": "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b", 00:19:42.686 "assigned_rate_limits": { 00:19:42.686 "rw_ios_per_sec": 0, 00:19:42.686 "rw_mbytes_per_sec": 0, 00:19:42.686 "r_mbytes_per_sec": 0, 00:19:42.686 "w_mbytes_per_sec": 0 00:19:42.686 }, 00:19:42.686 "claimed": false, 00:19:42.686 "zoned": false, 00:19:42.686 "supported_io_types": { 00:19:42.686 "read": true, 00:19:42.686 "write": true, 00:19:42.686 "unmap": true, 00:19:42.686 "flush": true, 00:19:42.686 "reset": true, 00:19:42.686 "nvme_admin": false, 00:19:42.686 "nvme_io": false, 00:19:42.686 "nvme_io_md": false, 00:19:42.686 "write_zeroes": true, 00:19:42.686 "zcopy": false, 00:19:42.686 "get_zone_info": false, 00:19:42.686 "zone_management": false, 00:19:42.686 "zone_append": false, 00:19:42.686 "compare": false, 00:19:42.686 "compare_and_write": false, 00:19:42.686 "abort": false, 00:19:42.686 "seek_hole": false, 00:19:42.686 "seek_data": false, 00:19:42.686 "copy": false, 00:19:42.686 "nvme_iov_md": false 00:19:42.686 }, 00:19:42.686 "memory_domains": [ 00:19:42.686 { 00:19:42.686 "dma_device_id": "system", 00:19:42.686 "dma_device_type": 1 00:19:42.686 }, 00:19:42.686 { 00:19:42.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.686 "dma_device_type": 2 00:19:42.686 }, 00:19:42.686 { 00:19:42.686 "dma_device_id": "system", 00:19:42.686 "dma_device_type": 1 00:19:42.686 }, 
00:19:42.686 { 00:19:42.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.686 "dma_device_type": 2 00:19:42.686 }, 00:19:42.686 { 00:19:42.686 "dma_device_id": "system", 00:19:42.686 "dma_device_type": 1 00:19:42.686 }, 00:19:42.686 { 00:19:42.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.686 "dma_device_type": 2 00:19:42.686 } 00:19:42.686 ], 00:19:42.686 "driver_specific": { 00:19:42.686 "raid": { 00:19:42.686 "uuid": "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b", 00:19:42.686 "strip_size_kb": 64, 00:19:42.687 "state": "online", 00:19:42.687 "raid_level": "raid0", 00:19:42.687 "superblock": true, 00:19:42.687 "num_base_bdevs": 3, 00:19:42.687 "num_base_bdevs_discovered": 3, 00:19:42.687 "num_base_bdevs_operational": 3, 00:19:42.687 "base_bdevs_list": [ 00:19:42.687 { 00:19:42.687 "name": "pt1", 00:19:42.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.687 "is_configured": true, 00:19:42.687 "data_offset": 2048, 00:19:42.687 "data_size": 63488 00:19:42.687 }, 00:19:42.687 { 00:19:42.687 "name": "pt2", 00:19:42.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.687 "is_configured": true, 00:19:42.687 "data_offset": 2048, 00:19:42.687 "data_size": 63488 00:19:42.687 }, 00:19:42.687 { 00:19:42.687 "name": "pt3", 00:19:42.687 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:42.687 "is_configured": true, 00:19:42.687 "data_offset": 2048, 00:19:42.687 "data_size": 63488 00:19:42.687 } 00:19:42.687 ] 00:19:42.687 } 00:19:42.687 } 00:19:42.687 }' 00:19:42.687 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:42.687 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:42.687 pt2 00:19:42.687 pt3' 00:19:42.687 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:42.687 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:42.687 21:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:42.944 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:42.944 "name": "pt1", 00:19:42.944 "aliases": [ 00:19:42.944 "00000000-0000-0000-0000-000000000001" 00:19:42.944 ], 00:19:42.944 "product_name": "passthru", 00:19:42.944 "block_size": 512, 00:19:42.944 "num_blocks": 65536, 00:19:42.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.944 "assigned_rate_limits": { 00:19:42.944 "rw_ios_per_sec": 0, 00:19:42.944 "rw_mbytes_per_sec": 0, 00:19:42.944 "r_mbytes_per_sec": 0, 00:19:42.944 "w_mbytes_per_sec": 0 00:19:42.944 }, 00:19:42.944 "claimed": true, 00:19:42.944 "claim_type": "exclusive_write", 00:19:42.944 "zoned": false, 00:19:42.944 "supported_io_types": { 00:19:42.944 "read": true, 00:19:42.944 "write": true, 00:19:42.944 "unmap": true, 00:19:42.944 "flush": true, 00:19:42.944 "reset": true, 00:19:42.944 "nvme_admin": false, 00:19:42.944 "nvme_io": false, 00:19:42.944 "nvme_io_md": false, 00:19:42.944 "write_zeroes": true, 00:19:42.944 "zcopy": true, 00:19:42.945 "get_zone_info": false, 00:19:42.945 "zone_management": false, 00:19:42.945 "zone_append": false, 00:19:42.945 "compare": false, 00:19:42.945 "compare_and_write": false, 00:19:42.945 "abort": true, 00:19:42.945 "seek_hole": false, 00:19:42.945 "seek_data": false, 00:19:42.945 "copy": true, 00:19:42.945 "nvme_iov_md": false 
00:19:42.945 }, 00:19:42.945 "memory_domains": [ 00:19:42.945 { 00:19:42.945 "dma_device_id": "system", 00:19:42.945 "dma_device_type": 1 00:19:42.945 }, 00:19:42.945 { 00:19:42.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.945 "dma_device_type": 2 00:19:42.945 } 00:19:42.945 ], 00:19:42.945 "driver_specific": { 00:19:42.945 "passthru": { 00:19:42.945 "name": "pt1", 00:19:42.945 "base_bdev_name": "malloc1" 00:19:42.945 } 00:19:42.945 } 00:19:42.945 }' 00:19:42.945 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:42.945 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:42.945 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:42.945 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:43.202 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:43.458 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:43.458 "name": "pt2", 00:19:43.458 "aliases": [ 00:19:43.458 "00000000-0000-0000-0000-000000000002" 00:19:43.458 ], 00:19:43.458 "product_name": "passthru", 00:19:43.458 "block_size": 512, 00:19:43.458 "num_blocks": 65536, 00:19:43.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.458 "assigned_rate_limits": { 00:19:43.458 "rw_ios_per_sec": 0, 00:19:43.458 "rw_mbytes_per_sec": 0, 00:19:43.458 "r_mbytes_per_sec": 0, 00:19:43.458 "w_mbytes_per_sec": 0 00:19:43.458 }, 00:19:43.459 "claimed": true, 00:19:43.459 "claim_type": "exclusive_write", 00:19:43.459 "zoned": false, 00:19:43.459 "supported_io_types": { 00:19:43.459 "read": true, 00:19:43.459 "write": true, 00:19:43.459 "unmap": true, 00:19:43.459 "flush": true, 00:19:43.459 "reset": true, 00:19:43.459 "nvme_admin": false, 00:19:43.459 "nvme_io": false, 00:19:43.459 "nvme_io_md": false, 00:19:43.459 "write_zeroes": true, 00:19:43.459 "zcopy": true, 00:19:43.459 "get_zone_info": false, 00:19:43.459 "zone_management": false, 00:19:43.459 "zone_append": false, 00:19:43.459 "compare": false, 00:19:43.459 "compare_and_write": false, 00:19:43.459 "abort": true, 00:19:43.459 "seek_hole": false, 00:19:43.459 "seek_data": false, 00:19:43.459 "copy": true, 00:19:43.459 "nvme_iov_md": false 00:19:43.459 }, 00:19:43.459 "memory_domains": [ 00:19:43.459 { 00:19:43.459 "dma_device_id": "system", 00:19:43.459 "dma_device_type": 1 00:19:43.459 }, 
00:19:43.459 { 00:19:43.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.459 "dma_device_type": 2 00:19:43.459 } 00:19:43.459 ], 00:19:43.459 "driver_specific": { 00:19:43.459 "passthru": { 00:19:43.459 "name": "pt2", 00:19:43.459 "base_bdev_name": "malloc2" 00:19:43.459 } 00:19:43.459 } 00:19:43.459 }' 00:19:43.459 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:43.459 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:43.715 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:43.715 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:43.715 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:43.715 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:43.715 21:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.715 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.715 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:43.715 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.973 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.973 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:43.973 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:43.973 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:43.973 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:43.973 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:43.973 "name": "pt3", 00:19:43.973 "aliases": [ 00:19:43.973 "00000000-0000-0000-0000-000000000003" 00:19:43.973 ], 00:19:43.973 "product_name": "passthru", 00:19:43.973 "block_size": 512, 00:19:43.973 "num_blocks": 65536, 00:19:43.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:43.974 "assigned_rate_limits": { 00:19:43.974 "rw_ios_per_sec": 0, 00:19:43.974 "rw_mbytes_per_sec": 0, 00:19:43.974 "r_mbytes_per_sec": 0, 00:19:43.974 "w_mbytes_per_sec": 0 00:19:43.974 }, 00:19:43.974 "claimed": true, 00:19:43.974 "claim_type": "exclusive_write", 00:19:43.974 "zoned": false, 00:19:43.974 "supported_io_types": { 00:19:43.974 "read": true, 00:19:43.974 "write": true, 00:19:43.974 "unmap": true, 00:19:43.974 "flush": true, 00:19:43.974 "reset": true, 00:19:43.974 "nvme_admin": false, 00:19:43.974 "nvme_io": false, 00:19:43.974 "nvme_io_md": false, 00:19:43.974 "write_zeroes": true, 00:19:43.974 "zcopy": true, 00:19:43.974 "get_zone_info": false, 00:19:43.974 "zone_management": false, 00:19:43.974 "zone_append": false, 00:19:43.974 "compare": false, 00:19:43.974 "compare_and_write": false, 00:19:43.974 "abort": true, 00:19:43.974 "seek_hole": false, 00:19:43.974 "seek_data": false, 00:19:43.974 "copy": true, 00:19:43.974 "nvme_iov_md": false 00:19:43.974 }, 00:19:43.974 "memory_domains": [ 00:19:43.974 { 00:19:43.974 "dma_device_id": "system", 00:19:43.974 "dma_device_type": 1 00:19:43.974 }, 00:19:43.974 { 00:19:43.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.974 "dma_device_type": 2 00:19:43.974 } 00:19:43.974 ], 00:19:43.974 
"driver_specific": { 00:19:43.974 "passthru": { 00:19:43.974 "name": "pt3", 00:19:43.974 "base_bdev_name": "malloc3" 00:19:43.974 } 00:19:43.974 } 00:19:43.974 }' 00:19:43.974 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.232 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.232 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:44.232 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.232 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.232 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:44.232 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.232 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.490 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:44.490 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.490 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.490 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:44.490 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:44.490 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:44.747 [2024-07-15 21:33:17.904275] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.747 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=1ed24fdc-dc99-4e16-9605-fcf42ac4a71b 00:19:44.747 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 1ed24fdc-dc99-4e16-9605-fcf42ac4a71b ']' 00:19:44.747 21:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:44.747 [2024-07-15 21:33:18.091681] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.747 [2024-07-15 21:33:18.091783] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.747 [2024-07-15 21:33:18.091908] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.747 [2024-07-15 21:33:18.092007] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.747 [2024-07-15 21:33:18.092039] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:19:44.747 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.747 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:45.004 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:45.004 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:45.004 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:45.004 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:45.314 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:45.314 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:45.314 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:45.314 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:45.576 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:45.576 21:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:45.835 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:46.093 [2024-07-15 21:33:19.253583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:46.093 [2024-07-15 21:33:19.255623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:46.093 [2024-07-15 21:33:19.255724] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:46.093 [2024-07-15 21:33:19.255793] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:46.093 [2024-07-15 
21:33:19.255926] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:46.093 [2024-07-15 21:33:19.255982] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:46.093 [2024-07-15 21:33:19.256033] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:46.093 [2024-07-15 21:33:19.256063] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:19:46.093 request: 00:19:46.093 { 00:19:46.093 "name": "raid_bdev1", 00:19:46.093 "raid_level": "raid0", 00:19:46.093 "base_bdevs": [ 00:19:46.093 "malloc1", 00:19:46.094 "malloc2", 00:19:46.094 "malloc3" 00:19:46.094 ], 00:19:46.094 "strip_size_kb": 64, 00:19:46.094 "superblock": false, 00:19:46.094 "method": "bdev_raid_create", 00:19:46.094 "req_id": 1 00:19:46.094 } 00:19:46.094 Got JSON-RPC error response 00:19:46.094 response: 00:19:46.094 { 00:19:46.094 "code": -17, 00:19:46.094 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:46.094 } 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:46.094 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:46.351 [2024-07-15 21:33:19.604885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:46.351 [2024-07-15 21:33:19.605068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.351 [2024-07-15 21:33:19.605118] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:46.351 [2024-07-15 21:33:19.605167] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.351 [2024-07-15 21:33:19.607447] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.351 [2024-07-15 21:33:19.607527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:46.351 [2024-07-15 21:33:19.607672] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:46.352 [2024-07-15 21:33:19.607737] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:46.352 pt1 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:46.352 21:33:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.352 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.609 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.609 "name": "raid_bdev1", 00:19:46.609 "uuid": "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b", 00:19:46.609 "strip_size_kb": 64, 00:19:46.609 "state": "configuring", 00:19:46.609 "raid_level": "raid0", 00:19:46.609 "superblock": true, 00:19:46.609 "num_base_bdevs": 3, 00:19:46.609 "num_base_bdevs_discovered": 1, 00:19:46.609 "num_base_bdevs_operational": 3, 00:19:46.609 "base_bdevs_list": [ 00:19:46.609 { 00:19:46.609 "name": "pt1", 00:19:46.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:46.609 "is_configured": true, 00:19:46.609 "data_offset": 2048, 00:19:46.609 "data_size": 63488 00:19:46.609 }, 00:19:46.609 { 00:19:46.609 "name": null, 00:19:46.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:46.609 "is_configured": false, 00:19:46.609 "data_offset": 2048, 00:19:46.609 "data_size": 63488 00:19:46.609 }, 00:19:46.609 { 00:19:46.609 "name": null, 00:19:46.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:46.609 "is_configured": false, 00:19:46.609 "data_offset": 2048, 00:19:46.609 "data_size": 63488 00:19:46.609 } 00:19:46.609 ] 00:19:46.609 }' 00:19:46.609 21:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.609 21:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.177 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:19:47.177 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:47.437 [2024-07-15 21:33:20.563219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:47.437 [2024-07-15 21:33:20.563376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.437 [2024-07-15 21:33:20.563429] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:47.437 [2024-07-15 21:33:20.563479] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.437 [2024-07-15 21:33:20.563970] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.437 [2024-07-15 21:33:20.564040] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:19:47.437 [2024-07-15 21:33:20.564176] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:47.437 [2024-07-15 21:33:20.564226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:47.437 pt2 00:19:47.437 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:47.437 [2024-07-15 21:33:20.798845] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.696 "name": "raid_bdev1", 00:19:47.696 "uuid": "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b", 00:19:47.696 "strip_size_kb": 64, 00:19:47.696 "state": "configuring", 00:19:47.696 "raid_level": "raid0", 00:19:47.696 "superblock": true, 00:19:47.696 "num_base_bdevs": 3, 00:19:47.696 "num_base_bdevs_discovered": 1, 00:19:47.696 "num_base_bdevs_operational": 3, 00:19:47.696 "base_bdevs_list": [ 00:19:47.696 { 00:19:47.696 "name": "pt1", 00:19:47.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:47.696 "is_configured": true, 00:19:47.696 "data_offset": 2048, 00:19:47.696 "data_size": 63488 00:19:47.696 }, 00:19:47.696 { 00:19:47.696 "name": null, 00:19:47.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:47.696 "is_configured": false, 00:19:47.696 "data_offset": 2048, 00:19:47.696 "data_size": 63488 00:19:47.696 }, 00:19:47.696 { 00:19:47.696 "name": null, 00:19:47.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:47.696 "is_configured": false, 00:19:47.696 "data_offset": 2048, 00:19:47.696 "data_size": 63488 00:19:47.696 } 00:19:47.696 ] 00:19:47.696 }' 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.696 21:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.265 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:48.265 21:33:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:48.265 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:48.524 [2024-07-15 21:33:21.757128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:48.524 [2024-07-15 21:33:21.757269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.524 [2024-07-15 21:33:21.757319] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:48.524 [2024-07-15 21:33:21.757373] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.524 [2024-07-15 21:33:21.757800] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.524 [2024-07-15 21:33:21.757873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:48.524 [2024-07-15 21:33:21.758002] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:48.524 [2024-07-15 21:33:21.758044] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:48.524 pt2 00:19:48.524 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:48.524 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:48.524 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:48.784 [2024-07-15 21:33:21.912865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:48.784 [2024-07-15 21:33:21.913000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.784 [2024-07-15 21:33:21.913039] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:48.784 [2024-07-15 21:33:21.913076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.784 [2024-07-15 21:33:21.913568] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.784 [2024-07-15 21:33:21.913628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:48.784 [2024-07-15 21:33:21.913749] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:48.784 [2024-07-15 21:33:21.913794] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:48.784 [2024-07-15 21:33:21.913931] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:19:48.784 [2024-07-15 21:33:21.913966] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:48.784 [2024-07-15 21:33:21.914085] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:48.784 [2024-07-15 21:33:21.914386] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:19:48.784 [2024-07-15 21:33:21.914426] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:19:48.784 [2024-07-15 21:33:21.914575] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.784 pt3 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.784 21:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.784 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.784 "name": "raid_bdev1", 00:19:48.784 "uuid": "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b", 00:19:48.784 "strip_size_kb": 64, 00:19:48.784 "state": "online", 00:19:48.784 "raid_level": "raid0", 00:19:48.784 "superblock": true, 00:19:48.784 "num_base_bdevs": 3, 00:19:48.784 "num_base_bdevs_discovered": 3, 00:19:48.784 "num_base_bdevs_operational": 3, 00:19:48.784 "base_bdevs_list": [ 00:19:48.784 { 00:19:48.784 "name": "pt1", 00:19:48.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:48.784 "is_configured": true, 00:19:48.784 "data_offset": 2048, 00:19:48.784 "data_size": 63488 00:19:48.784 }, 00:19:48.784 { 00:19:48.784 "name": "pt2", 00:19:48.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.784 "is_configured": true, 00:19:48.784 "data_offset": 2048, 00:19:48.784 "data_size": 63488 00:19:48.784 }, 00:19:48.784 { 00:19:48.784 "name": "pt3", 00:19:48.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:48.784 "is_configured": true, 00:19:48.784 "data_offset": 2048, 00:19:48.784 "data_size": 63488 00:19:48.784 } 00:19:48.784 ] 00:19:48.784 }' 00:19:48.784 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.784 21:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.353 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:49.353 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:49.353 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:49.353 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:49.353 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:49.353 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:19:49.353 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:49.353 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:49.613 [2024-07-15 21:33:22.875385] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.613 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:49.613 "name": "raid_bdev1", 00:19:49.613 "aliases": [ 00:19:49.613 "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b" 00:19:49.613 ], 00:19:49.613 "product_name": "Raid Volume", 00:19:49.613 "block_size": 512, 00:19:49.613 "num_blocks": 190464, 00:19:49.613 "uuid": "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b", 00:19:49.613 "assigned_rate_limits": { 00:19:49.613 "rw_ios_per_sec": 0, 00:19:49.613 "rw_mbytes_per_sec": 0, 00:19:49.613 "r_mbytes_per_sec": 0, 00:19:49.613 "w_mbytes_per_sec": 0 00:19:49.613 }, 00:19:49.613 "claimed": false, 00:19:49.613 "zoned": false, 00:19:49.613 "supported_io_types": { 00:19:49.613 "read": true, 00:19:49.613 "write": true, 00:19:49.613 "unmap": true, 00:19:49.613 "flush": true, 00:19:49.613 "reset": true, 00:19:49.613 "nvme_admin": false, 00:19:49.613 "nvme_io": false, 00:19:49.613 "nvme_io_md": false, 00:19:49.613 "write_zeroes": true, 00:19:49.613 "zcopy": false, 00:19:49.613 "get_zone_info": false, 00:19:49.613 "zone_management": false, 00:19:49.613 "zone_append": false, 00:19:49.613 "compare": false, 00:19:49.613 "compare_and_write": false, 00:19:49.613 "abort": false, 00:19:49.613 "seek_hole": false, 00:19:49.613 "seek_data": false, 00:19:49.613 "copy": false, 00:19:49.613 "nvme_iov_md": false 00:19:49.613 }, 00:19:49.613 "memory_domains": [ 00:19:49.613 { 00:19:49.613 "dma_device_id": "system", 00:19:49.613 "dma_device_type": 1 00:19:49.613 }, 00:19:49.613 { 00:19:49.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.613 "dma_device_type": 2 00:19:49.613 }, 00:19:49.613 { 00:19:49.613 "dma_device_id": "system", 00:19:49.613 "dma_device_type": 1 00:19:49.613 }, 00:19:49.613 { 00:19:49.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.613 "dma_device_type": 2 00:19:49.613 }, 00:19:49.613 { 00:19:49.613 "dma_device_id": "system", 00:19:49.613 "dma_device_type": 1 00:19:49.613 }, 00:19:49.613 { 00:19:49.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.613 "dma_device_type": 2 00:19:49.613 } 00:19:49.613 ], 00:19:49.613 "driver_specific": { 00:19:49.613 "raid": { 00:19:49.613 "uuid": "1ed24fdc-dc99-4e16-9605-fcf42ac4a71b", 00:19:49.613 "strip_size_kb": 64, 00:19:49.613 "state": "online", 00:19:49.613 "raid_level": "raid0", 00:19:49.613 "superblock": true, 00:19:49.613 "num_base_bdevs": 3, 00:19:49.613 "num_base_bdevs_discovered": 3, 00:19:49.613 "num_base_bdevs_operational": 3, 00:19:49.613 "base_bdevs_list": [ 00:19:49.613 { 00:19:49.613 "name": "pt1", 00:19:49.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:49.613 "is_configured": true, 00:19:49.613 "data_offset": 2048, 00:19:49.613 "data_size": 63488 00:19:49.613 }, 00:19:49.613 { 00:19:49.613 "name": "pt2", 00:19:49.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:49.613 "is_configured": true, 00:19:49.613 "data_offset": 2048, 00:19:49.613 "data_size": 63488 00:19:49.613 }, 00:19:49.613 { 00:19:49.613 "name": "pt3", 00:19:49.613 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:49.613 "is_configured": true, 00:19:49.613 "data_offset": 2048, 00:19:49.613 "data_size": 63488 00:19:49.613 } 
00:19:49.613 ] 00:19:49.613 } 00:19:49.613 } 00:19:49.613 }' 00:19:49.613 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:49.613 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:49.613 pt2 00:19:49.613 pt3' 00:19:49.613 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:49.613 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:49.613 21:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:49.877 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:49.878 "name": "pt1", 00:19:49.878 "aliases": [ 00:19:49.878 "00000000-0000-0000-0000-000000000001" 00:19:49.878 ], 00:19:49.878 "product_name": "passthru", 00:19:49.878 "block_size": 512, 00:19:49.878 "num_blocks": 65536, 00:19:49.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:49.878 "assigned_rate_limits": { 00:19:49.878 "rw_ios_per_sec": 0, 00:19:49.878 "rw_mbytes_per_sec": 0, 00:19:49.878 "r_mbytes_per_sec": 0, 00:19:49.878 "w_mbytes_per_sec": 0 00:19:49.878 }, 00:19:49.878 "claimed": true, 00:19:49.878 "claim_type": "exclusive_write", 00:19:49.878 "zoned": false, 00:19:49.878 "supported_io_types": { 00:19:49.878 "read": true, 00:19:49.878 "write": true, 00:19:49.878 "unmap": true, 00:19:49.878 "flush": true, 00:19:49.878 "reset": true, 00:19:49.878 "nvme_admin": false, 00:19:49.878 "nvme_io": false, 00:19:49.878 "nvme_io_md": false, 00:19:49.878 "write_zeroes": true, 00:19:49.878 "zcopy": true, 00:19:49.878 "get_zone_info": false, 00:19:49.878 "zone_management": false, 00:19:49.878 "zone_append": false, 00:19:49.878 "compare": false, 00:19:49.878 "compare_and_write": false, 00:19:49.878 "abort": true, 00:19:49.878 "seek_hole": false, 00:19:49.878 "seek_data": false, 00:19:49.878 "copy": true, 00:19:49.878 "nvme_iov_md": false 00:19:49.878 }, 00:19:49.878 "memory_domains": [ 00:19:49.878 { 00:19:49.878 "dma_device_id": "system", 00:19:49.878 "dma_device_type": 1 00:19:49.878 }, 00:19:49.878 { 00:19:49.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.878 "dma_device_type": 2 00:19:49.878 } 00:19:49.878 ], 00:19:49.878 "driver_specific": { 00:19:49.878 "passthru": { 00:19:49.878 "name": "pt1", 00:19:49.878 "base_bdev_name": "malloc1" 00:19:49.878 } 00:19:49.878 } 00:19:49.878 }' 00:19:49.878 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.878 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.878 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:49.878 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.142 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.142 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.142 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.142 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.142 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.142 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:19:50.142 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.402 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.402 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.402 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:50.402 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:50.402 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:50.402 "name": "pt2", 00:19:50.402 "aliases": [ 00:19:50.402 "00000000-0000-0000-0000-000000000002" 00:19:50.402 ], 00:19:50.402 "product_name": "passthru", 00:19:50.402 "block_size": 512, 00:19:50.402 "num_blocks": 65536, 00:19:50.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.402 "assigned_rate_limits": { 00:19:50.402 "rw_ios_per_sec": 0, 00:19:50.402 "rw_mbytes_per_sec": 0, 00:19:50.402 "r_mbytes_per_sec": 0, 00:19:50.402 "w_mbytes_per_sec": 0 00:19:50.402 }, 00:19:50.402 "claimed": true, 00:19:50.402 "claim_type": "exclusive_write", 00:19:50.402 "zoned": false, 00:19:50.402 "supported_io_types": { 00:19:50.402 "read": true, 00:19:50.402 "write": true, 00:19:50.402 "unmap": true, 00:19:50.402 "flush": true, 00:19:50.402 "reset": true, 00:19:50.402 "nvme_admin": false, 00:19:50.402 "nvme_io": false, 00:19:50.402 "nvme_io_md": false, 00:19:50.402 "write_zeroes": true, 00:19:50.402 "zcopy": true, 00:19:50.402 "get_zone_info": false, 00:19:50.402 "zone_management": false, 00:19:50.402 "zone_append": false, 00:19:50.402 "compare": false, 00:19:50.402 "compare_and_write": false, 00:19:50.402 "abort": true, 00:19:50.402 "seek_hole": false, 00:19:50.402 "seek_data": false, 00:19:50.402 "copy": true, 00:19:50.402 "nvme_iov_md": false 00:19:50.402 }, 00:19:50.402 "memory_domains": [ 00:19:50.402 { 00:19:50.402 "dma_device_id": "system", 00:19:50.402 "dma_device_type": 1 00:19:50.402 }, 00:19:50.402 { 00:19:50.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.402 "dma_device_type": 2 00:19:50.402 } 00:19:50.402 ], 00:19:50.402 "driver_specific": { 00:19:50.402 "passthru": { 00:19:50.402 "name": "pt2", 00:19:50.402 "base_bdev_name": "malloc2" 00:19:50.402 } 00:19:50.402 } 00:19:50.402 }' 00:19:50.402 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.402 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.661 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:50.661 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.661 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.661 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.661 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.661 21:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.661 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.661 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.920 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.920 21:33:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.920 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.920 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:50.920 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:51.179 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:51.179 "name": "pt3", 00:19:51.179 "aliases": [ 00:19:51.179 "00000000-0000-0000-0000-000000000003" 00:19:51.179 ], 00:19:51.179 "product_name": "passthru", 00:19:51.179 "block_size": 512, 00:19:51.179 "num_blocks": 65536, 00:19:51.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:51.179 "assigned_rate_limits": { 00:19:51.179 "rw_ios_per_sec": 0, 00:19:51.179 "rw_mbytes_per_sec": 0, 00:19:51.179 "r_mbytes_per_sec": 0, 00:19:51.179 "w_mbytes_per_sec": 0 00:19:51.179 }, 00:19:51.179 "claimed": true, 00:19:51.179 "claim_type": "exclusive_write", 00:19:51.179 "zoned": false, 00:19:51.179 "supported_io_types": { 00:19:51.179 "read": true, 00:19:51.179 "write": true, 00:19:51.179 "unmap": true, 00:19:51.179 "flush": true, 00:19:51.179 "reset": true, 00:19:51.179 "nvme_admin": false, 00:19:51.179 "nvme_io": false, 00:19:51.179 "nvme_io_md": false, 00:19:51.179 "write_zeroes": true, 00:19:51.179 "zcopy": true, 00:19:51.179 "get_zone_info": false, 00:19:51.179 "zone_management": false, 00:19:51.179 "zone_append": false, 00:19:51.179 "compare": false, 00:19:51.179 "compare_and_write": false, 00:19:51.179 "abort": true, 00:19:51.179 "seek_hole": false, 00:19:51.179 "seek_data": false, 00:19:51.179 "copy": true, 00:19:51.179 "nvme_iov_md": false 00:19:51.179 }, 00:19:51.179 "memory_domains": [ 00:19:51.179 { 00:19:51.179 "dma_device_id": "system", 00:19:51.179 "dma_device_type": 1 00:19:51.179 }, 00:19:51.179 { 00:19:51.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.179 "dma_device_type": 2 00:19:51.179 } 00:19:51.179 ], 00:19:51.179 "driver_specific": { 00:19:51.179 "passthru": { 00:19:51.179 "name": "pt3", 00:19:51.179 "base_bdev_name": "malloc3" 00:19:51.179 } 00:19:51.179 } 00:19:51.179 }' 00:19:51.179 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.179 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.179 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:51.179 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.179 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.179 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:51.179 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.438 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.438 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:51.438 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.438 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.438 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:51.438 21:33:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:51.438 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:51.698 [2024-07-15 21:33:24.839972] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 1ed24fdc-dc99-4e16-9605-fcf42ac4a71b '!=' 1ed24fdc-dc99-4e16-9605-fcf42ac4a71b ']' 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 128199 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 128199 ']' 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 128199 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128199 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128199' 00:19:51.698 killing process with pid 128199 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 128199 00:19:51.698 [2024-07-15 21:33:24.875833] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:51.698 21:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 128199 00:19:51.698 [2024-07-15 21:33:24.875937] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.698 [2024-07-15 21:33:24.876006] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.698 [2024-07-15 21:33:24.876032] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:19:51.958 [2024-07-15 21:33:25.150537] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.339 ************************************ 00:19:53.339 END TEST raid_superblock_test 00:19:53.339 ************************************ 00:19:53.339 21:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:53.339 00:19:53.339 real 0m13.598s 00:19:53.339 user 0m24.231s 00:19:53.339 sys 0m1.602s 00:19:53.339 21:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.339 21:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.339 21:33:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:53.339 21:33:26 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:19:53.339 21:33:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:53.339 21:33:26 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.339 21:33:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.339 ************************************ 00:19:53.339 START TEST raid_read_error_test 00:19:53.339 ************************************ 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dlkJvr65tb 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=128686 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 128686 /var/tmp/spdk-raid.sock 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 
60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 128686 ']' 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:53.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.339 21:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.339 [2024-07-15 21:33:26.452749] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:19:53.339 [2024-07-15 21:33:26.452950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128686 ] 00:19:53.339 [2024-07-15 21:33:26.611117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.599 [2024-07-15 21:33:26.851991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.857 [2024-07-15 21:33:27.077422] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.115 21:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.115 21:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:54.115 21:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:54.115 21:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:54.115 BaseBdev1_malloc 00:19:54.115 21:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:54.373 true 00:19:54.373 21:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:54.632 [2024-07-15 21:33:27.821774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:54.632 [2024-07-15 21:33:27.821989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.632 [2024-07-15 21:33:27.822041] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:54.632 [2024-07-15 21:33:27.822077] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.632 [2024-07-15 21:33:27.824470] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.632 [2024-07-15 21:33:27.824549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:54.632 BaseBdev1 00:19:54.632 21:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:54.632 21:33:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:54.892 BaseBdev2_malloc 00:19:54.892 21:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:54.892 true 00:19:54.892 21:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:55.151 [2024-07-15 21:33:28.421072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:55.151 [2024-07-15 21:33:28.421313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.151 [2024-07-15 21:33:28.421369] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:55.151 [2024-07-15 21:33:28.421407] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.151 [2024-07-15 21:33:28.423797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.151 [2024-07-15 21:33:28.423897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:55.151 BaseBdev2 00:19:55.151 21:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:55.151 21:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:55.409 BaseBdev3_malloc 00:19:55.409 21:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:55.668 true 00:19:55.668 21:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:55.668 [2024-07-15 21:33:29.018673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:55.668 [2024-07-15 21:33:29.018878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.668 [2024-07-15 21:33:29.018929] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:55.668 [2024-07-15 21:33:29.018969] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.668 [2024-07-15 21:33:29.021246] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.668 [2024-07-15 21:33:29.021341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:55.668 BaseBdev3 00:19:55.668 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:55.926 [2024-07-15 21:33:29.202466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.926 [2024-07-15 21:33:29.204496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:55.926 [2024-07-15 21:33:29.204612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:55.926 
[2024-07-15 21:33:29.204835] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:55.926 [2024-07-15 21:33:29.204860] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:55.926 [2024-07-15 21:33:29.205039] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:55.926 [2024-07-15 21:33:29.205417] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:55.926 [2024-07-15 21:33:29.205457] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:55.926 [2024-07-15 21:33:29.205648] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.926 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.184 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.184 "name": "raid_bdev1", 00:19:56.184 "uuid": "07a81b3b-679c-46fd-b92a-a5d2d2192995", 00:19:56.184 "strip_size_kb": 64, 00:19:56.184 "state": "online", 00:19:56.184 "raid_level": "raid0", 00:19:56.184 "superblock": true, 00:19:56.184 "num_base_bdevs": 3, 00:19:56.184 "num_base_bdevs_discovered": 3, 00:19:56.184 "num_base_bdevs_operational": 3, 00:19:56.184 "base_bdevs_list": [ 00:19:56.184 { 00:19:56.184 "name": "BaseBdev1", 00:19:56.184 "uuid": "104c218f-005e-5a75-a8d5-b0e1158b4c18", 00:19:56.184 "is_configured": true, 00:19:56.184 "data_offset": 2048, 00:19:56.184 "data_size": 63488 00:19:56.184 }, 00:19:56.184 { 00:19:56.184 "name": "BaseBdev2", 00:19:56.184 "uuid": "ccad703c-39b0-56c0-a8c9-c0052079ff66", 00:19:56.184 "is_configured": true, 00:19:56.184 "data_offset": 2048, 00:19:56.184 "data_size": 63488 00:19:56.184 }, 00:19:56.184 { 00:19:56.184 "name": "BaseBdev3", 00:19:56.184 "uuid": "5c6a437a-5e29-5847-b727-3efae3c431cc", 00:19:56.184 "is_configured": true, 00:19:56.184 "data_offset": 2048, 00:19:56.184 "data_size": 63488 00:19:56.184 } 00:19:56.184 ] 00:19:56.184 }' 00:19:56.184 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.184 21:33:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.749 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:56.749 21:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:56.749 [2024-07-15 21:33:30.046176] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:57.684 21:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.943 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.202 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.202 "name": "raid_bdev1", 00:19:58.202 "uuid": "07a81b3b-679c-46fd-b92a-a5d2d2192995", 00:19:58.202 "strip_size_kb": 64, 00:19:58.202 "state": "online", 00:19:58.202 "raid_level": "raid0", 00:19:58.202 "superblock": true, 00:19:58.202 "num_base_bdevs": 3, 00:19:58.202 "num_base_bdevs_discovered": 3, 00:19:58.202 "num_base_bdevs_operational": 3, 00:19:58.202 "base_bdevs_list": [ 00:19:58.202 { 00:19:58.202 "name": "BaseBdev1", 00:19:58.202 "uuid": "104c218f-005e-5a75-a8d5-b0e1158b4c18", 00:19:58.202 "is_configured": true, 00:19:58.202 "data_offset": 2048, 00:19:58.202 "data_size": 63488 00:19:58.202 }, 00:19:58.202 { 00:19:58.202 "name": "BaseBdev2", 00:19:58.202 "uuid": "ccad703c-39b0-56c0-a8c9-c0052079ff66", 00:19:58.202 "is_configured": true, 00:19:58.202 "data_offset": 2048, 00:19:58.202 "data_size": 63488 00:19:58.202 }, 00:19:58.202 { 00:19:58.202 "name": "BaseBdev3", 00:19:58.202 "uuid": "5c6a437a-5e29-5847-b727-3efae3c431cc", 00:19:58.202 "is_configured": true, 00:19:58.202 "data_offset": 2048, 00:19:58.202 "data_size": 63488 
00:19:58.202 } 00:19:58.202 ] 00:19:58.202 }' 00:19:58.202 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.202 21:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.771 21:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:58.771 [2024-07-15 21:33:32.085299] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.771 [2024-07-15 21:33:32.085437] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.771 [2024-07-15 21:33:32.087879] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.771 [2024-07-15 21:33:32.087961] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.771 [2024-07-15 21:33:32.088009] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.771 [2024-07-15 21:33:32.088035] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:58.771 0 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 128686 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 128686 ']' 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 128686 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128686 00:19:58.771 killing process with pid 128686 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128686' 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 128686 00:19:58.771 21:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 128686 00:19:58.771 [2024-07-15 21:33:32.143273] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.030 [2024-07-15 21:33:32.378042] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dlkJvr65tb 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:00.406 ************************************ 00:20:00.406 END TEST raid_read_error_test 00:20:00.406 ************************************ 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 
00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:20:00.406 00:20:00.406 real 0m7.383s 00:20:00.406 user 0m10.713s 00:20:00.406 sys 0m0.891s 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.406 21:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.665 21:33:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:00.665 21:33:33 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:20:00.665 21:33:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:00.665 21:33:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.665 21:33:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.665 ************************************ 00:20:00.665 START TEST raid_write_error_test 00:20:00.665 ************************************ 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # 
strip_size=64 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QjO4o3hlfU 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=128904 00:20:00.665 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 128904 /var/tmp/spdk-raid.sock 00:20:00.666 21:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:00.666 21:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 128904 ']' 00:20:00.666 21:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:00.666 21:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.666 21:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:00.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:00.666 21:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.666 21:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.666 [2024-07-15 21:33:33.914404] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:20:00.666 [2024-07-15 21:33:33.914604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128904 ] 00:20:00.924 [2024-07-15 21:33:34.077427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.182 [2024-07-15 21:33:34.311076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.182 [2024-07-15 21:33:34.531857] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.440 21:33:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.440 21:33:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:01.440 21:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:01.440 21:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:01.698 BaseBdev1_malloc 00:20:01.698 21:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:01.956 true 00:20:01.956 21:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:01.956 [2024-07-15 21:33:35.290395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:01.956 [2024-07-15 21:33:35.290581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.956 [2024-07-15 21:33:35.290632] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:01.956 [2024-07-15 21:33:35.290668] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.956 [2024-07-15 21:33:35.292860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.956 [2024-07-15 21:33:35.292939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:01.956 BaseBdev1 00:20:01.957 21:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:01.957 21:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:02.220 BaseBdev2_malloc 00:20:02.221 21:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:02.485 true 00:20:02.485 21:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:02.744 [2024-07-15 21:33:35.916993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:02.744 [2024-07-15 21:33:35.917192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.744 [2024-07-15 21:33:35.917241] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:02.744 [2024-07-15 
21:33:35.917278] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.744 [2024-07-15 21:33:35.919373] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.744 [2024-07-15 21:33:35.919445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:02.744 BaseBdev2 00:20:02.744 21:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:02.744 21:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:03.002 BaseBdev3_malloc 00:20:03.002 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:03.002 true 00:20:03.003 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:03.261 [2024-07-15 21:33:36.504182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:03.261 [2024-07-15 21:33:36.504412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.261 [2024-07-15 21:33:36.504466] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:03.261 [2024-07-15 21:33:36.504512] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.261 [2024-07-15 21:33:36.506952] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.261 [2024-07-15 21:33:36.507037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:03.261 BaseBdev3 00:20:03.261 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:03.519 [2024-07-15 21:33:36.683889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.520 [2024-07-15 21:33:36.685659] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.520 [2024-07-15 21:33:36.685771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:03.520 [2024-07-15 21:33:36.685983] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:03.520 [2024-07-15 21:33:36.686021] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:03.520 [2024-07-15 21:33:36.686189] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:03.520 [2024-07-15 21:33:36.686538] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:03.520 [2024-07-15 21:33:36.686577] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:03.520 [2024-07-15 21:33:36.686757] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:03.520 21:33:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.520 "name": "raid_bdev1", 00:20:03.520 "uuid": "16ef2b21-4bed-48b7-97fa-520890e7b706", 00:20:03.520 "strip_size_kb": 64, 00:20:03.520 "state": "online", 00:20:03.520 "raid_level": "raid0", 00:20:03.520 "superblock": true, 00:20:03.520 "num_base_bdevs": 3, 00:20:03.520 "num_base_bdevs_discovered": 3, 00:20:03.520 "num_base_bdevs_operational": 3, 00:20:03.520 "base_bdevs_list": [ 00:20:03.520 { 00:20:03.520 "name": "BaseBdev1", 00:20:03.520 "uuid": "ac6e7e3d-9266-52f2-93ce-4e9528e54058", 00:20:03.520 "is_configured": true, 00:20:03.520 "data_offset": 2048, 00:20:03.520 "data_size": 63488 00:20:03.520 }, 00:20:03.520 { 00:20:03.520 "name": "BaseBdev2", 00:20:03.520 "uuid": "786b547e-ded6-5178-8454-d2cde00515f9", 00:20:03.520 "is_configured": true, 00:20:03.520 "data_offset": 2048, 00:20:03.520 "data_size": 63488 00:20:03.520 }, 00:20:03.520 { 00:20:03.520 "name": "BaseBdev3", 00:20:03.520 "uuid": "aa157639-af9e-5cee-9a4a-7e2b049a98d4", 00:20:03.520 "is_configured": true, 00:20:03.520 "data_offset": 2048, 00:20:03.520 "data_size": 63488 00:20:03.520 } 00:20:03.520 ] 00:20:03.520 }' 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:03.520 21:33:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.455 21:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:04.455 21:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:04.455 [2024-07-15 21:33:37.575699] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:05.390 21:33:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.390 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.648 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:05.648 "name": "raid_bdev1", 00:20:05.648 "uuid": "16ef2b21-4bed-48b7-97fa-520890e7b706", 00:20:05.648 "strip_size_kb": 64, 00:20:05.648 "state": "online", 00:20:05.648 "raid_level": "raid0", 00:20:05.648 "superblock": true, 00:20:05.648 "num_base_bdevs": 3, 00:20:05.648 "num_base_bdevs_discovered": 3, 00:20:05.648 "num_base_bdevs_operational": 3, 00:20:05.648 "base_bdevs_list": [ 00:20:05.648 { 00:20:05.648 "name": "BaseBdev1", 00:20:05.648 "uuid": "ac6e7e3d-9266-52f2-93ce-4e9528e54058", 00:20:05.648 "is_configured": true, 00:20:05.648 "data_offset": 2048, 00:20:05.648 "data_size": 63488 00:20:05.648 }, 00:20:05.648 { 00:20:05.648 "name": "BaseBdev2", 00:20:05.648 "uuid": "786b547e-ded6-5178-8454-d2cde00515f9", 00:20:05.648 "is_configured": true, 00:20:05.648 "data_offset": 2048, 00:20:05.648 "data_size": 63488 00:20:05.648 }, 00:20:05.648 { 00:20:05.648 "name": "BaseBdev3", 00:20:05.648 "uuid": "aa157639-af9e-5cee-9a4a-7e2b049a98d4", 00:20:05.648 "is_configured": true, 00:20:05.648 "data_offset": 2048, 00:20:05.648 "data_size": 63488 00:20:05.648 } 00:20:05.648 ] 00:20:05.648 }' 00:20:05.648 21:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:05.648 21:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.214 21:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:06.473 [2024-07-15 21:33:39.590116] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.473 [2024-07-15 21:33:39.590255] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.473 [2024-07-15 21:33:39.592830] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.473 [2024-07-15 21:33:39.592915] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.473 [2024-07-15 21:33:39.592961] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.473 [2024-07-15 21:33:39.592984] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:06.473 0 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 128904 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 128904 ']' 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 128904 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128904 00:20:06.473 killing process with pid 128904 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128904' 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 128904 00:20:06.473 21:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 128904 00:20:06.473 [2024-07-15 21:33:39.632113] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.731 [2024-07-15 21:33:39.867218] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QjO4o3hlfU 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:08.107 ************************************ 00:20:08.107 END TEST raid_write_error_test 00:20:08.107 ************************************ 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:20:08.107 00:20:08.107 real 0m7.411s 00:20:08.107 user 0m10.761s 00:20:08.107 sys 0m0.891s 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.107 21:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.107 21:33:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:08.107 21:33:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:08.107 21:33:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:20:08.107 21:33:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:08.107 21:33:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.107 21:33:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.107 
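The trace above closes out raid_write_error_test: each base position is a malloc bdev wrapped first in an error bdev and then in a passthru bdev, the three passthru bdevs are assembled into a raid0 volume with a superblock, write failures are injected into EE_BaseBdev1_malloc while bdevperf runs, and the resulting fail rate (0.50) is accepted because raid0 has no redundancy. The sketch below restates that RPC sequence in isolation; it is illustrative only, assumes an SPDK bdevperf instance is already serving RPCs on /var/tmp/spdk-raid.sock (as the harness arranges), and the rpc/sock shell variables are shorthand introduced here, not names taken from the script.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3; do
        # each slot: a malloc bdev, an error bdev on top of it, and a passthru bdev named BaseBdev$i
        $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $rpc -s $sock bdev_error_create BaseBdev${i}_malloc
        $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # assemble raid0 with a 64 KiB strip size and an on-disk superblock (-s)
    $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # make every write that reaches the first base bdev fail, then drive I/O
    $rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
    $rpc -s $sock bdev_raid_delete raid_bdev1

The grep/awk pipeline at the end of the trace simply pulls the per-second failure count for raid_bdev1 out of the bdevperf output and asserts that it is non-zero.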
************************************ 00:20:08.107 START TEST raid_state_function_test 00:20:08.107 ************************************ 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=129112 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 129112' 00:20:08.107 
Process raid pid: 129112 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 129112 /var/tmp/spdk-raid.sock 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 129112 ']' 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:08.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.107 21:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.107 [2024-07-15 21:33:41.388057] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:20:08.107 [2024-07-15 21:33:41.388249] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.365 [2024-07-15 21:33:41.549831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.625 [2024-07-15 21:33:41.794425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.886 [2024-07-15 21:33:42.012611] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.886 21:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.886 21:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:08.886 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:09.143 [2024-07-15 21:33:42.336649] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:09.143 [2024-07-15 21:33:42.336839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:09.143 [2024-07-15 21:33:42.336872] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.143 [2024-07-15 21:33:42.336910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.143 [2024-07-15 21:33:42.336926] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:09.143 [2024-07-15 21:33:42.336947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:09.143 21:33:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.143 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.400 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.400 "name": "Existed_Raid", 00:20:09.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.400 "strip_size_kb": 64, 00:20:09.400 "state": "configuring", 00:20:09.400 "raid_level": "concat", 00:20:09.400 "superblock": false, 00:20:09.400 "num_base_bdevs": 3, 00:20:09.400 "num_base_bdevs_discovered": 0, 00:20:09.400 "num_base_bdevs_operational": 3, 00:20:09.400 "base_bdevs_list": [ 00:20:09.400 { 00:20:09.400 "name": "BaseBdev1", 00:20:09.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.400 "is_configured": false, 00:20:09.400 "data_offset": 0, 00:20:09.400 "data_size": 0 00:20:09.400 }, 00:20:09.400 { 00:20:09.400 "name": "BaseBdev2", 00:20:09.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.400 "is_configured": false, 00:20:09.400 "data_offset": 0, 00:20:09.400 "data_size": 0 00:20:09.400 }, 00:20:09.400 { 00:20:09.400 "name": "BaseBdev3", 00:20:09.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.400 "is_configured": false, 00:20:09.400 "data_offset": 0, 00:20:09.400 "data_size": 0 00:20:09.400 } 00:20:09.400 ] 00:20:09.400 }' 00:20:09.400 21:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.400 21:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.968 21:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:09.968 [2024-07-15 21:33:43.310900] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.968 [2024-07-15 21:33:43.311044] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:09.968 21:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:10.225 [2024-07-15 21:33:43.490618] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.225 [2024-07-15 21:33:43.490762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.225 [2024-07-15 21:33:43.490791] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.225 [2024-07-15 21:33:43.490818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:20:10.225 [2024-07-15 21:33:43.490833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.225 [2024-07-15 21:33:43.490872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.225 21:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:10.481 [2024-07-15 21:33:43.711194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.481 BaseBdev1 00:20:10.481 21:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:10.481 21:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:10.481 21:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:10.481 21:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:10.481 21:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:10.481 21:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:10.481 21:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:10.738 21:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:10.738 [ 00:20:10.738 { 00:20:10.738 "name": "BaseBdev1", 00:20:10.738 "aliases": [ 00:20:10.738 "76c4a6d7-50f3-4024-abe5-734d409585f8" 00:20:10.738 ], 00:20:10.738 "product_name": "Malloc disk", 00:20:10.738 "block_size": 512, 00:20:10.738 "num_blocks": 65536, 00:20:10.738 "uuid": "76c4a6d7-50f3-4024-abe5-734d409585f8", 00:20:10.738 "assigned_rate_limits": { 00:20:10.738 "rw_ios_per_sec": 0, 00:20:10.738 "rw_mbytes_per_sec": 0, 00:20:10.738 "r_mbytes_per_sec": 0, 00:20:10.738 "w_mbytes_per_sec": 0 00:20:10.738 }, 00:20:10.738 "claimed": true, 00:20:10.738 "claim_type": "exclusive_write", 00:20:10.738 "zoned": false, 00:20:10.738 "supported_io_types": { 00:20:10.738 "read": true, 00:20:10.738 "write": true, 00:20:10.738 "unmap": true, 00:20:10.738 "flush": true, 00:20:10.738 "reset": true, 00:20:10.738 "nvme_admin": false, 00:20:10.738 "nvme_io": false, 00:20:10.738 "nvme_io_md": false, 00:20:10.738 "write_zeroes": true, 00:20:10.738 "zcopy": true, 00:20:10.738 "get_zone_info": false, 00:20:10.738 "zone_management": false, 00:20:10.738 "zone_append": false, 00:20:10.738 "compare": false, 00:20:10.738 "compare_and_write": false, 00:20:10.738 "abort": true, 00:20:10.738 "seek_hole": false, 00:20:10.738 "seek_data": false, 00:20:10.738 "copy": true, 00:20:10.738 "nvme_iov_md": false 00:20:10.738 }, 00:20:10.738 "memory_domains": [ 00:20:10.738 { 00:20:10.738 "dma_device_id": "system", 00:20:10.738 "dma_device_type": 1 00:20:10.738 }, 00:20:10.738 { 00:20:10.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.738 "dma_device_type": 2 00:20:10.738 } 00:20:10.738 ], 00:20:10.738 "driver_specific": {} 00:20:10.738 } 00:20:10.738 ] 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.738 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.996 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.996 "name": "Existed_Raid", 00:20:10.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.996 "strip_size_kb": 64, 00:20:10.996 "state": "configuring", 00:20:10.996 "raid_level": "concat", 00:20:10.996 "superblock": false, 00:20:10.996 "num_base_bdevs": 3, 00:20:10.996 "num_base_bdevs_discovered": 1, 00:20:10.996 "num_base_bdevs_operational": 3, 00:20:10.996 "base_bdevs_list": [ 00:20:10.996 { 00:20:10.996 "name": "BaseBdev1", 00:20:10.996 "uuid": "76c4a6d7-50f3-4024-abe5-734d409585f8", 00:20:10.996 "is_configured": true, 00:20:10.996 "data_offset": 0, 00:20:10.996 "data_size": 65536 00:20:10.996 }, 00:20:10.996 { 00:20:10.996 "name": "BaseBdev2", 00:20:10.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.996 "is_configured": false, 00:20:10.996 "data_offset": 0, 00:20:10.996 "data_size": 0 00:20:10.996 }, 00:20:10.996 { 00:20:10.996 "name": "BaseBdev3", 00:20:10.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.996 "is_configured": false, 00:20:10.996 "data_offset": 0, 00:20:10.996 "data_size": 0 00:20:10.996 } 00:20:10.996 ] 00:20:10.996 }' 00:20:10.996 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.996 21:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.563 21:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:11.821 [2024-07-15 21:33:44.993221] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:11.821 [2024-07-15 21:33:44.993358] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:20:11.821 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:11.821 [2024-07-15 21:33:45.188924] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.821 [2024-07-15 21:33:45.190691] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.821 [2024-07-15 21:33:45.190793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.821 [2024-07-15 21:33:45.190824] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:11.821 [2024-07-15 21:33:45.190867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.079 "name": "Existed_Raid", 00:20:12.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.079 "strip_size_kb": 64, 00:20:12.079 "state": "configuring", 00:20:12.079 "raid_level": "concat", 00:20:12.079 "superblock": false, 00:20:12.079 "num_base_bdevs": 3, 00:20:12.079 "num_base_bdevs_discovered": 1, 00:20:12.079 "num_base_bdevs_operational": 3, 00:20:12.079 "base_bdevs_list": [ 00:20:12.079 { 00:20:12.079 "name": "BaseBdev1", 00:20:12.079 "uuid": "76c4a6d7-50f3-4024-abe5-734d409585f8", 00:20:12.079 "is_configured": true, 00:20:12.079 "data_offset": 0, 00:20:12.079 "data_size": 65536 00:20:12.079 }, 00:20:12.079 { 00:20:12.079 "name": "BaseBdev2", 00:20:12.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.079 "is_configured": false, 00:20:12.079 "data_offset": 0, 00:20:12.079 "data_size": 0 00:20:12.079 }, 00:20:12.079 { 00:20:12.079 "name": "BaseBdev3", 00:20:12.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.079 "is_configured": false, 00:20:12.079 "data_offset": 0, 00:20:12.079 "data_size": 0 00:20:12.079 } 00:20:12.079 ] 00:20:12.079 }' 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.079 21:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.647 21:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:12.941 [2024-07-15 21:33:46.198621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:12.941 BaseBdev2 00:20:12.941 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:12.941 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:12.941 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:12.941 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:12.941 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:12.941 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:12.941 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:13.199 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:13.458 [ 00:20:13.458 { 00:20:13.458 "name": "BaseBdev2", 00:20:13.458 "aliases": [ 00:20:13.458 "8e115226-0342-498f-a2b7-6909ac2a8100" 00:20:13.458 ], 00:20:13.458 "product_name": "Malloc disk", 00:20:13.458 "block_size": 512, 00:20:13.458 "num_blocks": 65536, 00:20:13.458 "uuid": "8e115226-0342-498f-a2b7-6909ac2a8100", 00:20:13.458 "assigned_rate_limits": { 00:20:13.458 "rw_ios_per_sec": 0, 00:20:13.458 "rw_mbytes_per_sec": 0, 00:20:13.458 "r_mbytes_per_sec": 0, 00:20:13.458 "w_mbytes_per_sec": 0 00:20:13.458 }, 00:20:13.458 "claimed": true, 00:20:13.458 "claim_type": "exclusive_write", 00:20:13.458 "zoned": false, 00:20:13.458 "supported_io_types": { 00:20:13.458 "read": true, 00:20:13.458 "write": true, 00:20:13.458 "unmap": true, 00:20:13.458 "flush": true, 00:20:13.458 "reset": true, 00:20:13.458 "nvme_admin": false, 00:20:13.458 "nvme_io": false, 00:20:13.458 "nvme_io_md": false, 00:20:13.458 "write_zeroes": true, 00:20:13.458 "zcopy": true, 00:20:13.458 "get_zone_info": false, 00:20:13.458 "zone_management": false, 00:20:13.458 "zone_append": false, 00:20:13.458 "compare": false, 00:20:13.458 "compare_and_write": false, 00:20:13.458 "abort": true, 00:20:13.458 "seek_hole": false, 00:20:13.458 "seek_data": false, 00:20:13.458 "copy": true, 00:20:13.458 "nvme_iov_md": false 00:20:13.458 }, 00:20:13.458 "memory_domains": [ 00:20:13.458 { 00:20:13.458 "dma_device_id": "system", 00:20:13.458 "dma_device_type": 1 00:20:13.458 }, 00:20:13.458 { 00:20:13.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.458 "dma_device_type": 2 00:20:13.458 } 00:20:13.458 ], 00:20:13.458 "driver_specific": {} 00:20:13.458 } 00:20:13.458 ] 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:13.458 21:33:46 
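Each verify_raid_bdev_state call in this test reduces to the same query: dump all raid bdevs over RPC, select the one of interest with jq, and compare its state and base-bdev counts with what the current step expects. A minimal standalone version of that check, built only from the rpc.py and jq invocations already visible in the trace (the helper variables here are illustrative), could look like this:

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r .state <<< "$info")
    discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
    # while fewer than all three base bdevs exist, the array must still report "configuring"
    [[ $state == configuring && $discovered -lt 3 ]] ||
        echo "unexpected: state=$state discovered=$discovered"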
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.458 "name": "Existed_Raid", 00:20:13.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.458 "strip_size_kb": 64, 00:20:13.458 "state": "configuring", 00:20:13.458 "raid_level": "concat", 00:20:13.458 "superblock": false, 00:20:13.458 "num_base_bdevs": 3, 00:20:13.458 "num_base_bdevs_discovered": 2, 00:20:13.458 "num_base_bdevs_operational": 3, 00:20:13.458 "base_bdevs_list": [ 00:20:13.458 { 00:20:13.458 "name": "BaseBdev1", 00:20:13.458 "uuid": "76c4a6d7-50f3-4024-abe5-734d409585f8", 00:20:13.458 "is_configured": true, 00:20:13.458 "data_offset": 0, 00:20:13.458 "data_size": 65536 00:20:13.458 }, 00:20:13.458 { 00:20:13.458 "name": "BaseBdev2", 00:20:13.458 "uuid": "8e115226-0342-498f-a2b7-6909ac2a8100", 00:20:13.458 "is_configured": true, 00:20:13.458 "data_offset": 0, 00:20:13.458 "data_size": 65536 00:20:13.458 }, 00:20:13.458 { 00:20:13.458 "name": "BaseBdev3", 00:20:13.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.458 "is_configured": false, 00:20:13.458 "data_offset": 0, 00:20:13.458 "data_size": 0 00:20:13.458 } 00:20:13.458 ] 00:20:13.458 }' 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.458 21:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.026 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:14.285 [2024-07-15 21:33:47.547405] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.285 [2024-07-15 21:33:47.547500] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:20:14.285 [2024-07-15 21:33:47.547521] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:14.285 [2024-07-15 21:33:47.547659] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005860 00:20:14.285 [2024-07-15 21:33:47.547995] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:20:14.285 [2024-07-15 21:33:47.548037] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:20:14.285 [2024-07-15 21:33:47.548283] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.285 BaseBdev3 00:20:14.285 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:14.285 21:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:14.285 21:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:14.285 21:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:14.285 21:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:14.285 21:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:14.285 21:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.544 21:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:14.803 [ 00:20:14.803 { 00:20:14.803 "name": "BaseBdev3", 00:20:14.803 "aliases": [ 00:20:14.803 "8729cdf0-5724-41c8-a9d3-7ce5d0501872" 00:20:14.803 ], 00:20:14.803 "product_name": "Malloc disk", 00:20:14.803 "block_size": 512, 00:20:14.803 "num_blocks": 65536, 00:20:14.803 "uuid": "8729cdf0-5724-41c8-a9d3-7ce5d0501872", 00:20:14.803 "assigned_rate_limits": { 00:20:14.803 "rw_ios_per_sec": 0, 00:20:14.803 "rw_mbytes_per_sec": 0, 00:20:14.803 "r_mbytes_per_sec": 0, 00:20:14.803 "w_mbytes_per_sec": 0 00:20:14.803 }, 00:20:14.803 "claimed": true, 00:20:14.803 "claim_type": "exclusive_write", 00:20:14.803 "zoned": false, 00:20:14.803 "supported_io_types": { 00:20:14.803 "read": true, 00:20:14.803 "write": true, 00:20:14.803 "unmap": true, 00:20:14.803 "flush": true, 00:20:14.803 "reset": true, 00:20:14.803 "nvme_admin": false, 00:20:14.803 "nvme_io": false, 00:20:14.803 "nvme_io_md": false, 00:20:14.803 "write_zeroes": true, 00:20:14.803 "zcopy": true, 00:20:14.803 "get_zone_info": false, 00:20:14.803 "zone_management": false, 00:20:14.803 "zone_append": false, 00:20:14.803 "compare": false, 00:20:14.803 "compare_and_write": false, 00:20:14.803 "abort": true, 00:20:14.803 "seek_hole": false, 00:20:14.803 "seek_data": false, 00:20:14.803 "copy": true, 00:20:14.803 "nvme_iov_md": false 00:20:14.803 }, 00:20:14.803 "memory_domains": [ 00:20:14.803 { 00:20:14.803 "dma_device_id": "system", 00:20:14.803 "dma_device_type": 1 00:20:14.803 }, 00:20:14.803 { 00:20:14.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.803 "dma_device_type": 2 00:20:14.803 } 00:20:14.803 ], 00:20:14.803 "driver_specific": {} 00:20:14.803 } 00:20:14.803 ] 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 
-- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.803 21:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.803 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.803 "name": "Existed_Raid", 00:20:14.803 "uuid": "b5183fac-04b5-424e-a2d0-fd2dff2fba96", 00:20:14.803 "strip_size_kb": 64, 00:20:14.803 "state": "online", 00:20:14.803 "raid_level": "concat", 00:20:14.803 "superblock": false, 00:20:14.803 "num_base_bdevs": 3, 00:20:14.803 "num_base_bdevs_discovered": 3, 00:20:14.803 "num_base_bdevs_operational": 3, 00:20:14.803 "base_bdevs_list": [ 00:20:14.803 { 00:20:14.803 "name": "BaseBdev1", 00:20:14.803 "uuid": "76c4a6d7-50f3-4024-abe5-734d409585f8", 00:20:14.803 "is_configured": true, 00:20:14.803 "data_offset": 0, 00:20:14.803 "data_size": 65536 00:20:14.803 }, 00:20:14.803 { 00:20:14.803 "name": "BaseBdev2", 00:20:14.803 "uuid": "8e115226-0342-498f-a2b7-6909ac2a8100", 00:20:14.803 "is_configured": true, 00:20:14.803 "data_offset": 0, 00:20:14.803 "data_size": 65536 00:20:14.803 }, 00:20:14.803 { 00:20:14.803 "name": "BaseBdev3", 00:20:14.803 "uuid": "8729cdf0-5724-41c8-a9d3-7ce5d0501872", 00:20:14.803 "is_configured": true, 00:20:14.803 "data_offset": 0, 00:20:14.803 "data_size": 65536 00:20:14.803 } 00:20:14.803 ] 00:20:14.803 }' 00:20:14.803 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.803 21:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.739 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:15.739 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:15.739 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:15.739 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:15.739 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:15.739 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:15.739 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:15.739 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:15.740 [2024-07-15 21:33:48.917248] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.740 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:15.740 "name": "Existed_Raid", 00:20:15.740 "aliases": [ 00:20:15.740 "b5183fac-04b5-424e-a2d0-fd2dff2fba96" 00:20:15.740 ], 00:20:15.740 "product_name": "Raid Volume", 00:20:15.740 "block_size": 512, 00:20:15.740 "num_blocks": 196608, 00:20:15.740 "uuid": "b5183fac-04b5-424e-a2d0-fd2dff2fba96", 00:20:15.740 "assigned_rate_limits": { 00:20:15.740 "rw_ios_per_sec": 0, 00:20:15.740 "rw_mbytes_per_sec": 0, 00:20:15.740 "r_mbytes_per_sec": 0, 00:20:15.740 "w_mbytes_per_sec": 0 00:20:15.740 }, 00:20:15.740 "claimed": false, 00:20:15.740 "zoned": false, 00:20:15.740 "supported_io_types": { 00:20:15.740 "read": true, 00:20:15.740 "write": true, 00:20:15.740 "unmap": true, 00:20:15.740 "flush": true, 00:20:15.740 "reset": true, 00:20:15.740 "nvme_admin": false, 00:20:15.740 "nvme_io": false, 00:20:15.740 "nvme_io_md": false, 00:20:15.740 "write_zeroes": true, 00:20:15.740 "zcopy": false, 00:20:15.740 "get_zone_info": false, 00:20:15.740 "zone_management": false, 00:20:15.740 "zone_append": false, 00:20:15.740 "compare": false, 00:20:15.740 "compare_and_write": false, 00:20:15.740 "abort": false, 00:20:15.740 "seek_hole": false, 00:20:15.740 "seek_data": false, 00:20:15.740 "copy": false, 00:20:15.740 "nvme_iov_md": false 00:20:15.740 }, 00:20:15.740 "memory_domains": [ 00:20:15.740 { 00:20:15.740 "dma_device_id": "system", 00:20:15.740 "dma_device_type": 1 00:20:15.740 }, 00:20:15.740 { 00:20:15.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.740 "dma_device_type": 2 00:20:15.740 }, 00:20:15.740 { 00:20:15.740 "dma_device_id": "system", 00:20:15.740 "dma_device_type": 1 00:20:15.740 }, 00:20:15.740 { 00:20:15.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.740 "dma_device_type": 2 00:20:15.740 }, 00:20:15.740 { 00:20:15.740 "dma_device_id": "system", 00:20:15.740 "dma_device_type": 1 00:20:15.740 }, 00:20:15.740 { 00:20:15.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.740 "dma_device_type": 2 00:20:15.740 } 00:20:15.740 ], 00:20:15.740 "driver_specific": { 00:20:15.740 "raid": { 00:20:15.740 "uuid": "b5183fac-04b5-424e-a2d0-fd2dff2fba96", 00:20:15.740 "strip_size_kb": 64, 00:20:15.740 "state": "online", 00:20:15.740 "raid_level": "concat", 00:20:15.740 "superblock": false, 00:20:15.740 "num_base_bdevs": 3, 00:20:15.740 "num_base_bdevs_discovered": 3, 00:20:15.740 "num_base_bdevs_operational": 3, 00:20:15.740 "base_bdevs_list": [ 00:20:15.740 { 00:20:15.740 "name": "BaseBdev1", 00:20:15.740 "uuid": "76c4a6d7-50f3-4024-abe5-734d409585f8", 00:20:15.740 "is_configured": true, 00:20:15.740 "data_offset": 0, 00:20:15.740 "data_size": 65536 00:20:15.740 }, 00:20:15.740 { 00:20:15.740 "name": "BaseBdev2", 00:20:15.740 "uuid": "8e115226-0342-498f-a2b7-6909ac2a8100", 00:20:15.740 "is_configured": true, 00:20:15.740 "data_offset": 0, 00:20:15.740 "data_size": 65536 00:20:15.740 }, 00:20:15.740 { 00:20:15.740 "name": "BaseBdev3", 00:20:15.740 "uuid": "8729cdf0-5724-41c8-a9d3-7ce5d0501872", 00:20:15.740 "is_configured": true, 00:20:15.740 "data_offset": 0, 00:20:15.740 "data_size": 65536 00:20:15.740 } 00:20:15.740 ] 00:20:15.740 } 00:20:15.740 } 00:20:15.740 }' 
00:20:15.740 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:15.740 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:15.740 BaseBdev2 00:20:15.740 BaseBdev3' 00:20:15.740 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:15.740 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:15.740 21:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:16.002 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:16.002 "name": "BaseBdev1", 00:20:16.002 "aliases": [ 00:20:16.002 "76c4a6d7-50f3-4024-abe5-734d409585f8" 00:20:16.002 ], 00:20:16.002 "product_name": "Malloc disk", 00:20:16.002 "block_size": 512, 00:20:16.002 "num_blocks": 65536, 00:20:16.002 "uuid": "76c4a6d7-50f3-4024-abe5-734d409585f8", 00:20:16.002 "assigned_rate_limits": { 00:20:16.002 "rw_ios_per_sec": 0, 00:20:16.002 "rw_mbytes_per_sec": 0, 00:20:16.002 "r_mbytes_per_sec": 0, 00:20:16.002 "w_mbytes_per_sec": 0 00:20:16.002 }, 00:20:16.002 "claimed": true, 00:20:16.002 "claim_type": "exclusive_write", 00:20:16.002 "zoned": false, 00:20:16.002 "supported_io_types": { 00:20:16.002 "read": true, 00:20:16.002 "write": true, 00:20:16.002 "unmap": true, 00:20:16.002 "flush": true, 00:20:16.002 "reset": true, 00:20:16.002 "nvme_admin": false, 00:20:16.002 "nvme_io": false, 00:20:16.002 "nvme_io_md": false, 00:20:16.002 "write_zeroes": true, 00:20:16.002 "zcopy": true, 00:20:16.002 "get_zone_info": false, 00:20:16.002 "zone_management": false, 00:20:16.002 "zone_append": false, 00:20:16.002 "compare": false, 00:20:16.002 "compare_and_write": false, 00:20:16.002 "abort": true, 00:20:16.002 "seek_hole": false, 00:20:16.002 "seek_data": false, 00:20:16.002 "copy": true, 00:20:16.002 "nvme_iov_md": false 00:20:16.002 }, 00:20:16.002 "memory_domains": [ 00:20:16.002 { 00:20:16.002 "dma_device_id": "system", 00:20:16.002 "dma_device_type": 1 00:20:16.002 }, 00:20:16.002 { 00:20:16.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.002 "dma_device_type": 2 00:20:16.002 } 00:20:16.002 ], 00:20:16.002 "driver_specific": {} 00:20:16.002 }' 00:20:16.002 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.002 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.002 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:16.002 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.002 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.002 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:16.002 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.261 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.261 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:16.261 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.261 21:33:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.261 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:16.261 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:16.261 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:16.261 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:16.519 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:16.519 "name": "BaseBdev2", 00:20:16.519 "aliases": [ 00:20:16.519 "8e115226-0342-498f-a2b7-6909ac2a8100" 00:20:16.519 ], 00:20:16.519 "product_name": "Malloc disk", 00:20:16.519 "block_size": 512, 00:20:16.519 "num_blocks": 65536, 00:20:16.519 "uuid": "8e115226-0342-498f-a2b7-6909ac2a8100", 00:20:16.519 "assigned_rate_limits": { 00:20:16.519 "rw_ios_per_sec": 0, 00:20:16.519 "rw_mbytes_per_sec": 0, 00:20:16.519 "r_mbytes_per_sec": 0, 00:20:16.519 "w_mbytes_per_sec": 0 00:20:16.519 }, 00:20:16.519 "claimed": true, 00:20:16.519 "claim_type": "exclusive_write", 00:20:16.519 "zoned": false, 00:20:16.519 "supported_io_types": { 00:20:16.519 "read": true, 00:20:16.519 "write": true, 00:20:16.519 "unmap": true, 00:20:16.519 "flush": true, 00:20:16.519 "reset": true, 00:20:16.519 "nvme_admin": false, 00:20:16.519 "nvme_io": false, 00:20:16.519 "nvme_io_md": false, 00:20:16.519 "write_zeroes": true, 00:20:16.519 "zcopy": true, 00:20:16.519 "get_zone_info": false, 00:20:16.519 "zone_management": false, 00:20:16.519 "zone_append": false, 00:20:16.519 "compare": false, 00:20:16.519 "compare_and_write": false, 00:20:16.519 "abort": true, 00:20:16.519 "seek_hole": false, 00:20:16.519 "seek_data": false, 00:20:16.519 "copy": true, 00:20:16.519 "nvme_iov_md": false 00:20:16.519 }, 00:20:16.519 "memory_domains": [ 00:20:16.519 { 00:20:16.519 "dma_device_id": "system", 00:20:16.519 "dma_device_type": 1 00:20:16.519 }, 00:20:16.519 { 00:20:16.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.519 "dma_device_type": 2 00:20:16.519 } 00:20:16.519 ], 00:20:16.519 "driver_specific": {} 00:20:16.519 }' 00:20:16.519 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.519 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.519 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:16.519 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.519 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.777 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:16.777 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.777 21:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.777 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:16.777 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.777 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.777 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:16.777 21:33:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:16.777 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:16.777 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:17.035 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:17.035 "name": "BaseBdev3", 00:20:17.035 "aliases": [ 00:20:17.035 "8729cdf0-5724-41c8-a9d3-7ce5d0501872" 00:20:17.035 ], 00:20:17.035 "product_name": "Malloc disk", 00:20:17.035 "block_size": 512, 00:20:17.035 "num_blocks": 65536, 00:20:17.035 "uuid": "8729cdf0-5724-41c8-a9d3-7ce5d0501872", 00:20:17.035 "assigned_rate_limits": { 00:20:17.035 "rw_ios_per_sec": 0, 00:20:17.035 "rw_mbytes_per_sec": 0, 00:20:17.035 "r_mbytes_per_sec": 0, 00:20:17.035 "w_mbytes_per_sec": 0 00:20:17.035 }, 00:20:17.035 "claimed": true, 00:20:17.035 "claim_type": "exclusive_write", 00:20:17.035 "zoned": false, 00:20:17.035 "supported_io_types": { 00:20:17.035 "read": true, 00:20:17.035 "write": true, 00:20:17.035 "unmap": true, 00:20:17.035 "flush": true, 00:20:17.035 "reset": true, 00:20:17.035 "nvme_admin": false, 00:20:17.035 "nvme_io": false, 00:20:17.035 "nvme_io_md": false, 00:20:17.035 "write_zeroes": true, 00:20:17.035 "zcopy": true, 00:20:17.035 "get_zone_info": false, 00:20:17.035 "zone_management": false, 00:20:17.035 "zone_append": false, 00:20:17.035 "compare": false, 00:20:17.035 "compare_and_write": false, 00:20:17.035 "abort": true, 00:20:17.035 "seek_hole": false, 00:20:17.035 "seek_data": false, 00:20:17.035 "copy": true, 00:20:17.035 "nvme_iov_md": false 00:20:17.035 }, 00:20:17.035 "memory_domains": [ 00:20:17.035 { 00:20:17.035 "dma_device_id": "system", 00:20:17.035 "dma_device_type": 1 00:20:17.035 }, 00:20:17.035 { 00:20:17.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.035 "dma_device_type": 2 00:20:17.035 } 00:20:17.035 ], 00:20:17.035 "driver_specific": {} 00:20:17.035 }' 00:20:17.035 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.035 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.035 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:17.035 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.294 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.294 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:17.294 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:17.294 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:17.294 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:17.294 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:17.294 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:17.553 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:17.553 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:17.553 [2024-07-15 
21:33:50.857608] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:17.553 [2024-07-15 21:33:50.857709] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.553 [2024-07-15 21:33:50.857782] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.812 21:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.812 21:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.812 "name": "Existed_Raid", 00:20:17.812 "uuid": "b5183fac-04b5-424e-a2d0-fd2dff2fba96", 00:20:17.812 "strip_size_kb": 64, 00:20:17.812 "state": "offline", 00:20:17.812 "raid_level": "concat", 00:20:17.812 "superblock": false, 00:20:17.812 "num_base_bdevs": 3, 00:20:17.812 "num_base_bdevs_discovered": 2, 00:20:17.812 "num_base_bdevs_operational": 2, 00:20:17.812 "base_bdevs_list": [ 00:20:17.812 { 00:20:17.812 "name": null, 00:20:17.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.812 "is_configured": false, 00:20:17.812 "data_offset": 0, 00:20:17.812 "data_size": 65536 00:20:17.812 }, 00:20:17.812 { 00:20:17.812 "name": "BaseBdev2", 00:20:17.812 "uuid": "8e115226-0342-498f-a2b7-6909ac2a8100", 00:20:17.812 "is_configured": true, 00:20:17.812 "data_offset": 0, 00:20:17.812 "data_size": 65536 00:20:17.812 }, 00:20:17.812 { 00:20:17.812 "name": "BaseBdev3", 00:20:17.812 "uuid": "8729cdf0-5724-41c8-a9d3-7ce5d0501872", 00:20:17.812 "is_configured": true, 00:20:17.812 "data_offset": 0, 00:20:17.812 "data_size": 65536 00:20:17.812 } 00:20:17.812 ] 00:20:17.812 }' 00:20:17.812 21:33:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.812 21:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.750 21:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:18.750 21:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:18.750 21:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.750 21:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:18.750 21:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:18.750 21:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:18.750 21:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:18.750 [2024-07-15 21:33:52.120366] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:19.008 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:19.008 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:19.008 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.008 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:19.267 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:19.267 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.267 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:19.267 [2024-07-15 21:33:52.577889] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:19.267 [2024-07-15 21:33:52.578053] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:19.525 21:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:19.782 BaseBdev2 00:20:19.782 21:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:19.782 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:19.782 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:19.782 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:19.782 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:19.782 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:19.782 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:20.041 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:20.299 [ 00:20:20.299 { 00:20:20.299 "name": "BaseBdev2", 00:20:20.299 "aliases": [ 00:20:20.299 "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60" 00:20:20.299 ], 00:20:20.299 "product_name": "Malloc disk", 00:20:20.299 "block_size": 512, 00:20:20.299 "num_blocks": 65536, 00:20:20.299 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:20.299 "assigned_rate_limits": { 00:20:20.299 "rw_ios_per_sec": 0, 00:20:20.299 "rw_mbytes_per_sec": 0, 00:20:20.299 "r_mbytes_per_sec": 0, 00:20:20.299 "w_mbytes_per_sec": 0 00:20:20.299 }, 00:20:20.299 "claimed": false, 00:20:20.299 "zoned": false, 00:20:20.299 "supported_io_types": { 00:20:20.299 "read": true, 00:20:20.299 "write": true, 00:20:20.299 "unmap": true, 00:20:20.299 "flush": true, 00:20:20.299 "reset": true, 00:20:20.299 "nvme_admin": false, 00:20:20.299 "nvme_io": false, 00:20:20.299 "nvme_io_md": false, 00:20:20.299 "write_zeroes": true, 00:20:20.299 "zcopy": true, 00:20:20.299 "get_zone_info": false, 00:20:20.299 "zone_management": false, 00:20:20.299 "zone_append": false, 00:20:20.299 "compare": false, 00:20:20.299 "compare_and_write": false, 00:20:20.299 "abort": true, 00:20:20.299 "seek_hole": false, 00:20:20.299 "seek_data": false, 00:20:20.299 "copy": true, 00:20:20.299 "nvme_iov_md": false 00:20:20.299 }, 00:20:20.299 "memory_domains": [ 00:20:20.299 { 00:20:20.299 "dma_device_id": "system", 00:20:20.299 "dma_device_type": 1 00:20:20.299 }, 00:20:20.299 { 00:20:20.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.299 "dma_device_type": 2 00:20:20.299 } 00:20:20.299 ], 00:20:20.299 "driver_specific": {} 00:20:20.299 } 00:20:20.299 ] 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:20.299 BaseBdev3 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:20.299 21:33:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:20.299 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:20.557 21:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:20.814 [ 00:20:20.815 { 00:20:20.815 "name": "BaseBdev3", 00:20:20.815 "aliases": [ 00:20:20.815 "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3" 00:20:20.815 ], 00:20:20.815 "product_name": "Malloc disk", 00:20:20.815 "block_size": 512, 00:20:20.815 "num_blocks": 65536, 00:20:20.815 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:20.815 "assigned_rate_limits": { 00:20:20.815 "rw_ios_per_sec": 0, 00:20:20.815 "rw_mbytes_per_sec": 0, 00:20:20.815 "r_mbytes_per_sec": 0, 00:20:20.815 "w_mbytes_per_sec": 0 00:20:20.815 }, 00:20:20.815 "claimed": false, 00:20:20.815 "zoned": false, 00:20:20.815 "supported_io_types": { 00:20:20.815 "read": true, 00:20:20.815 "write": true, 00:20:20.815 "unmap": true, 00:20:20.815 "flush": true, 00:20:20.815 "reset": true, 00:20:20.815 "nvme_admin": false, 00:20:20.815 "nvme_io": false, 00:20:20.815 "nvme_io_md": false, 00:20:20.815 "write_zeroes": true, 00:20:20.815 "zcopy": true, 00:20:20.815 "get_zone_info": false, 00:20:20.815 "zone_management": false, 00:20:20.815 "zone_append": false, 00:20:20.815 "compare": false, 00:20:20.815 "compare_and_write": false, 00:20:20.815 "abort": true, 00:20:20.815 "seek_hole": false, 00:20:20.815 "seek_data": false, 00:20:20.815 "copy": true, 00:20:20.815 "nvme_iov_md": false 00:20:20.815 }, 00:20:20.815 "memory_domains": [ 00:20:20.815 { 00:20:20.815 "dma_device_id": "system", 00:20:20.815 "dma_device_type": 1 00:20:20.815 }, 00:20:20.815 { 00:20:20.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.815 "dma_device_type": 2 00:20:20.815 } 00:20:20.815 ], 00:20:20.815 "driver_specific": {} 00:20:20.815 } 00:20:20.815 ] 00:20:20.815 21:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:20.815 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:20.815 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:20.815 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:20.815 [2024-07-15 21:33:54.166873] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:20.815 [2024-07-15 21:33:54.167045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:20.815 [2024-07-15 21:33:54.167107] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.815 [2024-07-15 21:33:54.169044] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.073 "name": "Existed_Raid", 00:20:21.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.073 "strip_size_kb": 64, 00:20:21.073 "state": "configuring", 00:20:21.073 "raid_level": "concat", 00:20:21.073 "superblock": false, 00:20:21.073 "num_base_bdevs": 3, 00:20:21.073 "num_base_bdevs_discovered": 2, 00:20:21.073 "num_base_bdevs_operational": 3, 00:20:21.073 "base_bdevs_list": [ 00:20:21.073 { 00:20:21.073 "name": "BaseBdev1", 00:20:21.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.073 "is_configured": false, 00:20:21.073 "data_offset": 0, 00:20:21.073 "data_size": 0 00:20:21.073 }, 00:20:21.073 { 00:20:21.073 "name": "BaseBdev2", 00:20:21.073 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:21.073 "is_configured": true, 00:20:21.073 "data_offset": 0, 00:20:21.073 "data_size": 65536 00:20:21.073 }, 00:20:21.073 { 00:20:21.073 "name": "BaseBdev3", 00:20:21.073 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:21.073 "is_configured": true, 00:20:21.073 "data_offset": 0, 00:20:21.073 "data_size": 65536 00:20:21.073 } 00:20:21.073 ] 00:20:21.073 }' 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.073 21:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.639 21:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:21.897 [2024-07-15 21:33:55.117196] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.897 21:33:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.897 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.156 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.156 "name": "Existed_Raid", 00:20:22.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.156 "strip_size_kb": 64, 00:20:22.156 "state": "configuring", 00:20:22.156 "raid_level": "concat", 00:20:22.156 "superblock": false, 00:20:22.156 "num_base_bdevs": 3, 00:20:22.156 "num_base_bdevs_discovered": 1, 00:20:22.156 "num_base_bdevs_operational": 3, 00:20:22.156 "base_bdevs_list": [ 00:20:22.156 { 00:20:22.156 "name": "BaseBdev1", 00:20:22.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.156 "is_configured": false, 00:20:22.156 "data_offset": 0, 00:20:22.156 "data_size": 0 00:20:22.156 }, 00:20:22.156 { 00:20:22.156 "name": null, 00:20:22.156 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:22.156 "is_configured": false, 00:20:22.156 "data_offset": 0, 00:20:22.156 "data_size": 65536 00:20:22.156 }, 00:20:22.156 { 00:20:22.156 "name": "BaseBdev3", 00:20:22.156 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:22.156 "is_configured": true, 00:20:22.156 "data_offset": 0, 00:20:22.156 "data_size": 65536 00:20:22.156 } 00:20:22.156 ] 00:20:22.156 }' 00:20:22.156 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.156 21:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.773 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.773 21:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:22.773 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:22.773 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:23.030 [2024-07-15 21:33:56.312542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.030 BaseBdev1 00:20:23.030 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:23.030 21:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:23.031 21:33:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:23.031 21:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:23.031 21:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:23.031 21:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:23.031 21:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:23.289 21:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:23.547 [ 00:20:23.547 { 00:20:23.547 "name": "BaseBdev1", 00:20:23.547 "aliases": [ 00:20:23.547 "03cef819-ae07-46f9-9941-5542e364d854" 00:20:23.547 ], 00:20:23.547 "product_name": "Malloc disk", 00:20:23.547 "block_size": 512, 00:20:23.547 "num_blocks": 65536, 00:20:23.547 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:23.547 "assigned_rate_limits": { 00:20:23.547 "rw_ios_per_sec": 0, 00:20:23.547 "rw_mbytes_per_sec": 0, 00:20:23.547 "r_mbytes_per_sec": 0, 00:20:23.547 "w_mbytes_per_sec": 0 00:20:23.547 }, 00:20:23.547 "claimed": true, 00:20:23.547 "claim_type": "exclusive_write", 00:20:23.547 "zoned": false, 00:20:23.547 "supported_io_types": { 00:20:23.547 "read": true, 00:20:23.547 "write": true, 00:20:23.547 "unmap": true, 00:20:23.547 "flush": true, 00:20:23.547 "reset": true, 00:20:23.547 "nvme_admin": false, 00:20:23.547 "nvme_io": false, 00:20:23.547 "nvme_io_md": false, 00:20:23.547 "write_zeroes": true, 00:20:23.547 "zcopy": true, 00:20:23.547 "get_zone_info": false, 00:20:23.547 "zone_management": false, 00:20:23.547 "zone_append": false, 00:20:23.547 "compare": false, 00:20:23.547 "compare_and_write": false, 00:20:23.547 "abort": true, 00:20:23.547 "seek_hole": false, 00:20:23.547 "seek_data": false, 00:20:23.547 "copy": true, 00:20:23.547 "nvme_iov_md": false 00:20:23.547 }, 00:20:23.547 "memory_domains": [ 00:20:23.547 { 00:20:23.547 "dma_device_id": "system", 00:20:23.547 "dma_device_type": 1 00:20:23.547 }, 00:20:23.547 { 00:20:23.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.547 "dma_device_type": 2 00:20:23.547 } 00:20:23.547 ], 00:20:23.547 "driver_specific": {} 00:20:23.547 } 00:20:23.547 ] 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.547 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.547 "name": "Existed_Raid", 00:20:23.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.547 "strip_size_kb": 64, 00:20:23.547 "state": "configuring", 00:20:23.547 "raid_level": "concat", 00:20:23.547 "superblock": false, 00:20:23.547 "num_base_bdevs": 3, 00:20:23.547 "num_base_bdevs_discovered": 2, 00:20:23.547 "num_base_bdevs_operational": 3, 00:20:23.547 "base_bdevs_list": [ 00:20:23.547 { 00:20:23.547 "name": "BaseBdev1", 00:20:23.547 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:23.547 "is_configured": true, 00:20:23.547 "data_offset": 0, 00:20:23.547 "data_size": 65536 00:20:23.547 }, 00:20:23.547 { 00:20:23.548 "name": null, 00:20:23.548 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:23.548 "is_configured": false, 00:20:23.548 "data_offset": 0, 00:20:23.548 "data_size": 65536 00:20:23.548 }, 00:20:23.548 { 00:20:23.548 "name": "BaseBdev3", 00:20:23.548 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:23.548 "is_configured": true, 00:20:23.548 "data_offset": 0, 00:20:23.548 "data_size": 65536 00:20:23.548 } 00:20:23.548 ] 00:20:23.548 }' 00:20:23.548 21:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.548 21:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.114 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.114 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:24.372 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:24.372 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:24.631 [2024-07-15 21:33:57.857915] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.631 21:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.889 21:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.889 "name": "Existed_Raid", 00:20:24.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.889 "strip_size_kb": 64, 00:20:24.889 "state": "configuring", 00:20:24.889 "raid_level": "concat", 00:20:24.889 "superblock": false, 00:20:24.889 "num_base_bdevs": 3, 00:20:24.889 "num_base_bdevs_discovered": 1, 00:20:24.889 "num_base_bdevs_operational": 3, 00:20:24.889 "base_bdevs_list": [ 00:20:24.889 { 00:20:24.889 "name": "BaseBdev1", 00:20:24.889 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:24.889 "is_configured": true, 00:20:24.889 "data_offset": 0, 00:20:24.889 "data_size": 65536 00:20:24.889 }, 00:20:24.889 { 00:20:24.889 "name": null, 00:20:24.889 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:24.889 "is_configured": false, 00:20:24.889 "data_offset": 0, 00:20:24.889 "data_size": 65536 00:20:24.889 }, 00:20:24.889 { 00:20:24.889 "name": null, 00:20:24.889 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:24.889 "is_configured": false, 00:20:24.889 "data_offset": 0, 00:20:24.889 "data_size": 65536 00:20:24.889 } 00:20:24.889 ] 00:20:24.889 }' 00:20:24.889 21:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.889 21:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.479 21:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.479 21:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:25.737 21:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:25.737 21:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:25.737 [2024-07-15 21:33:59.079980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.737 21:33:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.737 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.995 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.995 "name": "Existed_Raid", 00:20:25.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.995 "strip_size_kb": 64, 00:20:25.995 "state": "configuring", 00:20:25.995 "raid_level": "concat", 00:20:25.995 "superblock": false, 00:20:25.995 "num_base_bdevs": 3, 00:20:25.995 "num_base_bdevs_discovered": 2, 00:20:25.995 "num_base_bdevs_operational": 3, 00:20:25.995 "base_bdevs_list": [ 00:20:25.995 { 00:20:25.995 "name": "BaseBdev1", 00:20:25.995 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:25.995 "is_configured": true, 00:20:25.995 "data_offset": 0, 00:20:25.995 "data_size": 65536 00:20:25.995 }, 00:20:25.995 { 00:20:25.995 "name": null, 00:20:25.995 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:25.995 "is_configured": false, 00:20:25.995 "data_offset": 0, 00:20:25.995 "data_size": 65536 00:20:25.995 }, 00:20:25.995 { 00:20:25.995 "name": "BaseBdev3", 00:20:25.995 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:25.995 "is_configured": true, 00:20:25.995 "data_offset": 0, 00:20:25.995 "data_size": 65536 00:20:25.995 } 00:20:25.995 ] 00:20:25.995 }' 00:20:25.995 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.995 21:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.563 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.563 21:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:26.822 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:26.822 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:27.080 [2024-07-15 21:34:00.259938] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.080 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.337 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.337 "name": "Existed_Raid", 00:20:27.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.337 "strip_size_kb": 64, 00:20:27.337 "state": "configuring", 00:20:27.337 "raid_level": "concat", 00:20:27.337 "superblock": false, 00:20:27.337 "num_base_bdevs": 3, 00:20:27.337 "num_base_bdevs_discovered": 1, 00:20:27.337 "num_base_bdevs_operational": 3, 00:20:27.337 "base_bdevs_list": [ 00:20:27.337 { 00:20:27.337 "name": null, 00:20:27.337 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:27.337 "is_configured": false, 00:20:27.337 "data_offset": 0, 00:20:27.337 "data_size": 65536 00:20:27.337 }, 00:20:27.337 { 00:20:27.337 "name": null, 00:20:27.337 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:27.337 "is_configured": false, 00:20:27.337 "data_offset": 0, 00:20:27.337 "data_size": 65536 00:20:27.337 }, 00:20:27.337 { 00:20:27.337 "name": "BaseBdev3", 00:20:27.337 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:27.337 "is_configured": true, 00:20:27.337 "data_offset": 0, 00:20:27.337 "data_size": 65536 00:20:27.337 } 00:20:27.337 ] 00:20:27.337 }' 00:20:27.337 21:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.337 21:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.902 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.902 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:28.160 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:28.160 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:28.417 [2024-07-15 21:34:01.545663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:28.418 "name": "Existed_Raid", 00:20:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.418 "strip_size_kb": 64, 00:20:28.418 "state": "configuring", 00:20:28.418 "raid_level": "concat", 00:20:28.418 "superblock": false, 00:20:28.418 "num_base_bdevs": 3, 00:20:28.418 "num_base_bdevs_discovered": 2, 00:20:28.418 "num_base_bdevs_operational": 3, 00:20:28.418 "base_bdevs_list": [ 00:20:28.418 { 00:20:28.418 "name": null, 00:20:28.418 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:28.418 "is_configured": false, 00:20:28.418 "data_offset": 0, 00:20:28.418 "data_size": 65536 00:20:28.418 }, 00:20:28.418 { 00:20:28.418 "name": "BaseBdev2", 00:20:28.418 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:28.418 "is_configured": true, 00:20:28.418 "data_offset": 0, 00:20:28.418 "data_size": 65536 00:20:28.418 }, 00:20:28.418 { 00:20:28.418 "name": "BaseBdev3", 00:20:28.418 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:28.418 "is_configured": true, 00:20:28.418 "data_offset": 0, 00:20:28.418 "data_size": 65536 00:20:28.418 } 00:20:28.418 ] 00:20:28.418 }' 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:28.418 21:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.000 21:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.000 21:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:29.258 21:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:29.258 21:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.258 21:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:29.518 21:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 03cef819-ae07-46f9-9941-5542e364d854 00:20:29.778 [2024-07-15 21:34:02.970120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:29.778 [2024-07-15 21:34:02.970247] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:29.778 [2024-07-15 21:34:02.970266] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:29.778 [2024-07-15 21:34:02.970419] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:29.778 [2024-07-15 21:34:02.970705] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:29.778 [2024-07-15 21:34:02.970747] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:20:29.778 [2024-07-15 21:34:02.970992] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.778 NewBaseBdev 00:20:29.778 21:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:29.778 21:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:29.778 21:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:29.778 21:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:29.778 21:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:29.778 21:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:29.778 21:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:30.037 21:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:30.037 [ 00:20:30.037 { 00:20:30.037 "name": "NewBaseBdev", 00:20:30.038 "aliases": [ 00:20:30.038 "03cef819-ae07-46f9-9941-5542e364d854" 00:20:30.038 ], 00:20:30.038 "product_name": "Malloc disk", 00:20:30.038 "block_size": 512, 00:20:30.038 "num_blocks": 65536, 00:20:30.038 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:30.038 "assigned_rate_limits": { 00:20:30.038 "rw_ios_per_sec": 0, 00:20:30.038 "rw_mbytes_per_sec": 0, 00:20:30.038 "r_mbytes_per_sec": 0, 00:20:30.038 "w_mbytes_per_sec": 0 00:20:30.038 }, 00:20:30.038 "claimed": true, 00:20:30.038 "claim_type": "exclusive_write", 00:20:30.038 "zoned": false, 00:20:30.038 "supported_io_types": { 00:20:30.038 "read": true, 00:20:30.038 "write": true, 00:20:30.038 "unmap": true, 00:20:30.038 "flush": true, 00:20:30.038 "reset": true, 00:20:30.038 "nvme_admin": false, 00:20:30.038 "nvme_io": false, 00:20:30.038 "nvme_io_md": false, 00:20:30.038 "write_zeroes": true, 00:20:30.038 "zcopy": true, 00:20:30.038 "get_zone_info": false, 00:20:30.038 "zone_management": false, 00:20:30.038 "zone_append": false, 00:20:30.038 "compare": false, 00:20:30.038 "compare_and_write": false, 00:20:30.038 "abort": true, 00:20:30.038 "seek_hole": false, 00:20:30.038 "seek_data": false, 00:20:30.038 "copy": true, 00:20:30.038 "nvme_iov_md": false 00:20:30.038 }, 00:20:30.038 "memory_domains": [ 00:20:30.038 { 00:20:30.038 "dma_device_id": "system", 00:20:30.038 "dma_device_type": 1 00:20:30.038 }, 00:20:30.038 { 00:20:30.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.038 "dma_device_type": 2 00:20:30.038 } 00:20:30.038 ], 00:20:30.038 "driver_specific": {} 00:20:30.038 } 00:20:30.038 ] 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.038 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.298 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.298 "name": "Existed_Raid", 00:20:30.298 "uuid": "fe46d23d-5d9a-4efe-bd33-3ab885d09931", 00:20:30.298 "strip_size_kb": 64, 00:20:30.298 "state": "online", 00:20:30.298 "raid_level": "concat", 00:20:30.298 "superblock": false, 00:20:30.298 "num_base_bdevs": 3, 00:20:30.298 "num_base_bdevs_discovered": 3, 00:20:30.298 "num_base_bdevs_operational": 3, 00:20:30.298 "base_bdevs_list": [ 00:20:30.298 { 00:20:30.298 "name": "NewBaseBdev", 00:20:30.298 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:30.298 "is_configured": true, 00:20:30.298 "data_offset": 0, 00:20:30.298 "data_size": 65536 00:20:30.298 }, 00:20:30.298 { 00:20:30.298 "name": "BaseBdev2", 00:20:30.298 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:30.298 "is_configured": true, 00:20:30.298 "data_offset": 0, 00:20:30.298 "data_size": 65536 00:20:30.298 }, 00:20:30.298 { 00:20:30.298 "name": "BaseBdev3", 00:20:30.298 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:30.298 "is_configured": true, 00:20:30.298 "data_offset": 0, 00:20:30.298 "data_size": 65536 00:20:30.298 } 00:20:30.298 ] 00:20:30.298 }' 00:20:30.298 21:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.298 21:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.867 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:30.867 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:30.867 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:30.867 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:30.867 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:30.867 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:30.867 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:30.867 21:34:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:31.127 [2024-07-15 21:34:04.316145] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.127 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:31.127 "name": "Existed_Raid", 00:20:31.127 "aliases": [ 00:20:31.127 "fe46d23d-5d9a-4efe-bd33-3ab885d09931" 00:20:31.127 ], 00:20:31.127 "product_name": "Raid Volume", 00:20:31.127 "block_size": 512, 00:20:31.127 "num_blocks": 196608, 00:20:31.127 "uuid": "fe46d23d-5d9a-4efe-bd33-3ab885d09931", 00:20:31.127 "assigned_rate_limits": { 00:20:31.127 "rw_ios_per_sec": 0, 00:20:31.127 "rw_mbytes_per_sec": 0, 00:20:31.127 "r_mbytes_per_sec": 0, 00:20:31.127 "w_mbytes_per_sec": 0 00:20:31.127 }, 00:20:31.127 "claimed": false, 00:20:31.127 "zoned": false, 00:20:31.127 "supported_io_types": { 00:20:31.127 "read": true, 00:20:31.127 "write": true, 00:20:31.127 "unmap": true, 00:20:31.127 "flush": true, 00:20:31.127 "reset": true, 00:20:31.127 "nvme_admin": false, 00:20:31.127 "nvme_io": false, 00:20:31.127 "nvme_io_md": false, 00:20:31.127 "write_zeroes": true, 00:20:31.127 "zcopy": false, 00:20:31.127 "get_zone_info": false, 00:20:31.127 "zone_management": false, 00:20:31.127 "zone_append": false, 00:20:31.127 "compare": false, 00:20:31.127 "compare_and_write": false, 00:20:31.127 "abort": false, 00:20:31.127 "seek_hole": false, 00:20:31.127 "seek_data": false, 00:20:31.127 "copy": false, 00:20:31.127 "nvme_iov_md": false 00:20:31.127 }, 00:20:31.127 "memory_domains": [ 00:20:31.127 { 00:20:31.127 "dma_device_id": "system", 00:20:31.127 "dma_device_type": 1 00:20:31.127 }, 00:20:31.127 { 00:20:31.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.127 "dma_device_type": 2 00:20:31.127 }, 00:20:31.127 { 00:20:31.127 "dma_device_id": "system", 00:20:31.127 "dma_device_type": 1 00:20:31.127 }, 00:20:31.127 { 00:20:31.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.127 "dma_device_type": 2 00:20:31.127 }, 00:20:31.127 { 00:20:31.127 "dma_device_id": "system", 00:20:31.127 "dma_device_type": 1 00:20:31.127 }, 00:20:31.127 { 00:20:31.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.127 "dma_device_type": 2 00:20:31.127 } 00:20:31.127 ], 00:20:31.127 "driver_specific": { 00:20:31.127 "raid": { 00:20:31.127 "uuid": "fe46d23d-5d9a-4efe-bd33-3ab885d09931", 00:20:31.127 "strip_size_kb": 64, 00:20:31.127 "state": "online", 00:20:31.127 "raid_level": "concat", 00:20:31.127 "superblock": false, 00:20:31.127 "num_base_bdevs": 3, 00:20:31.127 "num_base_bdevs_discovered": 3, 00:20:31.127 "num_base_bdevs_operational": 3, 00:20:31.127 "base_bdevs_list": [ 00:20:31.127 { 00:20:31.127 "name": "NewBaseBdev", 00:20:31.127 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:31.127 "is_configured": true, 00:20:31.127 "data_offset": 0, 00:20:31.127 "data_size": 65536 00:20:31.127 }, 00:20:31.127 { 00:20:31.127 "name": "BaseBdev2", 00:20:31.127 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:31.127 "is_configured": true, 00:20:31.127 "data_offset": 0, 00:20:31.127 "data_size": 65536 00:20:31.127 }, 00:20:31.127 { 00:20:31.127 "name": "BaseBdev3", 00:20:31.127 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:31.127 "is_configured": true, 00:20:31.127 "data_offset": 0, 00:20:31.127 "data_size": 65536 00:20:31.127 } 00:20:31.127 ] 00:20:31.127 } 00:20:31.127 } 00:20:31.127 }' 00:20:31.127 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.127 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:31.127 BaseBdev2 00:20:31.127 BaseBdev3' 00:20:31.127 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.127 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:31.127 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.386 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.386 "name": "NewBaseBdev", 00:20:31.386 "aliases": [ 00:20:31.386 "03cef819-ae07-46f9-9941-5542e364d854" 00:20:31.386 ], 00:20:31.386 "product_name": "Malloc disk", 00:20:31.386 "block_size": 512, 00:20:31.386 "num_blocks": 65536, 00:20:31.386 "uuid": "03cef819-ae07-46f9-9941-5542e364d854", 00:20:31.386 "assigned_rate_limits": { 00:20:31.386 "rw_ios_per_sec": 0, 00:20:31.386 "rw_mbytes_per_sec": 0, 00:20:31.386 "r_mbytes_per_sec": 0, 00:20:31.386 "w_mbytes_per_sec": 0 00:20:31.386 }, 00:20:31.386 "claimed": true, 00:20:31.386 "claim_type": "exclusive_write", 00:20:31.386 "zoned": false, 00:20:31.386 "supported_io_types": { 00:20:31.386 "read": true, 00:20:31.386 "write": true, 00:20:31.386 "unmap": true, 00:20:31.386 "flush": true, 00:20:31.386 "reset": true, 00:20:31.386 "nvme_admin": false, 00:20:31.386 "nvme_io": false, 00:20:31.386 "nvme_io_md": false, 00:20:31.386 "write_zeroes": true, 00:20:31.386 "zcopy": true, 00:20:31.386 "get_zone_info": false, 00:20:31.386 "zone_management": false, 00:20:31.386 "zone_append": false, 00:20:31.386 "compare": false, 00:20:31.386 "compare_and_write": false, 00:20:31.386 "abort": true, 00:20:31.386 "seek_hole": false, 00:20:31.386 "seek_data": false, 00:20:31.386 "copy": true, 00:20:31.386 "nvme_iov_md": false 00:20:31.386 }, 00:20:31.386 "memory_domains": [ 00:20:31.386 { 00:20:31.386 "dma_device_id": "system", 00:20:31.386 "dma_device_type": 1 00:20:31.386 }, 00:20:31.386 { 00:20:31.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.386 "dma_device_type": 2 00:20:31.386 } 00:20:31.386 ], 00:20:31.386 "driver_specific": {} 00:20:31.386 }' 00:20:31.386 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.386 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.386 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.386 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.386 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.645 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.645 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.645 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.645 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.645 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.645 21:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.904 21:34:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.904 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.905 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:31.905 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.905 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.905 "name": "BaseBdev2", 00:20:31.905 "aliases": [ 00:20:31.905 "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60" 00:20:31.905 ], 00:20:31.905 "product_name": "Malloc disk", 00:20:31.905 "block_size": 512, 00:20:31.905 "num_blocks": 65536, 00:20:31.905 "uuid": "0ea8a6be-9487-46d7-9bc3-3dfb6be28d60", 00:20:31.905 "assigned_rate_limits": { 00:20:31.905 "rw_ios_per_sec": 0, 00:20:31.905 "rw_mbytes_per_sec": 0, 00:20:31.905 "r_mbytes_per_sec": 0, 00:20:31.905 "w_mbytes_per_sec": 0 00:20:31.905 }, 00:20:31.905 "claimed": true, 00:20:31.905 "claim_type": "exclusive_write", 00:20:31.905 "zoned": false, 00:20:31.905 "supported_io_types": { 00:20:31.905 "read": true, 00:20:31.905 "write": true, 00:20:31.905 "unmap": true, 00:20:31.905 "flush": true, 00:20:31.905 "reset": true, 00:20:31.905 "nvme_admin": false, 00:20:31.905 "nvme_io": false, 00:20:31.905 "nvme_io_md": false, 00:20:31.905 "write_zeroes": true, 00:20:31.905 "zcopy": true, 00:20:31.905 "get_zone_info": false, 00:20:31.905 "zone_management": false, 00:20:31.905 "zone_append": false, 00:20:31.905 "compare": false, 00:20:31.905 "compare_and_write": false, 00:20:31.905 "abort": true, 00:20:31.905 "seek_hole": false, 00:20:31.905 "seek_data": false, 00:20:31.905 "copy": true, 00:20:31.905 "nvme_iov_md": false 00:20:31.905 }, 00:20:31.905 "memory_domains": [ 00:20:31.905 { 00:20:31.905 "dma_device_id": "system", 00:20:31.905 "dma_device_type": 1 00:20:31.905 }, 00:20:31.905 { 00:20:31.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.905 "dma_device_type": 2 00:20:31.905 } 00:20:31.905 ], 00:20:31.905 "driver_specific": {} 00:20:31.905 }' 00:20:31.905 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.905 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.164 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.164 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.164 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.164 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.164 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.164 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.164 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.164 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.423 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.424 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.424 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:32.424 21:34:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:32.424 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.683 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.683 "name": "BaseBdev3", 00:20:32.683 "aliases": [ 00:20:32.683 "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3" 00:20:32.683 ], 00:20:32.683 "product_name": "Malloc disk", 00:20:32.683 "block_size": 512, 00:20:32.683 "num_blocks": 65536, 00:20:32.683 "uuid": "813c4f9d-3ca2-49f9-a21d-90a0f9cc56e3", 00:20:32.683 "assigned_rate_limits": { 00:20:32.683 "rw_ios_per_sec": 0, 00:20:32.683 "rw_mbytes_per_sec": 0, 00:20:32.683 "r_mbytes_per_sec": 0, 00:20:32.683 "w_mbytes_per_sec": 0 00:20:32.683 }, 00:20:32.683 "claimed": true, 00:20:32.683 "claim_type": "exclusive_write", 00:20:32.683 "zoned": false, 00:20:32.683 "supported_io_types": { 00:20:32.683 "read": true, 00:20:32.683 "write": true, 00:20:32.683 "unmap": true, 00:20:32.683 "flush": true, 00:20:32.683 "reset": true, 00:20:32.683 "nvme_admin": false, 00:20:32.683 "nvme_io": false, 00:20:32.683 "nvme_io_md": false, 00:20:32.683 "write_zeroes": true, 00:20:32.683 "zcopy": true, 00:20:32.683 "get_zone_info": false, 00:20:32.683 "zone_management": false, 00:20:32.683 "zone_append": false, 00:20:32.683 "compare": false, 00:20:32.683 "compare_and_write": false, 00:20:32.683 "abort": true, 00:20:32.683 "seek_hole": false, 00:20:32.683 "seek_data": false, 00:20:32.683 "copy": true, 00:20:32.683 "nvme_iov_md": false 00:20:32.683 }, 00:20:32.683 "memory_domains": [ 00:20:32.683 { 00:20:32.683 "dma_device_id": "system", 00:20:32.683 "dma_device_type": 1 00:20:32.683 }, 00:20:32.683 { 00:20:32.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.683 "dma_device_type": 2 00:20:32.683 } 00:20:32.683 ], 00:20:32.683 "driver_specific": {} 00:20:32.683 }' 00:20:32.683 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.683 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.683 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.683 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.683 21:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.683 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.683 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.942 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.942 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.942 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.942 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.942 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.942 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:33.202 [2024-07-15 21:34:06.440229] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:33.202 [2024-07-15 
21:34:06.440325] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.202 [2024-07-15 21:34:06.440411] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.202 [2024-07-15 21:34:06.440475] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.202 [2024-07-15 21:34:06.440493] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 129112 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 129112 ']' 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 129112 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 129112 00:20:33.202 killing process with pid 129112 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 129112' 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 129112 00:20:33.202 21:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 129112 00:20:33.202 [2024-07-15 21:34:06.480011] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:33.463 [2024-07-15 21:34:06.748718] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:34.845 ************************************ 00:20:34.845 END TEST raid_state_function_test 00:20:34.845 ************************************ 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:34.845 00:20:34.845 real 0m26.583s 00:20:34.845 user 0m49.011s 00:20:34.845 sys 0m3.495s 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.845 21:34:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:34.845 21:34:07 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:20:34.845 21:34:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:34.845 21:34:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.845 21:34:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.845 ************************************ 00:20:34.845 START TEST raid_state_function_test_sb 00:20:34.845 ************************************ 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=130088 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 130088' 00:20:34.845 Process raid pid: 130088 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 130088 /var/tmp/spdk-raid.sock 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 130088 ']' 00:20:34.845 21:34:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:34.846 21:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.846 21:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:34.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:34.846 21:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.846 21:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.846 [2024-07-15 21:34:08.055488] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:20:34.846 [2024-07-15 21:34:08.055725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.846 [2024-07-15 21:34:08.214681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.105 [2024-07-15 21:34:08.458216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.364 [2024-07-15 21:34:08.684782] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.623 21:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.623 21:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:35.623 21:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:35.881 [2024-07-15 21:34:09.011771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:35.881 [2024-07-15 21:34:09.011964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:35.881 [2024-07-15 21:34:09.011996] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:35.881 [2024-07-15 21:34:09.012034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:35.881 [2024-07-15 21:34:09.012050] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:35.881 [2024-07-15 21:34:09.012072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.881 "name": "Existed_Raid", 00:20:35.881 "uuid": "f79358b6-57d6-49bd-ab38-ca36181531e0", 00:20:35.881 "strip_size_kb": 64, 00:20:35.881 "state": "configuring", 00:20:35.881 "raid_level": "concat", 00:20:35.881 "superblock": true, 00:20:35.881 "num_base_bdevs": 3, 00:20:35.881 "num_base_bdevs_discovered": 0, 00:20:35.881 "num_base_bdevs_operational": 3, 00:20:35.881 "base_bdevs_list": [ 00:20:35.881 { 00:20:35.881 "name": "BaseBdev1", 00:20:35.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.881 "is_configured": false, 00:20:35.881 "data_offset": 0, 00:20:35.881 "data_size": 0 00:20:35.881 }, 00:20:35.881 { 00:20:35.881 "name": "BaseBdev2", 00:20:35.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.881 "is_configured": false, 00:20:35.881 "data_offset": 0, 00:20:35.881 "data_size": 0 00:20:35.881 }, 00:20:35.881 { 00:20:35.881 "name": "BaseBdev3", 00:20:35.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.881 "is_configured": false, 00:20:35.881 "data_offset": 0, 00:20:35.881 "data_size": 0 00:20:35.881 } 00:20:35.881 ] 00:20:35.881 }' 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.881 21:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.447 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:36.705 [2024-07-15 21:34:09.965907] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.705 [2024-07-15 21:34:09.966053] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:36.705 21:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:36.963 [2024-07-15 21:34:10.153652] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:36.964 [2024-07-15 21:34:10.153812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:36.964 [2024-07-15 21:34:10.153844] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.964 [2024-07-15 21:34:10.153872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.964 [2024-07-15 21:34:10.153900] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.964 [2024-07-15 21:34:10.153931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:20:36.964 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:37.222 [2024-07-15 21:34:10.382658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.222 BaseBdev1 00:20:37.222 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:37.222 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:37.222 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:37.222 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:37.222 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:37.222 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:37.222 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:37.222 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:37.481 [ 00:20:37.481 { 00:20:37.481 "name": "BaseBdev1", 00:20:37.481 "aliases": [ 00:20:37.481 "d920d912-39d6-4368-a779-6a5bf3a46d1f" 00:20:37.481 ], 00:20:37.481 "product_name": "Malloc disk", 00:20:37.481 "block_size": 512, 00:20:37.481 "num_blocks": 65536, 00:20:37.481 "uuid": "d920d912-39d6-4368-a779-6a5bf3a46d1f", 00:20:37.481 "assigned_rate_limits": { 00:20:37.481 "rw_ios_per_sec": 0, 00:20:37.481 "rw_mbytes_per_sec": 0, 00:20:37.481 "r_mbytes_per_sec": 0, 00:20:37.481 "w_mbytes_per_sec": 0 00:20:37.481 }, 00:20:37.481 "claimed": true, 00:20:37.481 "claim_type": "exclusive_write", 00:20:37.481 "zoned": false, 00:20:37.481 "supported_io_types": { 00:20:37.481 "read": true, 00:20:37.481 "write": true, 00:20:37.481 "unmap": true, 00:20:37.481 "flush": true, 00:20:37.481 "reset": true, 00:20:37.481 "nvme_admin": false, 00:20:37.481 "nvme_io": false, 00:20:37.481 "nvme_io_md": false, 00:20:37.481 "write_zeroes": true, 00:20:37.481 "zcopy": true, 00:20:37.481 "get_zone_info": false, 00:20:37.481 "zone_management": false, 00:20:37.481 "zone_append": false, 00:20:37.481 "compare": false, 00:20:37.481 "compare_and_write": false, 00:20:37.481 "abort": true, 00:20:37.481 "seek_hole": false, 00:20:37.481 "seek_data": false, 00:20:37.481 "copy": true, 00:20:37.481 "nvme_iov_md": false 00:20:37.481 }, 00:20:37.481 "memory_domains": [ 00:20:37.481 { 00:20:37.481 "dma_device_id": "system", 00:20:37.481 "dma_device_type": 1 00:20:37.481 }, 00:20:37.481 { 00:20:37.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.481 "dma_device_type": 2 00:20:37.481 } 00:20:37.481 ], 00:20:37.481 "driver_specific": {} 00:20:37.481 } 00:20:37.481 ] 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.481 21:34:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.481 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.740 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.740 "name": "Existed_Raid", 00:20:37.740 "uuid": "d7aa97c8-7266-4923-86f7-15c59451f29a", 00:20:37.740 "strip_size_kb": 64, 00:20:37.740 "state": "configuring", 00:20:37.740 "raid_level": "concat", 00:20:37.740 "superblock": true, 00:20:37.740 "num_base_bdevs": 3, 00:20:37.740 "num_base_bdevs_discovered": 1, 00:20:37.740 "num_base_bdevs_operational": 3, 00:20:37.740 "base_bdevs_list": [ 00:20:37.740 { 00:20:37.740 "name": "BaseBdev1", 00:20:37.740 "uuid": "d920d912-39d6-4368-a779-6a5bf3a46d1f", 00:20:37.740 "is_configured": true, 00:20:37.740 "data_offset": 2048, 00:20:37.740 "data_size": 63488 00:20:37.740 }, 00:20:37.740 { 00:20:37.740 "name": "BaseBdev2", 00:20:37.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.740 "is_configured": false, 00:20:37.740 "data_offset": 0, 00:20:37.740 "data_size": 0 00:20:37.740 }, 00:20:37.740 { 00:20:37.740 "name": "BaseBdev3", 00:20:37.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.740 "is_configured": false, 00:20:37.740 "data_offset": 0, 00:20:37.740 "data_size": 0 00:20:37.740 } 00:20:37.740 ] 00:20:37.740 }' 00:20:37.740 21:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.740 21:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.306 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:38.585 [2024-07-15 21:34:11.788308] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:38.585 [2024-07-15 21:34:11.788488] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:20:38.585 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:38.861 [2024-07-15 21:34:11.976073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:38.861 [2024-07-15 21:34:11.978423] bdev.c:8157:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:20:38.861 [2024-07-15 21:34:11.978554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:38.861 [2024-07-15 21:34:11.978586] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:38.861 [2024-07-15 21:34:11.978650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.861 21:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.861 21:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.861 "name": "Existed_Raid", 00:20:38.861 "uuid": "0cc5d8f7-2b9d-4514-9db0-1f0ece8ff7e5", 00:20:38.861 "strip_size_kb": 64, 00:20:38.861 "state": "configuring", 00:20:38.861 "raid_level": "concat", 00:20:38.862 "superblock": true, 00:20:38.862 "num_base_bdevs": 3, 00:20:38.862 "num_base_bdevs_discovered": 1, 00:20:38.862 "num_base_bdevs_operational": 3, 00:20:38.862 "base_bdevs_list": [ 00:20:38.862 { 00:20:38.862 "name": "BaseBdev1", 00:20:38.862 "uuid": "d920d912-39d6-4368-a779-6a5bf3a46d1f", 00:20:38.862 "is_configured": true, 00:20:38.862 "data_offset": 2048, 00:20:38.862 "data_size": 63488 00:20:38.862 }, 00:20:38.862 { 00:20:38.862 "name": "BaseBdev2", 00:20:38.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.862 "is_configured": false, 00:20:38.862 "data_offset": 0, 00:20:38.862 "data_size": 0 00:20:38.862 }, 00:20:38.862 { 00:20:38.862 "name": "BaseBdev3", 00:20:38.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.862 "is_configured": false, 00:20:38.862 "data_offset": 0, 00:20:38.862 "data_size": 0 00:20:38.862 } 00:20:38.862 ] 00:20:38.862 }' 00:20:38.862 21:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.862 21:34:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:39.428 21:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:39.686 [2024-07-15 21:34:12.995740] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:39.686 BaseBdev2 00:20:39.686 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:39.686 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:39.686 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:39.686 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:39.686 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:39.686 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:39.686 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.946 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:40.203 [ 00:20:40.203 { 00:20:40.203 "name": "BaseBdev2", 00:20:40.203 "aliases": [ 00:20:40.203 "8e1788d4-a7a4-4f63-96ef-74aa25c1f72a" 00:20:40.203 ], 00:20:40.203 "product_name": "Malloc disk", 00:20:40.203 "block_size": 512, 00:20:40.203 "num_blocks": 65536, 00:20:40.203 "uuid": "8e1788d4-a7a4-4f63-96ef-74aa25c1f72a", 00:20:40.203 "assigned_rate_limits": { 00:20:40.203 "rw_ios_per_sec": 0, 00:20:40.203 "rw_mbytes_per_sec": 0, 00:20:40.203 "r_mbytes_per_sec": 0, 00:20:40.203 "w_mbytes_per_sec": 0 00:20:40.203 }, 00:20:40.203 "claimed": true, 00:20:40.203 "claim_type": "exclusive_write", 00:20:40.203 "zoned": false, 00:20:40.204 "supported_io_types": { 00:20:40.204 "read": true, 00:20:40.204 "write": true, 00:20:40.204 "unmap": true, 00:20:40.204 "flush": true, 00:20:40.204 "reset": true, 00:20:40.204 "nvme_admin": false, 00:20:40.204 "nvme_io": false, 00:20:40.204 "nvme_io_md": false, 00:20:40.204 "write_zeroes": true, 00:20:40.204 "zcopy": true, 00:20:40.204 "get_zone_info": false, 00:20:40.204 "zone_management": false, 00:20:40.204 "zone_append": false, 00:20:40.204 "compare": false, 00:20:40.204 "compare_and_write": false, 00:20:40.204 "abort": true, 00:20:40.204 "seek_hole": false, 00:20:40.204 "seek_data": false, 00:20:40.204 "copy": true, 00:20:40.204 "nvme_iov_md": false 00:20:40.204 }, 00:20:40.204 "memory_domains": [ 00:20:40.204 { 00:20:40.204 "dma_device_id": "system", 00:20:40.204 "dma_device_type": 1 00:20:40.204 }, 00:20:40.204 { 00:20:40.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.204 "dma_device_type": 2 00:20:40.204 } 00:20:40.204 ], 00:20:40.204 "driver_specific": {} 00:20:40.204 } 00:20:40.204 ] 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.204 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.461 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.461 "name": "Existed_Raid", 00:20:40.461 "uuid": "0cc5d8f7-2b9d-4514-9db0-1f0ece8ff7e5", 00:20:40.461 "strip_size_kb": 64, 00:20:40.461 "state": "configuring", 00:20:40.461 "raid_level": "concat", 00:20:40.461 "superblock": true, 00:20:40.461 "num_base_bdevs": 3, 00:20:40.461 "num_base_bdevs_discovered": 2, 00:20:40.461 "num_base_bdevs_operational": 3, 00:20:40.461 "base_bdevs_list": [ 00:20:40.461 { 00:20:40.461 "name": "BaseBdev1", 00:20:40.461 "uuid": "d920d912-39d6-4368-a779-6a5bf3a46d1f", 00:20:40.461 "is_configured": true, 00:20:40.461 "data_offset": 2048, 00:20:40.461 "data_size": 63488 00:20:40.461 }, 00:20:40.461 { 00:20:40.461 "name": "BaseBdev2", 00:20:40.461 "uuid": "8e1788d4-a7a4-4f63-96ef-74aa25c1f72a", 00:20:40.461 "is_configured": true, 00:20:40.461 "data_offset": 2048, 00:20:40.461 "data_size": 63488 00:20:40.461 }, 00:20:40.461 { 00:20:40.461 "name": "BaseBdev3", 00:20:40.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.461 "is_configured": false, 00:20:40.461 "data_offset": 0, 00:20:40.461 "data_size": 0 00:20:40.461 } 00:20:40.461 ] 00:20:40.461 }' 00:20:40.461 21:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.461 21:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.027 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:41.285 [2024-07-15 21:34:14.446706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.285 [2024-07-15 21:34:14.447052] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:20:41.285 [2024-07-15 21:34:14.447088] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:41.285 [2024-07-15 21:34:14.447241] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 
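Note: the trace around this point shows the array being assembled piece by piece over the /var/tmp/spdk-raid.sock RPC socket: the concat raid is registered first and each base bdev is a 32 MB, 512-byte-block malloc bdev that gets claimed as it appears. A minimal sketch of that assembly by hand, using only the commands visible in this log (the background start, the sleep, and the final jq check are illustrative shorthand for the test's waitforlisten/verify helpers):

  # start the bare bdev service the test drives over /var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  sleep 1   # the test proper waits with waitforlisten; a short sleep stands in here

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

  # register the concat raid first (-s = with superblock, -z 64 = 64k strip);
  # it sits in "configuring" until all three base bdevs exist
  $rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # 32 MB malloc bdevs with 512-byte blocks (65536 blocks, as in the dumps above);
  # creating the third one is what flips the raid to "online"
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do $rpc bdev_malloc_create 32 512 -b "$b"; done

  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect "online"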
00:20:41.285 BaseBdev3 00:20:41.285 [2024-07-15 21:34:14.447580] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:20:41.285 [2024-07-15 21:34:14.447591] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:20:41.285 [2024-07-15 21:34:14.447740] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.285 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:41.285 21:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:41.285 21:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:41.285 21:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:41.285 21:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:41.285 21:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:41.285 21:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:41.285 21:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:41.542 [ 00:20:41.542 { 00:20:41.542 "name": "BaseBdev3", 00:20:41.542 "aliases": [ 00:20:41.542 "0bddba61-5c27-4700-8870-ba84d1683880" 00:20:41.542 ], 00:20:41.542 "product_name": "Malloc disk", 00:20:41.542 "block_size": 512, 00:20:41.542 "num_blocks": 65536, 00:20:41.542 "uuid": "0bddba61-5c27-4700-8870-ba84d1683880", 00:20:41.542 "assigned_rate_limits": { 00:20:41.542 "rw_ios_per_sec": 0, 00:20:41.542 "rw_mbytes_per_sec": 0, 00:20:41.542 "r_mbytes_per_sec": 0, 00:20:41.542 "w_mbytes_per_sec": 0 00:20:41.542 }, 00:20:41.542 "claimed": true, 00:20:41.542 "claim_type": "exclusive_write", 00:20:41.542 "zoned": false, 00:20:41.542 "supported_io_types": { 00:20:41.542 "read": true, 00:20:41.542 "write": true, 00:20:41.542 "unmap": true, 00:20:41.542 "flush": true, 00:20:41.542 "reset": true, 00:20:41.542 "nvme_admin": false, 00:20:41.542 "nvme_io": false, 00:20:41.542 "nvme_io_md": false, 00:20:41.542 "write_zeroes": true, 00:20:41.542 "zcopy": true, 00:20:41.542 "get_zone_info": false, 00:20:41.542 "zone_management": false, 00:20:41.542 "zone_append": false, 00:20:41.542 "compare": false, 00:20:41.542 "compare_and_write": false, 00:20:41.542 "abort": true, 00:20:41.542 "seek_hole": false, 00:20:41.542 "seek_data": false, 00:20:41.542 "copy": true, 00:20:41.542 "nvme_iov_md": false 00:20:41.542 }, 00:20:41.542 "memory_domains": [ 00:20:41.542 { 00:20:41.542 "dma_device_id": "system", 00:20:41.542 "dma_device_type": 1 00:20:41.542 }, 00:20:41.542 { 00:20:41.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.542 "dma_device_type": 2 00:20:41.542 } 00:20:41.542 ], 00:20:41.542 "driver_specific": {} 00:20:41.542 } 00:20:41.542 ] 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.542 21:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.800 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.800 "name": "Existed_Raid", 00:20:41.800 "uuid": "0cc5d8f7-2b9d-4514-9db0-1f0ece8ff7e5", 00:20:41.800 "strip_size_kb": 64, 00:20:41.800 "state": "online", 00:20:41.800 "raid_level": "concat", 00:20:41.800 "superblock": true, 00:20:41.800 "num_base_bdevs": 3, 00:20:41.800 "num_base_bdevs_discovered": 3, 00:20:41.800 "num_base_bdevs_operational": 3, 00:20:41.800 "base_bdevs_list": [ 00:20:41.800 { 00:20:41.800 "name": "BaseBdev1", 00:20:41.800 "uuid": "d920d912-39d6-4368-a779-6a5bf3a46d1f", 00:20:41.800 "is_configured": true, 00:20:41.800 "data_offset": 2048, 00:20:41.800 "data_size": 63488 00:20:41.800 }, 00:20:41.800 { 00:20:41.800 "name": "BaseBdev2", 00:20:41.800 "uuid": "8e1788d4-a7a4-4f63-96ef-74aa25c1f72a", 00:20:41.800 "is_configured": true, 00:20:41.800 "data_offset": 2048, 00:20:41.800 "data_size": 63488 00:20:41.800 }, 00:20:41.800 { 00:20:41.800 "name": "BaseBdev3", 00:20:41.800 "uuid": "0bddba61-5c27-4700-8870-ba84d1683880", 00:20:41.800 "is_configured": true, 00:20:41.800 "data_offset": 2048, 00:20:41.800 "data_size": 63488 00:20:41.800 } 00:20:41.800 ] 00:20:41.800 }' 00:20:41.800 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.800 21:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.365 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:42.365 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:42.365 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:42.365 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:42.365 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:42.365 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 
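The verify_raid_bdev_properties step that follows dumps the assembled volume and then each base bdev. One figure worth spelling out: the volume reports 190464 blocks (also visible as "blockcnt 190464" when it was configured above), which for a concat of three 65536-block members created with superblocks is 3 x (65536 - 2048) = 3 x 63488 data blocks. A rough restatement of the top-level check, with the same socket and names as this trace (the explicit [[ ]] comparisons are illustrative; the script drives them through its own jq helpers):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')

  # the assembled device shows up as a "Raid Volume", not a Malloc disk
  [[ $(jq -r .product_name <<< "$info") == 'Raid Volume' ]]
  [[ $(jq .block_size    <<< "$info") == 512 ]]
  [[ $(jq .num_blocks    <<< "$info") == 190464 ]]   # 3 * 63488 data blocks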
00:20:42.365 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:42.365 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:42.624 [2024-07-15 21:34:15.792779] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.624 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:42.624 "name": "Existed_Raid", 00:20:42.624 "aliases": [ 00:20:42.624 "0cc5d8f7-2b9d-4514-9db0-1f0ece8ff7e5" 00:20:42.624 ], 00:20:42.624 "product_name": "Raid Volume", 00:20:42.624 "block_size": 512, 00:20:42.624 "num_blocks": 190464, 00:20:42.624 "uuid": "0cc5d8f7-2b9d-4514-9db0-1f0ece8ff7e5", 00:20:42.624 "assigned_rate_limits": { 00:20:42.624 "rw_ios_per_sec": 0, 00:20:42.624 "rw_mbytes_per_sec": 0, 00:20:42.624 "r_mbytes_per_sec": 0, 00:20:42.624 "w_mbytes_per_sec": 0 00:20:42.624 }, 00:20:42.624 "claimed": false, 00:20:42.624 "zoned": false, 00:20:42.624 "supported_io_types": { 00:20:42.624 "read": true, 00:20:42.624 "write": true, 00:20:42.624 "unmap": true, 00:20:42.624 "flush": true, 00:20:42.624 "reset": true, 00:20:42.624 "nvme_admin": false, 00:20:42.624 "nvme_io": false, 00:20:42.624 "nvme_io_md": false, 00:20:42.624 "write_zeroes": true, 00:20:42.624 "zcopy": false, 00:20:42.624 "get_zone_info": false, 00:20:42.624 "zone_management": false, 00:20:42.624 "zone_append": false, 00:20:42.624 "compare": false, 00:20:42.624 "compare_and_write": false, 00:20:42.624 "abort": false, 00:20:42.624 "seek_hole": false, 00:20:42.624 "seek_data": false, 00:20:42.624 "copy": false, 00:20:42.624 "nvme_iov_md": false 00:20:42.624 }, 00:20:42.624 "memory_domains": [ 00:20:42.624 { 00:20:42.624 "dma_device_id": "system", 00:20:42.624 "dma_device_type": 1 00:20:42.624 }, 00:20:42.624 { 00:20:42.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.624 "dma_device_type": 2 00:20:42.624 }, 00:20:42.624 { 00:20:42.624 "dma_device_id": "system", 00:20:42.624 "dma_device_type": 1 00:20:42.624 }, 00:20:42.624 { 00:20:42.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.624 "dma_device_type": 2 00:20:42.624 }, 00:20:42.624 { 00:20:42.624 "dma_device_id": "system", 00:20:42.624 "dma_device_type": 1 00:20:42.624 }, 00:20:42.624 { 00:20:42.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.624 "dma_device_type": 2 00:20:42.624 } 00:20:42.624 ], 00:20:42.624 "driver_specific": { 00:20:42.624 "raid": { 00:20:42.624 "uuid": "0cc5d8f7-2b9d-4514-9db0-1f0ece8ff7e5", 00:20:42.624 "strip_size_kb": 64, 00:20:42.624 "state": "online", 00:20:42.624 "raid_level": "concat", 00:20:42.624 "superblock": true, 00:20:42.624 "num_base_bdevs": 3, 00:20:42.624 "num_base_bdevs_discovered": 3, 00:20:42.624 "num_base_bdevs_operational": 3, 00:20:42.624 "base_bdevs_list": [ 00:20:42.624 { 00:20:42.624 "name": "BaseBdev1", 00:20:42.624 "uuid": "d920d912-39d6-4368-a779-6a5bf3a46d1f", 00:20:42.624 "is_configured": true, 00:20:42.624 "data_offset": 2048, 00:20:42.624 "data_size": 63488 00:20:42.624 }, 00:20:42.624 { 00:20:42.624 "name": "BaseBdev2", 00:20:42.624 "uuid": "8e1788d4-a7a4-4f63-96ef-74aa25c1f72a", 00:20:42.624 "is_configured": true, 00:20:42.624 "data_offset": 2048, 00:20:42.624 "data_size": 63488 00:20:42.624 }, 00:20:42.624 { 00:20:42.624 "name": "BaseBdev3", 00:20:42.624 "uuid": "0bddba61-5c27-4700-8870-ba84d1683880", 00:20:42.624 "is_configured": true, 00:20:42.624 "data_offset": 2048, 
00:20:42.624 "data_size": 63488 00:20:42.624 } 00:20:42.624 ] 00:20:42.624 } 00:20:42.624 } 00:20:42.624 }' 00:20:42.624 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:42.624 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:42.624 BaseBdev2 00:20:42.624 BaseBdev3' 00:20:42.624 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:42.624 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:42.624 21:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:42.883 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:42.883 "name": "BaseBdev1", 00:20:42.883 "aliases": [ 00:20:42.883 "d920d912-39d6-4368-a779-6a5bf3a46d1f" 00:20:42.883 ], 00:20:42.883 "product_name": "Malloc disk", 00:20:42.883 "block_size": 512, 00:20:42.883 "num_blocks": 65536, 00:20:42.883 "uuid": "d920d912-39d6-4368-a779-6a5bf3a46d1f", 00:20:42.883 "assigned_rate_limits": { 00:20:42.883 "rw_ios_per_sec": 0, 00:20:42.883 "rw_mbytes_per_sec": 0, 00:20:42.883 "r_mbytes_per_sec": 0, 00:20:42.883 "w_mbytes_per_sec": 0 00:20:42.883 }, 00:20:42.883 "claimed": true, 00:20:42.883 "claim_type": "exclusive_write", 00:20:42.883 "zoned": false, 00:20:42.883 "supported_io_types": { 00:20:42.883 "read": true, 00:20:42.883 "write": true, 00:20:42.883 "unmap": true, 00:20:42.883 "flush": true, 00:20:42.883 "reset": true, 00:20:42.883 "nvme_admin": false, 00:20:42.883 "nvme_io": false, 00:20:42.883 "nvme_io_md": false, 00:20:42.883 "write_zeroes": true, 00:20:42.883 "zcopy": true, 00:20:42.883 "get_zone_info": false, 00:20:42.883 "zone_management": false, 00:20:42.883 "zone_append": false, 00:20:42.883 "compare": false, 00:20:42.883 "compare_and_write": false, 00:20:42.883 "abort": true, 00:20:42.883 "seek_hole": false, 00:20:42.883 "seek_data": false, 00:20:42.883 "copy": true, 00:20:42.883 "nvme_iov_md": false 00:20:42.883 }, 00:20:42.883 "memory_domains": [ 00:20:42.883 { 00:20:42.883 "dma_device_id": "system", 00:20:42.883 "dma_device_type": 1 00:20:42.883 }, 00:20:42.883 { 00:20:42.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.883 "dma_device_type": 2 00:20:42.883 } 00:20:42.883 ], 00:20:42.883 "driver_specific": {} 00:20:42.883 }' 00:20:42.883 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.883 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.883 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:42.883 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.883 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.883 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:42.883 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.141 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.141 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.141 
21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.141 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.141 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.141 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:43.141 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:43.141 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:43.421 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:43.421 "name": "BaseBdev2", 00:20:43.421 "aliases": [ 00:20:43.421 "8e1788d4-a7a4-4f63-96ef-74aa25c1f72a" 00:20:43.421 ], 00:20:43.421 "product_name": "Malloc disk", 00:20:43.421 "block_size": 512, 00:20:43.421 "num_blocks": 65536, 00:20:43.421 "uuid": "8e1788d4-a7a4-4f63-96ef-74aa25c1f72a", 00:20:43.421 "assigned_rate_limits": { 00:20:43.421 "rw_ios_per_sec": 0, 00:20:43.421 "rw_mbytes_per_sec": 0, 00:20:43.421 "r_mbytes_per_sec": 0, 00:20:43.421 "w_mbytes_per_sec": 0 00:20:43.421 }, 00:20:43.421 "claimed": true, 00:20:43.421 "claim_type": "exclusive_write", 00:20:43.421 "zoned": false, 00:20:43.421 "supported_io_types": { 00:20:43.421 "read": true, 00:20:43.421 "write": true, 00:20:43.421 "unmap": true, 00:20:43.421 "flush": true, 00:20:43.421 "reset": true, 00:20:43.421 "nvme_admin": false, 00:20:43.421 "nvme_io": false, 00:20:43.421 "nvme_io_md": false, 00:20:43.421 "write_zeroes": true, 00:20:43.421 "zcopy": true, 00:20:43.421 "get_zone_info": false, 00:20:43.421 "zone_management": false, 00:20:43.421 "zone_append": false, 00:20:43.421 "compare": false, 00:20:43.421 "compare_and_write": false, 00:20:43.421 "abort": true, 00:20:43.421 "seek_hole": false, 00:20:43.421 "seek_data": false, 00:20:43.421 "copy": true, 00:20:43.421 "nvme_iov_md": false 00:20:43.421 }, 00:20:43.421 "memory_domains": [ 00:20:43.421 { 00:20:43.421 "dma_device_id": "system", 00:20:43.421 "dma_device_type": 1 00:20:43.421 }, 00:20:43.421 { 00:20:43.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.421 "dma_device_type": 2 00:20:43.421 } 00:20:43.421 ], 00:20:43.421 "driver_specific": {} 00:20:43.421 }' 00:20:43.421 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.421 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.421 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:43.421 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.421 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.697 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:43.697 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.697 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.697 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.697 21:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.697 21:34:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.697 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.697 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:43.697 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:43.697 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:43.955 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:43.955 "name": "BaseBdev3", 00:20:43.955 "aliases": [ 00:20:43.955 "0bddba61-5c27-4700-8870-ba84d1683880" 00:20:43.955 ], 00:20:43.955 "product_name": "Malloc disk", 00:20:43.955 "block_size": 512, 00:20:43.955 "num_blocks": 65536, 00:20:43.955 "uuid": "0bddba61-5c27-4700-8870-ba84d1683880", 00:20:43.955 "assigned_rate_limits": { 00:20:43.955 "rw_ios_per_sec": 0, 00:20:43.955 "rw_mbytes_per_sec": 0, 00:20:43.955 "r_mbytes_per_sec": 0, 00:20:43.955 "w_mbytes_per_sec": 0 00:20:43.955 }, 00:20:43.955 "claimed": true, 00:20:43.955 "claim_type": "exclusive_write", 00:20:43.955 "zoned": false, 00:20:43.955 "supported_io_types": { 00:20:43.955 "read": true, 00:20:43.955 "write": true, 00:20:43.955 "unmap": true, 00:20:43.955 "flush": true, 00:20:43.955 "reset": true, 00:20:43.955 "nvme_admin": false, 00:20:43.955 "nvme_io": false, 00:20:43.955 "nvme_io_md": false, 00:20:43.955 "write_zeroes": true, 00:20:43.955 "zcopy": true, 00:20:43.955 "get_zone_info": false, 00:20:43.955 "zone_management": false, 00:20:43.955 "zone_append": false, 00:20:43.955 "compare": false, 00:20:43.955 "compare_and_write": false, 00:20:43.955 "abort": true, 00:20:43.955 "seek_hole": false, 00:20:43.955 "seek_data": false, 00:20:43.955 "copy": true, 00:20:43.955 "nvme_iov_md": false 00:20:43.955 }, 00:20:43.955 "memory_domains": [ 00:20:43.955 { 00:20:43.955 "dma_device_id": "system", 00:20:43.955 "dma_device_type": 1 00:20:43.955 }, 00:20:43.955 { 00:20:43.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.955 "dma_device_type": 2 00:20:43.955 } 00:20:43.955 ], 00:20:43.955 "driver_specific": {} 00:20:43.955 }' 00:20:43.955 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.955 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:44.213 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:44.213 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.213 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:44.213 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:44.213 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.213 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:44.213 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:44.213 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.471 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:44.471 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
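All three base bdev dumps above (BaseBdev1, BaseBdev2, BaseBdev3) go through the same @203-@208 loop: pick the configured members out of the raid's base_bdevs_list, fetch each one, and confirm it matches the array's geometry and carries no metadata or DIF. A condensed sketch of that loop, with the jq filters as they appear in the trace (the loop body's explicit comparisons are illustrative):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  raid=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')

  # only members that are actually configured into the volume get checked
  for name in $(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid"); do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<< "$info") == 512  ]]   # matches the raid's block size
      [[ $(jq .md_size       <<< "$info") == null ]]   # plain malloc members: no metadata,
      [[ $(jq .md_interleave <<< "$info") == null ]]   # no interleave,
      [[ $(jq .dif_type      <<< "$info") == null ]]   # no DIF
  done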
00:20:44.471 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:44.727 [2024-07-15 21:34:17.856908] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:44.728 [2024-07-15 21:34:17.857060] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:44.728 [2024-07-15 21:34:17.857153] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.728 21:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.985 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.985 "name": "Existed_Raid", 00:20:44.985 "uuid": "0cc5d8f7-2b9d-4514-9db0-1f0ece8ff7e5", 00:20:44.985 "strip_size_kb": 64, 00:20:44.985 "state": "offline", 00:20:44.985 "raid_level": "concat", 00:20:44.985 "superblock": true, 00:20:44.985 "num_base_bdevs": 3, 00:20:44.985 "num_base_bdevs_discovered": 2, 00:20:44.985 "num_base_bdevs_operational": 2, 00:20:44.985 "base_bdevs_list": [ 00:20:44.985 { 00:20:44.985 "name": null, 00:20:44.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.985 "is_configured": false, 00:20:44.985 "data_offset": 2048, 00:20:44.985 "data_size": 63488 00:20:44.985 }, 00:20:44.985 { 00:20:44.985 "name": "BaseBdev2", 00:20:44.985 "uuid": "8e1788d4-a7a4-4f63-96ef-74aa25c1f72a", 00:20:44.985 "is_configured": true, 00:20:44.985 "data_offset": 2048, 00:20:44.985 "data_size": 63488 00:20:44.985 }, 00:20:44.985 { 
00:20:44.985 "name": "BaseBdev3", 00:20:44.985 "uuid": "0bddba61-5c27-4700-8870-ba84d1683880", 00:20:44.985 "is_configured": true, 00:20:44.985 "data_offset": 2048, 00:20:44.985 "data_size": 63488 00:20:44.985 } 00:20:44.985 ] 00:20:44.985 }' 00:20:44.985 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.985 21:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.552 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:45.552 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:45.552 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.552 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:45.811 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:45.811 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:45.811 21:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:45.811 [2024-07-15 21:34:19.134185] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.070 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:46.070 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:46.070 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.070 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:46.329 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:46.329 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.329 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:46.329 [2024-07-15 21:34:19.609169] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:46.329 [2024-07-15 21:34:19.609325] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:46.587 
21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:46.587 21:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:46.846 BaseBdev2 00:20:46.846 21:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:46.846 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:46.846 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:46.846 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:46.846 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:46.846 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:46.846 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.105 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:47.105 [ 00:20:47.105 { 00:20:47.105 "name": "BaseBdev2", 00:20:47.105 "aliases": [ 00:20:47.105 "d3235c8d-28c6-4cef-a1fb-1aeb982183d0" 00:20:47.106 ], 00:20:47.106 "product_name": "Malloc disk", 00:20:47.106 "block_size": 512, 00:20:47.106 "num_blocks": 65536, 00:20:47.106 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:47.106 "assigned_rate_limits": { 00:20:47.106 "rw_ios_per_sec": 0, 00:20:47.106 "rw_mbytes_per_sec": 0, 00:20:47.106 "r_mbytes_per_sec": 0, 00:20:47.106 "w_mbytes_per_sec": 0 00:20:47.106 }, 00:20:47.106 "claimed": false, 00:20:47.106 "zoned": false, 00:20:47.106 "supported_io_types": { 00:20:47.106 "read": true, 00:20:47.106 "write": true, 00:20:47.106 "unmap": true, 00:20:47.106 "flush": true, 00:20:47.106 "reset": true, 00:20:47.106 "nvme_admin": false, 00:20:47.106 "nvme_io": false, 00:20:47.106 "nvme_io_md": false, 00:20:47.106 "write_zeroes": true, 00:20:47.106 "zcopy": true, 00:20:47.106 "get_zone_info": false, 00:20:47.106 "zone_management": false, 00:20:47.106 "zone_append": false, 00:20:47.106 "compare": false, 00:20:47.106 "compare_and_write": false, 00:20:47.106 "abort": true, 00:20:47.106 "seek_hole": false, 00:20:47.106 "seek_data": false, 00:20:47.106 "copy": true, 00:20:47.106 "nvme_iov_md": false 00:20:47.106 }, 00:20:47.106 "memory_domains": [ 00:20:47.106 { 00:20:47.106 "dma_device_id": "system", 00:20:47.106 "dma_device_type": 1 00:20:47.106 }, 00:20:47.106 { 00:20:47.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.106 "dma_device_type": 2 00:20:47.106 } 00:20:47.106 ], 00:20:47.106 "driver_specific": {} 00:20:47.106 } 00:20:47.106 ] 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:47.365 BaseBdev3 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:47.365 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.623 21:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:47.890 [ 00:20:47.890 { 00:20:47.890 "name": "BaseBdev3", 00:20:47.890 "aliases": [ 00:20:47.890 "2d0c779e-4324-42b3-994a-d9006dcc6103" 00:20:47.890 ], 00:20:47.890 "product_name": "Malloc disk", 00:20:47.890 "block_size": 512, 00:20:47.890 "num_blocks": 65536, 00:20:47.890 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:47.890 "assigned_rate_limits": { 00:20:47.890 "rw_ios_per_sec": 0, 00:20:47.890 "rw_mbytes_per_sec": 0, 00:20:47.890 "r_mbytes_per_sec": 0, 00:20:47.890 "w_mbytes_per_sec": 0 00:20:47.890 }, 00:20:47.890 "claimed": false, 00:20:47.890 "zoned": false, 00:20:47.890 "supported_io_types": { 00:20:47.890 "read": true, 00:20:47.890 "write": true, 00:20:47.890 "unmap": true, 00:20:47.890 "flush": true, 00:20:47.890 "reset": true, 00:20:47.890 "nvme_admin": false, 00:20:47.890 "nvme_io": false, 00:20:47.890 "nvme_io_md": false, 00:20:47.890 "write_zeroes": true, 00:20:47.890 "zcopy": true, 00:20:47.890 "get_zone_info": false, 00:20:47.890 "zone_management": false, 00:20:47.890 "zone_append": false, 00:20:47.890 "compare": false, 00:20:47.890 "compare_and_write": false, 00:20:47.890 "abort": true, 00:20:47.890 "seek_hole": false, 00:20:47.890 "seek_data": false, 00:20:47.890 "copy": true, 00:20:47.890 "nvme_iov_md": false 00:20:47.890 }, 00:20:47.890 "memory_domains": [ 00:20:47.890 { 00:20:47.890 "dma_device_id": "system", 00:20:47.890 "dma_device_type": 1 00:20:47.890 }, 00:20:47.890 { 00:20:47.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.890 "dma_device_type": 2 00:20:47.890 } 00:20:47.890 ], 00:20:47.890 "driver_specific": {} 00:20:47.890 } 00:20:47.890 ] 00:20:47.890 21:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:47.890 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:47.890 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:47.890 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:48.150 [2024-07-15 21:34:21.299823] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:48.150 
[2024-07-15 21:34:21.300278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:48.150 [2024-07-15 21:34:21.300363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:48.150 [2024-07-15 21:34:21.302012] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.150 "name": "Existed_Raid", 00:20:48.150 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:48.150 "strip_size_kb": 64, 00:20:48.150 "state": "configuring", 00:20:48.150 "raid_level": "concat", 00:20:48.150 "superblock": true, 00:20:48.150 "num_base_bdevs": 3, 00:20:48.150 "num_base_bdevs_discovered": 2, 00:20:48.150 "num_base_bdevs_operational": 3, 00:20:48.150 "base_bdevs_list": [ 00:20:48.150 { 00:20:48.150 "name": "BaseBdev1", 00:20:48.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.150 "is_configured": false, 00:20:48.150 "data_offset": 0, 00:20:48.150 "data_size": 0 00:20:48.150 }, 00:20:48.150 { 00:20:48.150 "name": "BaseBdev2", 00:20:48.150 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:48.150 "is_configured": true, 00:20:48.150 "data_offset": 2048, 00:20:48.150 "data_size": 63488 00:20:48.150 }, 00:20:48.150 { 00:20:48.150 "name": "BaseBdev3", 00:20:48.150 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:48.150 "is_configured": true, 00:20:48.150 "data_offset": 2048, 00:20:48.150 "data_size": 63488 00:20:48.150 } 00:20:48.150 ] 00:20:48.150 }' 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.150 21:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.777 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:49.036 [2024-07-15 21:34:22.246127] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.036 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.295 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.295 "name": "Existed_Raid", 00:20:49.295 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:49.295 "strip_size_kb": 64, 00:20:49.295 "state": "configuring", 00:20:49.296 "raid_level": "concat", 00:20:49.296 "superblock": true, 00:20:49.296 "num_base_bdevs": 3, 00:20:49.296 "num_base_bdevs_discovered": 1, 00:20:49.296 "num_base_bdevs_operational": 3, 00:20:49.296 "base_bdevs_list": [ 00:20:49.296 { 00:20:49.296 "name": "BaseBdev1", 00:20:49.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.296 "is_configured": false, 00:20:49.296 "data_offset": 0, 00:20:49.296 "data_size": 0 00:20:49.296 }, 00:20:49.296 { 00:20:49.296 "name": null, 00:20:49.296 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:49.296 "is_configured": false, 00:20:49.296 "data_offset": 2048, 00:20:49.296 "data_size": 63488 00:20:49.296 }, 00:20:49.296 { 00:20:49.296 "name": "BaseBdev3", 00:20:49.296 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:49.296 "is_configured": true, 00:20:49.296 "data_offset": 2048, 00:20:49.296 "data_size": 63488 00:20:49.296 } 00:20:49.296 ] 00:20:49.296 }' 00:20:49.296 21:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.296 21:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.873 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.873 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:49.873 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:49.873 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:50.132 [2024-07-15 21:34:23.402774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:50.132 BaseBdev1 00:20:50.132 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:50.132 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:50.132 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:50.132 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:50.132 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:50.132 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:50.132 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:50.390 [ 00:20:50.390 { 00:20:50.390 "name": "BaseBdev1", 00:20:50.390 "aliases": [ 00:20:50.390 "f31b082b-a6e3-47f7-91cc-9449b563806b" 00:20:50.390 ], 00:20:50.390 "product_name": "Malloc disk", 00:20:50.390 "block_size": 512, 00:20:50.390 "num_blocks": 65536, 00:20:50.390 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:50.390 "assigned_rate_limits": { 00:20:50.390 "rw_ios_per_sec": 0, 00:20:50.390 "rw_mbytes_per_sec": 0, 00:20:50.390 "r_mbytes_per_sec": 0, 00:20:50.390 "w_mbytes_per_sec": 0 00:20:50.390 }, 00:20:50.390 "claimed": true, 00:20:50.390 "claim_type": "exclusive_write", 00:20:50.390 "zoned": false, 00:20:50.390 "supported_io_types": { 00:20:50.390 "read": true, 00:20:50.390 "write": true, 00:20:50.390 "unmap": true, 00:20:50.390 "flush": true, 00:20:50.390 "reset": true, 00:20:50.390 "nvme_admin": false, 00:20:50.390 "nvme_io": false, 00:20:50.390 "nvme_io_md": false, 00:20:50.390 "write_zeroes": true, 00:20:50.390 "zcopy": true, 00:20:50.390 "get_zone_info": false, 00:20:50.390 "zone_management": false, 00:20:50.390 "zone_append": false, 00:20:50.390 "compare": false, 00:20:50.390 "compare_and_write": false, 00:20:50.390 "abort": true, 00:20:50.390 "seek_hole": false, 00:20:50.390 "seek_data": false, 00:20:50.390 "copy": true, 00:20:50.390 "nvme_iov_md": false 00:20:50.390 }, 00:20:50.390 "memory_domains": [ 00:20:50.390 { 00:20:50.390 "dma_device_id": "system", 00:20:50.390 "dma_device_type": 1 00:20:50.390 }, 00:20:50.390 { 00:20:50.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.390 "dma_device_type": 2 00:20:50.390 } 00:20:50.390 ], 00:20:50.390 "driver_specific": {} 00:20:50.390 } 00:20:50.390 ] 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.390 21:34:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.390 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.649 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.649 "name": "Existed_Raid", 00:20:50.649 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:50.649 "strip_size_kb": 64, 00:20:50.649 "state": "configuring", 00:20:50.649 "raid_level": "concat", 00:20:50.649 "superblock": true, 00:20:50.649 "num_base_bdevs": 3, 00:20:50.649 "num_base_bdevs_discovered": 2, 00:20:50.649 "num_base_bdevs_operational": 3, 00:20:50.649 "base_bdevs_list": [ 00:20:50.649 { 00:20:50.649 "name": "BaseBdev1", 00:20:50.649 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:50.649 "is_configured": true, 00:20:50.649 "data_offset": 2048, 00:20:50.649 "data_size": 63488 00:20:50.649 }, 00:20:50.649 { 00:20:50.649 "name": null, 00:20:50.649 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:50.649 "is_configured": false, 00:20:50.649 "data_offset": 2048, 00:20:50.649 "data_size": 63488 00:20:50.649 }, 00:20:50.649 { 00:20:50.649 "name": "BaseBdev3", 00:20:50.649 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:50.649 "is_configured": true, 00:20:50.649 "data_offset": 2048, 00:20:50.649 "data_size": 63488 00:20:50.649 } 00:20:50.649 ] 00:20:50.649 }' 00:20:50.649 21:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.649 21:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.218 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.218 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:51.477 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:51.477 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:51.737 [2024-07-15 21:34:24.851381] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.737 21:34:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.737 21:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.737 21:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.737 "name": "Existed_Raid", 00:20:51.737 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:51.737 "strip_size_kb": 64, 00:20:51.737 "state": "configuring", 00:20:51.737 "raid_level": "concat", 00:20:51.737 "superblock": true, 00:20:51.737 "num_base_bdevs": 3, 00:20:51.737 "num_base_bdevs_discovered": 1, 00:20:51.737 "num_base_bdevs_operational": 3, 00:20:51.737 "base_bdevs_list": [ 00:20:51.737 { 00:20:51.737 "name": "BaseBdev1", 00:20:51.737 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:51.737 "is_configured": true, 00:20:51.737 "data_offset": 2048, 00:20:51.737 "data_size": 63488 00:20:51.737 }, 00:20:51.737 { 00:20:51.737 "name": null, 00:20:51.737 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:51.737 "is_configured": false, 00:20:51.737 "data_offset": 2048, 00:20:51.737 "data_size": 63488 00:20:51.737 }, 00:20:51.737 { 00:20:51.737 "name": null, 00:20:51.737 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:51.737 "is_configured": false, 00:20:51.737 "data_offset": 2048, 00:20:51.737 "data_size": 63488 00:20:51.737 } 00:20:51.737 ] 00:20:51.737 }' 00:20:51.737 21:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.737 21:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.304 21:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.304 21:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:52.563 21:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:52.563 21:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:52.823 [2024-07-15 21:34:26.021784] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
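The verify_raid_bdev_state call invoked at this point reduces to fetching the raid descriptor with bdev_raid_get_bdevs all and comparing a few fields against the expected values. A hedged sketch of that check (field names and the expected configuring/concat/64/3 values are copied from the JSON this run dumps; the early-exit style is illustrative):

  #!/usr/bin/env bash
  # Sketch of what one verify_raid_bdev_state round amounts to in this trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  raid=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid")')

  [[ $(jq -r .state         <<< "$raid") == configuring ]] || exit 1
  [[ $(jq -r .raid_level    <<< "$raid") == concat      ]] || exit 1
  [[ $(jq -r .strip_size_kb <<< "$raid") == 64          ]] || exit 1
  [[ $(jq -r .num_base_bdevs_operational <<< "$raid") == 3 ]] || exit 1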
00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.823 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.082 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.082 "name": "Existed_Raid", 00:20:53.082 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:53.082 "strip_size_kb": 64, 00:20:53.082 "state": "configuring", 00:20:53.082 "raid_level": "concat", 00:20:53.082 "superblock": true, 00:20:53.082 "num_base_bdevs": 3, 00:20:53.082 "num_base_bdevs_discovered": 2, 00:20:53.082 "num_base_bdevs_operational": 3, 00:20:53.082 "base_bdevs_list": [ 00:20:53.082 { 00:20:53.082 "name": "BaseBdev1", 00:20:53.082 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:53.082 "is_configured": true, 00:20:53.082 "data_offset": 2048, 00:20:53.082 "data_size": 63488 00:20:53.082 }, 00:20:53.082 { 00:20:53.082 "name": null, 00:20:53.082 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:53.082 "is_configured": false, 00:20:53.082 "data_offset": 2048, 00:20:53.082 "data_size": 63488 00:20:53.082 }, 00:20:53.082 { 00:20:53.082 "name": "BaseBdev3", 00:20:53.082 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:53.082 "is_configured": true, 00:20:53.082 "data_offset": 2048, 00:20:53.082 "data_size": 63488 00:20:53.082 } 00:20:53.082 ] 00:20:53.082 }' 00:20:53.082 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.082 21:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.653 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.653 21:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:53.920 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:53.920 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:53.920 [2024-07-15 21:34:27.224252] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.179 21:34:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.179 "name": "Existed_Raid", 00:20:54.179 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:54.179 "strip_size_kb": 64, 00:20:54.179 "state": "configuring", 00:20:54.179 "raid_level": "concat", 00:20:54.179 "superblock": true, 00:20:54.179 "num_base_bdevs": 3, 00:20:54.179 "num_base_bdevs_discovered": 1, 00:20:54.179 "num_base_bdevs_operational": 3, 00:20:54.179 "base_bdevs_list": [ 00:20:54.179 { 00:20:54.179 "name": null, 00:20:54.179 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:54.179 "is_configured": false, 00:20:54.179 "data_offset": 2048, 00:20:54.179 "data_size": 63488 00:20:54.179 }, 00:20:54.179 { 00:20:54.179 "name": null, 00:20:54.179 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:54.179 "is_configured": false, 00:20:54.179 "data_offset": 2048, 00:20:54.179 "data_size": 63488 00:20:54.179 }, 00:20:54.179 { 00:20:54.179 "name": "BaseBdev3", 00:20:54.179 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:54.179 "is_configured": true, 00:20:54.179 "data_offset": 2048, 00:20:54.179 "data_size": 63488 00:20:54.179 } 00:20:54.179 ] 00:20:54.179 }' 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.179 21:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.115 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.115 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:55.115 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:55.115 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
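This phase exercises the degrade/repair path: deleting a claimed base bdev (or detaching it with bdev_raid_remove_base_bdev) drops its slot to is_configured == false while the raid stays in configuring, and bdev_raid_add_base_bdev attaches a bdev back into the set. A hedged sketch of one such round trip, using only the commands visible in the trace (the concrete bdev name and slot index are simply the ones this run happens to use):

  #!/usr/bin/env bash
  # Sketch of the remove/re-add cycle exercised here.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Detach one base bdev; its slot in the raid descriptor shows up as
  # is_configured == false and the raid stays in "configuring".
  "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev2
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq '.[0].base_bdevs_list[1].is_configured'    # expect: false

  # Re-attach it; the slot flips back to configured, and once every slot is
  # filled the raid can go online.
  "$rpc" -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev2
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq '.[0].base_bdevs_list[1].is_configured'    # expect: true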
00:20:55.374 [2024-07-15 21:34:28.553321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.374 "name": "Existed_Raid", 00:20:55.374 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:55.374 "strip_size_kb": 64, 00:20:55.374 "state": "configuring", 00:20:55.374 "raid_level": "concat", 00:20:55.374 "superblock": true, 00:20:55.374 "num_base_bdevs": 3, 00:20:55.374 "num_base_bdevs_discovered": 2, 00:20:55.374 "num_base_bdevs_operational": 3, 00:20:55.374 "base_bdevs_list": [ 00:20:55.374 { 00:20:55.374 "name": null, 00:20:55.374 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:55.374 "is_configured": false, 00:20:55.374 "data_offset": 2048, 00:20:55.374 "data_size": 63488 00:20:55.374 }, 00:20:55.374 { 00:20:55.374 "name": "BaseBdev2", 00:20:55.374 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:55.374 "is_configured": true, 00:20:55.374 "data_offset": 2048, 00:20:55.374 "data_size": 63488 00:20:55.374 }, 00:20:55.374 { 00:20:55.374 "name": "BaseBdev3", 00:20:55.374 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:55.374 "is_configured": true, 00:20:55.374 "data_offset": 2048, 00:20:55.374 "data_size": 63488 00:20:55.374 } 00:20:55.374 ] 00:20:55.374 }' 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.374 21:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.312 21:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.312 21:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:56.312 21:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:56.312 21:34:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.312 21:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:56.572 21:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f31b082b-a6e3-47f7-91cc-9449b563806b 00:20:56.831 [2024-07-15 21:34:29.975010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:56.831 [2024-07-15 21:34:29.975294] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:56.831 [2024-07-15 21:34:29.975324] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:56.831 [2024-07-15 21:34:29.975466] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:56.831 [2024-07-15 21:34:29.975756] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:56.831 NewBaseBdev 00:20:56.831 [2024-07-15 21:34:29.975797] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:20:56.831 [2024-07-15 21:34:29.975933] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.831 21:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:56.831 21:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:56.831 21:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:56.831 21:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:56.831 21:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:56.831 21:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:56.831 21:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:56.831 21:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:57.090 [ 00:20:57.090 { 00:20:57.090 "name": "NewBaseBdev", 00:20:57.090 "aliases": [ 00:20:57.090 "f31b082b-a6e3-47f7-91cc-9449b563806b" 00:20:57.090 ], 00:20:57.090 "product_name": "Malloc disk", 00:20:57.090 "block_size": 512, 00:20:57.090 "num_blocks": 65536, 00:20:57.090 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:57.090 "assigned_rate_limits": { 00:20:57.090 "rw_ios_per_sec": 0, 00:20:57.090 "rw_mbytes_per_sec": 0, 00:20:57.090 "r_mbytes_per_sec": 0, 00:20:57.090 "w_mbytes_per_sec": 0 00:20:57.090 }, 00:20:57.090 "claimed": true, 00:20:57.090 "claim_type": "exclusive_write", 00:20:57.090 "zoned": false, 00:20:57.090 "supported_io_types": { 00:20:57.090 "read": true, 00:20:57.090 "write": true, 00:20:57.090 "unmap": true, 00:20:57.090 "flush": true, 00:20:57.090 "reset": true, 00:20:57.090 "nvme_admin": false, 00:20:57.090 "nvme_io": false, 00:20:57.090 "nvme_io_md": false, 00:20:57.090 "write_zeroes": true, 00:20:57.090 "zcopy": true, 00:20:57.090 "get_zone_info": false, 00:20:57.090 "zone_management": 
false, 00:20:57.090 "zone_append": false, 00:20:57.090 "compare": false, 00:20:57.090 "compare_and_write": false, 00:20:57.090 "abort": true, 00:20:57.090 "seek_hole": false, 00:20:57.090 "seek_data": false, 00:20:57.090 "copy": true, 00:20:57.090 "nvme_iov_md": false 00:20:57.090 }, 00:20:57.090 "memory_domains": [ 00:20:57.090 { 00:20:57.090 "dma_device_id": "system", 00:20:57.090 "dma_device_type": 1 00:20:57.090 }, 00:20:57.090 { 00:20:57.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.090 "dma_device_type": 2 00:20:57.090 } 00:20:57.090 ], 00:20:57.090 "driver_specific": {} 00:20:57.090 } 00:20:57.090 ] 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.090 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.350 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.350 "name": "Existed_Raid", 00:20:57.350 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:57.350 "strip_size_kb": 64, 00:20:57.350 "state": "online", 00:20:57.350 "raid_level": "concat", 00:20:57.350 "superblock": true, 00:20:57.350 "num_base_bdevs": 3, 00:20:57.350 "num_base_bdevs_discovered": 3, 00:20:57.350 "num_base_bdevs_operational": 3, 00:20:57.350 "base_bdevs_list": [ 00:20:57.350 { 00:20:57.350 "name": "NewBaseBdev", 00:20:57.350 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:57.350 "is_configured": true, 00:20:57.350 "data_offset": 2048, 00:20:57.350 "data_size": 63488 00:20:57.350 }, 00:20:57.350 { 00:20:57.350 "name": "BaseBdev2", 00:20:57.350 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:57.350 "is_configured": true, 00:20:57.350 "data_offset": 2048, 00:20:57.350 "data_size": 63488 00:20:57.350 }, 00:20:57.350 { 00:20:57.350 "name": "BaseBdev3", 00:20:57.350 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:57.350 "is_configured": true, 00:20:57.350 "data_offset": 2048, 00:20:57.350 "data_size": 63488 00:20:57.350 } 00:20:57.350 ] 00:20:57.350 }' 00:20:57.350 21:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
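What flips the raid from configuring back to online in the step traced above is that the replacement malloc bdev is created with the UUID recorded for the vacant slot, so this superblock-enabled raid claims it as the missing member. A hedged sketch of that repair step (commands and jq paths are the ones visible in this run; treating the UUID match as the trigger is an inference from the observed behaviour, not a statement of the bdev_raid internals):

  #!/usr/bin/env bash
  # Sketch of the slot-repair step: recreate the missing base bdev under the
  # UUID the raid still records for that slot, then check the state.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # UUID recorded for the unconfigured first slot.
  uuid=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
          | jq -r '.[0].base_bdevs_list[0].uuid')

  # Replacement malloc bdev: 32 MiB of 512-byte blocks, created with that UUID.
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"

  # The raid should now report "online" with all three base bdevs discovered.
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect: online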
00:20:57.350 21:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.918 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:57.918 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:57.918 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:57.918 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:57.918 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:57.918 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:57.918 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:57.918 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:58.178 [2024-07-15 21:34:31.318854] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:58.178 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:58.178 "name": "Existed_Raid", 00:20:58.178 "aliases": [ 00:20:58.178 "73be2876-3ee4-4c7b-bd79-13e0371077be" 00:20:58.178 ], 00:20:58.178 "product_name": "Raid Volume", 00:20:58.178 "block_size": 512, 00:20:58.178 "num_blocks": 190464, 00:20:58.178 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:58.178 "assigned_rate_limits": { 00:20:58.178 "rw_ios_per_sec": 0, 00:20:58.178 "rw_mbytes_per_sec": 0, 00:20:58.178 "r_mbytes_per_sec": 0, 00:20:58.178 "w_mbytes_per_sec": 0 00:20:58.178 }, 00:20:58.178 "claimed": false, 00:20:58.178 "zoned": false, 00:20:58.178 "supported_io_types": { 00:20:58.178 "read": true, 00:20:58.178 "write": true, 00:20:58.178 "unmap": true, 00:20:58.178 "flush": true, 00:20:58.178 "reset": true, 00:20:58.178 "nvme_admin": false, 00:20:58.178 "nvme_io": false, 00:20:58.178 "nvme_io_md": false, 00:20:58.178 "write_zeroes": true, 00:20:58.178 "zcopy": false, 00:20:58.178 "get_zone_info": false, 00:20:58.178 "zone_management": false, 00:20:58.178 "zone_append": false, 00:20:58.178 "compare": false, 00:20:58.178 "compare_and_write": false, 00:20:58.178 "abort": false, 00:20:58.178 "seek_hole": false, 00:20:58.178 "seek_data": false, 00:20:58.178 "copy": false, 00:20:58.178 "nvme_iov_md": false 00:20:58.178 }, 00:20:58.178 "memory_domains": [ 00:20:58.178 { 00:20:58.178 "dma_device_id": "system", 00:20:58.178 "dma_device_type": 1 00:20:58.178 }, 00:20:58.178 { 00:20:58.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.178 "dma_device_type": 2 00:20:58.178 }, 00:20:58.178 { 00:20:58.178 "dma_device_id": "system", 00:20:58.178 "dma_device_type": 1 00:20:58.178 }, 00:20:58.178 { 00:20:58.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.178 "dma_device_type": 2 00:20:58.178 }, 00:20:58.178 { 00:20:58.178 "dma_device_id": "system", 00:20:58.178 "dma_device_type": 1 00:20:58.178 }, 00:20:58.178 { 00:20:58.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.178 "dma_device_type": 2 00:20:58.178 } 00:20:58.178 ], 00:20:58.178 "driver_specific": { 00:20:58.178 "raid": { 00:20:58.178 "uuid": "73be2876-3ee4-4c7b-bd79-13e0371077be", 00:20:58.178 "strip_size_kb": 64, 00:20:58.178 "state": "online", 00:20:58.178 "raid_level": "concat", 00:20:58.178 "superblock": true, 
00:20:58.178 "num_base_bdevs": 3, 00:20:58.178 "num_base_bdevs_discovered": 3, 00:20:58.178 "num_base_bdevs_operational": 3, 00:20:58.178 "base_bdevs_list": [ 00:20:58.178 { 00:20:58.178 "name": "NewBaseBdev", 00:20:58.178 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:58.178 "is_configured": true, 00:20:58.178 "data_offset": 2048, 00:20:58.178 "data_size": 63488 00:20:58.178 }, 00:20:58.178 { 00:20:58.178 "name": "BaseBdev2", 00:20:58.178 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:58.178 "is_configured": true, 00:20:58.178 "data_offset": 2048, 00:20:58.178 "data_size": 63488 00:20:58.178 }, 00:20:58.178 { 00:20:58.178 "name": "BaseBdev3", 00:20:58.178 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:58.178 "is_configured": true, 00:20:58.178 "data_offset": 2048, 00:20:58.178 "data_size": 63488 00:20:58.178 } 00:20:58.178 ] 00:20:58.178 } 00:20:58.178 } 00:20:58.178 }' 00:20:58.178 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:58.178 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:58.178 BaseBdev2 00:20:58.178 BaseBdev3' 00:20:58.178 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.178 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.178 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.438 "name": "NewBaseBdev", 00:20:58.438 "aliases": [ 00:20:58.438 "f31b082b-a6e3-47f7-91cc-9449b563806b" 00:20:58.438 ], 00:20:58.438 "product_name": "Malloc disk", 00:20:58.438 "block_size": 512, 00:20:58.438 "num_blocks": 65536, 00:20:58.438 "uuid": "f31b082b-a6e3-47f7-91cc-9449b563806b", 00:20:58.438 "assigned_rate_limits": { 00:20:58.438 "rw_ios_per_sec": 0, 00:20:58.438 "rw_mbytes_per_sec": 0, 00:20:58.438 "r_mbytes_per_sec": 0, 00:20:58.438 "w_mbytes_per_sec": 0 00:20:58.438 }, 00:20:58.438 "claimed": true, 00:20:58.438 "claim_type": "exclusive_write", 00:20:58.438 "zoned": false, 00:20:58.438 "supported_io_types": { 00:20:58.438 "read": true, 00:20:58.438 "write": true, 00:20:58.438 "unmap": true, 00:20:58.438 "flush": true, 00:20:58.438 "reset": true, 00:20:58.438 "nvme_admin": false, 00:20:58.438 "nvme_io": false, 00:20:58.438 "nvme_io_md": false, 00:20:58.438 "write_zeroes": true, 00:20:58.438 "zcopy": true, 00:20:58.438 "get_zone_info": false, 00:20:58.438 "zone_management": false, 00:20:58.438 "zone_append": false, 00:20:58.438 "compare": false, 00:20:58.438 "compare_and_write": false, 00:20:58.438 "abort": true, 00:20:58.438 "seek_hole": false, 00:20:58.438 "seek_data": false, 00:20:58.438 "copy": true, 00:20:58.438 "nvme_iov_md": false 00:20:58.438 }, 00:20:58.438 "memory_domains": [ 00:20:58.438 { 00:20:58.438 "dma_device_id": "system", 00:20:58.438 "dma_device_type": 1 00:20:58.438 }, 00:20:58.438 { 00:20:58.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.438 "dma_device_type": 2 00:20:58.438 } 00:20:58.438 ], 00:20:58.438 "driver_specific": {} 00:20:58.438 }' 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.438 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.703 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.703 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.703 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.703 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.703 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.703 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:58.703 21:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.962 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.962 "name": "BaseBdev2", 00:20:58.962 "aliases": [ 00:20:58.962 "d3235c8d-28c6-4cef-a1fb-1aeb982183d0" 00:20:58.962 ], 00:20:58.962 "product_name": "Malloc disk", 00:20:58.962 "block_size": 512, 00:20:58.962 "num_blocks": 65536, 00:20:58.962 "uuid": "d3235c8d-28c6-4cef-a1fb-1aeb982183d0", 00:20:58.962 "assigned_rate_limits": { 00:20:58.962 "rw_ios_per_sec": 0, 00:20:58.962 "rw_mbytes_per_sec": 0, 00:20:58.962 "r_mbytes_per_sec": 0, 00:20:58.962 "w_mbytes_per_sec": 0 00:20:58.962 }, 00:20:58.962 "claimed": true, 00:20:58.962 "claim_type": "exclusive_write", 00:20:58.962 "zoned": false, 00:20:58.962 "supported_io_types": { 00:20:58.962 "read": true, 00:20:58.962 "write": true, 00:20:58.962 "unmap": true, 00:20:58.962 "flush": true, 00:20:58.962 "reset": true, 00:20:58.962 "nvme_admin": false, 00:20:58.962 "nvme_io": false, 00:20:58.962 "nvme_io_md": false, 00:20:58.962 "write_zeroes": true, 00:20:58.962 "zcopy": true, 00:20:58.962 "get_zone_info": false, 00:20:58.962 "zone_management": false, 00:20:58.962 "zone_append": false, 00:20:58.962 "compare": false, 00:20:58.962 "compare_and_write": false, 00:20:58.962 "abort": true, 00:20:58.962 "seek_hole": false, 00:20:58.962 "seek_data": false, 00:20:58.962 "copy": true, 00:20:58.962 "nvme_iov_md": false 00:20:58.962 }, 00:20:58.962 "memory_domains": [ 00:20:58.962 { 00:20:58.962 "dma_device_id": "system", 00:20:58.962 "dma_device_type": 1 00:20:58.962 }, 00:20:58.962 { 00:20:58.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.962 "dma_device_type": 2 00:20:58.962 } 00:20:58.962 ], 00:20:58.962 "driver_specific": {} 00:20:58.962 }' 00:20:58.962 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.962 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.962 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:20:58.962 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.962 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.962 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:59.222 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:59.481 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:59.481 "name": "BaseBdev3", 00:20:59.481 "aliases": [ 00:20:59.481 "2d0c779e-4324-42b3-994a-d9006dcc6103" 00:20:59.481 ], 00:20:59.481 "product_name": "Malloc disk", 00:20:59.481 "block_size": 512, 00:20:59.481 "num_blocks": 65536, 00:20:59.481 "uuid": "2d0c779e-4324-42b3-994a-d9006dcc6103", 00:20:59.481 "assigned_rate_limits": { 00:20:59.481 "rw_ios_per_sec": 0, 00:20:59.481 "rw_mbytes_per_sec": 0, 00:20:59.481 "r_mbytes_per_sec": 0, 00:20:59.481 "w_mbytes_per_sec": 0 00:20:59.481 }, 00:20:59.481 "claimed": true, 00:20:59.481 "claim_type": "exclusive_write", 00:20:59.481 "zoned": false, 00:20:59.481 "supported_io_types": { 00:20:59.481 "read": true, 00:20:59.481 "write": true, 00:20:59.481 "unmap": true, 00:20:59.481 "flush": true, 00:20:59.481 "reset": true, 00:20:59.481 "nvme_admin": false, 00:20:59.481 "nvme_io": false, 00:20:59.481 "nvme_io_md": false, 00:20:59.481 "write_zeroes": true, 00:20:59.481 "zcopy": true, 00:20:59.481 "get_zone_info": false, 00:20:59.481 "zone_management": false, 00:20:59.481 "zone_append": false, 00:20:59.481 "compare": false, 00:20:59.481 "compare_and_write": false, 00:20:59.481 "abort": true, 00:20:59.481 "seek_hole": false, 00:20:59.481 "seek_data": false, 00:20:59.481 "copy": true, 00:20:59.481 "nvme_iov_md": false 00:20:59.481 }, 00:20:59.481 "memory_domains": [ 00:20:59.481 { 00:20:59.481 "dma_device_id": "system", 00:20:59.481 "dma_device_type": 1 00:20:59.481 }, 00:20:59.481 { 00:20:59.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.481 "dma_device_type": 2 00:20:59.481 } 00:20:59.481 ], 00:20:59.481 "driver_specific": {} 00:20:59.481 }' 00:20:59.481 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:59.481 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:59.481 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:59.481 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:59.740 21:34:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:59.740 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:59.740 21:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.741 21:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.741 21:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:59.741 21:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:00.000 21:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:00.000 21:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:00.000 21:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:00.000 [2024-07-15 21:34:33.355042] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:00.000 [2024-07-15 21:34:33.355146] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.000 [2024-07-15 21:34:33.355240] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.000 [2024-07-15 21:34:33.355320] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.000 [2024-07-15 21:34:33.355338] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:21:00.000 21:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 130088 00:21:00.000 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 130088 ']' 00:21:00.000 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 130088 00:21:00.000 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:21:00.259 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.259 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130088 00:21:00.259 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:00.259 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:00.259 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130088' 00:21:00.259 killing process with pid 130088 00:21:00.259 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 130088 00:21:00.259 [2024-07-15 21:34:33.395691] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:00.259 21:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 130088 00:21:00.517 [2024-07-15 21:34:33.670985] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:01.896 ************************************ 00:21:01.896 END TEST raid_state_function_test_sb 00:21:01.896 ************************************ 00:21:01.896 21:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:01.896 00:21:01.896 real 0m26.879s 00:21:01.896 user 
0m49.650s 00:21:01.896 sys 0m3.312s 00:21:01.896 21:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.896 21:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.896 21:34:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:01.896 21:34:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:21:01.896 21:34:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:01.896 21:34:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.896 21:34:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:01.896 ************************************ 00:21:01.896 START TEST raid_superblock_test 00:21:01.896 ************************************ 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:21:01.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
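The trace that follows starts a dedicated bdev_svc application for the superblock test, pointing it at its own UNIX-domain RPC socket with bdev_raid debug logging enabled, and then blocks until that socket answers RPCs. Outside the autotest helpers the same startup could look roughly like this (waitforlisten is an autotest function; the polling loop below is an assumed stand-in for it, while the binary and socket paths are the ones used in this run):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # stand-in for waitforlisten: poll until the RPC server behind the socket responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
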
00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=131092 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 131092 /var/tmp/spdk-raid.sock 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 131092 ']' 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.896 21:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.896 [2024-07-15 21:34:35.000139] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:21:01.896 [2024-07-15 21:34:35.000370] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131092 ] 00:21:01.896 [2024-07-15 21:34:35.164920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.155 [2024-07-15 21:34:35.399736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.414 [2024-07-15 21:34:35.627447] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.414 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:02.672 malloc1 00:21:02.672 21:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:02.930 [2024-07-15 21:34:36.168070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:21:02.930 [2024-07-15 21:34:36.168278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.930 [2024-07-15 21:34:36.168327] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:02.930 [2024-07-15 21:34:36.168365] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.930 [2024-07-15 21:34:36.170746] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.930 [2024-07-15 21:34:36.170831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:02.930 pt1 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.930 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:03.188 malloc2 00:21:03.188 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:03.447 [2024-07-15 21:34:36.581799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:03.447 [2024-07-15 21:34:36.582009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.447 [2024-07-15 21:34:36.582063] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:03.447 [2024-07-15 21:34:36.582101] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.447 [2024-07-15 21:34:36.584354] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.447 [2024-07-15 21:34:36.584436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:03.447 pt2 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:03.447 malloc3 00:21:03.447 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:03.704 [2024-07-15 21:34:36.981537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:03.704 [2024-07-15 21:34:36.981709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.704 [2024-07-15 21:34:36.981752] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:03.704 [2024-07-15 21:34:36.981809] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.704 [2024-07-15 21:34:36.983686] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.704 [2024-07-15 21:34:36.983767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:03.704 pt3 00:21:03.704 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:03.704 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:03.704 21:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:03.962 [2024-07-15 21:34:37.165266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:03.962 [2024-07-15 21:34:37.166934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:03.962 [2024-07-15 21:34:37.167032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:03.962 [2024-07-15 21:34:37.167218] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:03.962 [2024-07-15 21:34:37.167259] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:03.962 [2024-07-15 21:34:37.167413] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:03.962 [2024-07-15 21:34:37.167761] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:03.962 [2024-07-15 21:34:37.167801] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:03.962 [2024-07-15 21:34:37.167978] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.962 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.220 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.220 "name": "raid_bdev1", 00:21:04.220 "uuid": "8b6fe9b0-8494-43d5-bd15-88722300134e", 00:21:04.220 "strip_size_kb": 64, 00:21:04.220 "state": "online", 00:21:04.220 "raid_level": "concat", 00:21:04.220 "superblock": true, 00:21:04.220 "num_base_bdevs": 3, 00:21:04.220 "num_base_bdevs_discovered": 3, 00:21:04.220 "num_base_bdevs_operational": 3, 00:21:04.220 "base_bdevs_list": [ 00:21:04.220 { 00:21:04.220 "name": "pt1", 00:21:04.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:04.220 "is_configured": true, 00:21:04.220 "data_offset": 2048, 00:21:04.220 "data_size": 63488 00:21:04.220 }, 00:21:04.220 { 00:21:04.220 "name": "pt2", 00:21:04.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.220 "is_configured": true, 00:21:04.220 "data_offset": 2048, 00:21:04.220 "data_size": 63488 00:21:04.220 }, 00:21:04.220 { 00:21:04.220 "name": "pt3", 00:21:04.220 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:04.220 "is_configured": true, 00:21:04.220 "data_offset": 2048, 00:21:04.220 "data_size": 63488 00:21:04.220 } 00:21:04.220 ] 00:21:04.220 }' 00:21:04.220 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.220 21:34:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.794 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:04.794 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:04.794 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:04.794 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:04.794 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:04.794 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:04.794 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:04.794 21:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:04.794 [2024-07-15 21:34:38.088045] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.794 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:04.794 "name": "raid_bdev1", 00:21:04.794 "aliases": [ 00:21:04.794 "8b6fe9b0-8494-43d5-bd15-88722300134e" 00:21:04.794 ], 00:21:04.794 "product_name": "Raid Volume", 00:21:04.794 "block_size": 512, 00:21:04.794 "num_blocks": 190464, 00:21:04.794 "uuid": "8b6fe9b0-8494-43d5-bd15-88722300134e", 00:21:04.794 "assigned_rate_limits": { 00:21:04.794 
"rw_ios_per_sec": 0, 00:21:04.794 "rw_mbytes_per_sec": 0, 00:21:04.794 "r_mbytes_per_sec": 0, 00:21:04.794 "w_mbytes_per_sec": 0 00:21:04.794 }, 00:21:04.794 "claimed": false, 00:21:04.795 "zoned": false, 00:21:04.795 "supported_io_types": { 00:21:04.795 "read": true, 00:21:04.795 "write": true, 00:21:04.795 "unmap": true, 00:21:04.795 "flush": true, 00:21:04.795 "reset": true, 00:21:04.795 "nvme_admin": false, 00:21:04.795 "nvme_io": false, 00:21:04.795 "nvme_io_md": false, 00:21:04.795 "write_zeroes": true, 00:21:04.795 "zcopy": false, 00:21:04.795 "get_zone_info": false, 00:21:04.795 "zone_management": false, 00:21:04.795 "zone_append": false, 00:21:04.795 "compare": false, 00:21:04.795 "compare_and_write": false, 00:21:04.795 "abort": false, 00:21:04.795 "seek_hole": false, 00:21:04.795 "seek_data": false, 00:21:04.795 "copy": false, 00:21:04.795 "nvme_iov_md": false 00:21:04.795 }, 00:21:04.795 "memory_domains": [ 00:21:04.795 { 00:21:04.795 "dma_device_id": "system", 00:21:04.795 "dma_device_type": 1 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.795 "dma_device_type": 2 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "dma_device_id": "system", 00:21:04.795 "dma_device_type": 1 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.795 "dma_device_type": 2 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "dma_device_id": "system", 00:21:04.795 "dma_device_type": 1 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.795 "dma_device_type": 2 00:21:04.795 } 00:21:04.795 ], 00:21:04.795 "driver_specific": { 00:21:04.795 "raid": { 00:21:04.795 "uuid": "8b6fe9b0-8494-43d5-bd15-88722300134e", 00:21:04.795 "strip_size_kb": 64, 00:21:04.795 "state": "online", 00:21:04.795 "raid_level": "concat", 00:21:04.795 "superblock": true, 00:21:04.795 "num_base_bdevs": 3, 00:21:04.795 "num_base_bdevs_discovered": 3, 00:21:04.795 "num_base_bdevs_operational": 3, 00:21:04.795 "base_bdevs_list": [ 00:21:04.795 { 00:21:04.795 "name": "pt1", 00:21:04.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:04.795 "is_configured": true, 00:21:04.795 "data_offset": 2048, 00:21:04.795 "data_size": 63488 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "name": "pt2", 00:21:04.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.795 "is_configured": true, 00:21:04.795 "data_offset": 2048, 00:21:04.795 "data_size": 63488 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "name": "pt3", 00:21:04.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:04.795 "is_configured": true, 00:21:04.795 "data_offset": 2048, 00:21:04.795 "data_size": 63488 00:21:04.795 } 00:21:04.795 ] 00:21:04.795 } 00:21:04.795 } 00:21:04.795 }' 00:21:04.795 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:04.795 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:04.795 pt2 00:21:04.795 pt3' 00:21:04.795 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.795 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:04.795 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.064 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:21:05.064 "name": "pt1", 00:21:05.064 "aliases": [ 00:21:05.064 "00000000-0000-0000-0000-000000000001" 00:21:05.064 ], 00:21:05.064 "product_name": "passthru", 00:21:05.064 "block_size": 512, 00:21:05.064 "num_blocks": 65536, 00:21:05.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:05.064 "assigned_rate_limits": { 00:21:05.064 "rw_ios_per_sec": 0, 00:21:05.064 "rw_mbytes_per_sec": 0, 00:21:05.064 "r_mbytes_per_sec": 0, 00:21:05.064 "w_mbytes_per_sec": 0 00:21:05.064 }, 00:21:05.064 "claimed": true, 00:21:05.064 "claim_type": "exclusive_write", 00:21:05.064 "zoned": false, 00:21:05.064 "supported_io_types": { 00:21:05.064 "read": true, 00:21:05.064 "write": true, 00:21:05.064 "unmap": true, 00:21:05.064 "flush": true, 00:21:05.064 "reset": true, 00:21:05.064 "nvme_admin": false, 00:21:05.064 "nvme_io": false, 00:21:05.064 "nvme_io_md": false, 00:21:05.064 "write_zeroes": true, 00:21:05.064 "zcopy": true, 00:21:05.064 "get_zone_info": false, 00:21:05.064 "zone_management": false, 00:21:05.064 "zone_append": false, 00:21:05.064 "compare": false, 00:21:05.064 "compare_and_write": false, 00:21:05.064 "abort": true, 00:21:05.064 "seek_hole": false, 00:21:05.064 "seek_data": false, 00:21:05.064 "copy": true, 00:21:05.064 "nvme_iov_md": false 00:21:05.064 }, 00:21:05.064 "memory_domains": [ 00:21:05.064 { 00:21:05.064 "dma_device_id": "system", 00:21:05.064 "dma_device_type": 1 00:21:05.064 }, 00:21:05.064 { 00:21:05.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.064 "dma_device_type": 2 00:21:05.064 } 00:21:05.064 ], 00:21:05.064 "driver_specific": { 00:21:05.064 "passthru": { 00:21:05.064 "name": "pt1", 00:21:05.064 "base_bdev_name": "malloc1" 00:21:05.064 } 00:21:05.064 } 00:21:05.064 }' 00:21:05.064 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.064 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.064 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.064 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.323 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.323 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.323 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.323 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.323 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.323 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.323 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.582 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.582 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:05.582 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:05.582 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.582 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.582 "name": "pt2", 00:21:05.582 "aliases": [ 00:21:05.582 "00000000-0000-0000-0000-000000000002" 00:21:05.582 ], 00:21:05.582 "product_name": 
"passthru", 00:21:05.582 "block_size": 512, 00:21:05.582 "num_blocks": 65536, 00:21:05.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.582 "assigned_rate_limits": { 00:21:05.582 "rw_ios_per_sec": 0, 00:21:05.582 "rw_mbytes_per_sec": 0, 00:21:05.582 "r_mbytes_per_sec": 0, 00:21:05.582 "w_mbytes_per_sec": 0 00:21:05.582 }, 00:21:05.582 "claimed": true, 00:21:05.582 "claim_type": "exclusive_write", 00:21:05.582 "zoned": false, 00:21:05.582 "supported_io_types": { 00:21:05.582 "read": true, 00:21:05.582 "write": true, 00:21:05.582 "unmap": true, 00:21:05.582 "flush": true, 00:21:05.582 "reset": true, 00:21:05.582 "nvme_admin": false, 00:21:05.582 "nvme_io": false, 00:21:05.582 "nvme_io_md": false, 00:21:05.582 "write_zeroes": true, 00:21:05.582 "zcopy": true, 00:21:05.582 "get_zone_info": false, 00:21:05.582 "zone_management": false, 00:21:05.582 "zone_append": false, 00:21:05.582 "compare": false, 00:21:05.582 "compare_and_write": false, 00:21:05.582 "abort": true, 00:21:05.582 "seek_hole": false, 00:21:05.582 "seek_data": false, 00:21:05.582 "copy": true, 00:21:05.582 "nvme_iov_md": false 00:21:05.582 }, 00:21:05.582 "memory_domains": [ 00:21:05.582 { 00:21:05.582 "dma_device_id": "system", 00:21:05.582 "dma_device_type": 1 00:21:05.582 }, 00:21:05.582 { 00:21:05.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.582 "dma_device_type": 2 00:21:05.582 } 00:21:05.582 ], 00:21:05.582 "driver_specific": { 00:21:05.582 "passthru": { 00:21:05.582 "name": "pt2", 00:21:05.582 "base_bdev_name": "malloc2" 00:21:05.582 } 00:21:05.582 } 00:21:05.582 }' 00:21:05.582 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.841 21:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.841 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.841 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.841 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.841 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.841 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.841 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.100 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.100 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.100 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.100 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.100 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:06.100 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:06.100 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:06.358 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:06.358 "name": "pt3", 00:21:06.358 "aliases": [ 00:21:06.358 "00000000-0000-0000-0000-000000000003" 00:21:06.358 ], 00:21:06.358 "product_name": "passthru", 00:21:06.358 "block_size": 512, 00:21:06.358 "num_blocks": 65536, 00:21:06.358 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:06.358 
"assigned_rate_limits": { 00:21:06.358 "rw_ios_per_sec": 0, 00:21:06.358 "rw_mbytes_per_sec": 0, 00:21:06.358 "r_mbytes_per_sec": 0, 00:21:06.358 "w_mbytes_per_sec": 0 00:21:06.358 }, 00:21:06.358 "claimed": true, 00:21:06.358 "claim_type": "exclusive_write", 00:21:06.358 "zoned": false, 00:21:06.358 "supported_io_types": { 00:21:06.358 "read": true, 00:21:06.358 "write": true, 00:21:06.358 "unmap": true, 00:21:06.358 "flush": true, 00:21:06.358 "reset": true, 00:21:06.358 "nvme_admin": false, 00:21:06.358 "nvme_io": false, 00:21:06.358 "nvme_io_md": false, 00:21:06.358 "write_zeroes": true, 00:21:06.358 "zcopy": true, 00:21:06.358 "get_zone_info": false, 00:21:06.358 "zone_management": false, 00:21:06.358 "zone_append": false, 00:21:06.358 "compare": false, 00:21:06.358 "compare_and_write": false, 00:21:06.358 "abort": true, 00:21:06.358 "seek_hole": false, 00:21:06.358 "seek_data": false, 00:21:06.358 "copy": true, 00:21:06.358 "nvme_iov_md": false 00:21:06.358 }, 00:21:06.358 "memory_domains": [ 00:21:06.358 { 00:21:06.358 "dma_device_id": "system", 00:21:06.358 "dma_device_type": 1 00:21:06.358 }, 00:21:06.358 { 00:21:06.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.358 "dma_device_type": 2 00:21:06.358 } 00:21:06.358 ], 00:21:06.358 "driver_specific": { 00:21:06.358 "passthru": { 00:21:06.358 "name": "pt3", 00:21:06.358 "base_bdev_name": "malloc3" 00:21:06.358 } 00:21:06.358 } 00:21:06.358 }' 00:21:06.358 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.358 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.358 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:06.358 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.358 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:06.617 21:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:06.876 [2024-07-15 21:34:40.112720] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.876 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8b6fe9b0-8494-43d5-bd15-88722300134e 00:21:06.876 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8b6fe9b0-8494-43d5-bd15-88722300134e ']' 00:21:06.876 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:07.135 [2024-07-15 21:34:40.276126] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:21:07.135 [2024-07-15 21:34:40.276247] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.135 [2024-07-15 21:34:40.276359] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.135 [2024-07-15 21:34:40.276443] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.135 [2024-07-15 21:34:40.276473] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:07.135 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.135 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:07.135 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:07.135 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:07.135 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.135 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:07.394 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.394 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:07.653 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.653 21:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:07.653 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:07.653 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:07.912 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:08.171 [2024-07-15 21:34:41.374286] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:08.171 [2024-07-15 21:34:41.376774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:08.171 [2024-07-15 21:34:41.376929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:08.171 [2024-07-15 21:34:41.377027] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:08.171 [2024-07-15 21:34:41.377181] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:08.171 [2024-07-15 21:34:41.377269] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:08.171 [2024-07-15 21:34:41.377365] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.171 [2024-07-15 21:34:41.377404] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:21:08.171 request: 00:21:08.171 { 00:21:08.171 "name": "raid_bdev1", 00:21:08.171 "raid_level": "concat", 00:21:08.171 "base_bdevs": [ 00:21:08.171 "malloc1", 00:21:08.171 "malloc2", 00:21:08.171 "malloc3" 00:21:08.171 ], 00:21:08.171 "strip_size_kb": 64, 00:21:08.171 "superblock": false, 00:21:08.171 "method": "bdev_raid_create", 00:21:08.171 "req_id": 1 00:21:08.171 } 00:21:08.171 Got JSON-RPC error response 00:21:08.171 response: 00:21:08.171 { 00:21:08.171 "code": -17, 00:21:08.171 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:08.171 } 00:21:08.171 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:08.171 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:08.171 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:08.171 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:08.171 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.171 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
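From here the trace re-registers the passthru members one by one (with a short detour that adds and then removes pt2 to exercise base-bdev removal while the array is still configuring). Because each underlying malloc bdev still carries the superblock written when raid_bdev1 was created with -s, the raid module claims every re-created ptN on examine and reassembles the volume on its own, taking it back to online once all three members are present; no second bdev_raid_create is issued. Condensed into a plain loop, as a sketch (rpc.py path, socket, bdev names and UUIDs are the ones from this run):

  for i in 1 2 3; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # raid_bdev1 reports "online" again once the last member is claimed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1").state'
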
00:21:08.430 [2024-07-15 21:34:41.741504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:08.430 [2024-07-15 21:34:41.741720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.430 [2024-07-15 21:34:41.741773] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:08.430 [2024-07-15 21:34:41.741813] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.430 [2024-07-15 21:34:41.744155] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.430 [2024-07-15 21:34:41.744239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:08.430 [2024-07-15 21:34:41.744404] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:08.430 [2024-07-15 21:34:41.744487] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:08.430 pt1 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.430 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.689 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.689 "name": "raid_bdev1", 00:21:08.689 "uuid": "8b6fe9b0-8494-43d5-bd15-88722300134e", 00:21:08.689 "strip_size_kb": 64, 00:21:08.689 "state": "configuring", 00:21:08.689 "raid_level": "concat", 00:21:08.689 "superblock": true, 00:21:08.689 "num_base_bdevs": 3, 00:21:08.689 "num_base_bdevs_discovered": 1, 00:21:08.689 "num_base_bdevs_operational": 3, 00:21:08.689 "base_bdevs_list": [ 00:21:08.689 { 00:21:08.689 "name": "pt1", 00:21:08.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:08.689 "is_configured": true, 00:21:08.689 "data_offset": 2048, 00:21:08.689 "data_size": 63488 00:21:08.689 }, 00:21:08.689 { 00:21:08.689 "name": null, 00:21:08.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.689 "is_configured": false, 00:21:08.689 "data_offset": 2048, 00:21:08.689 "data_size": 63488 00:21:08.689 }, 00:21:08.689 { 00:21:08.689 "name": null, 00:21:08.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:08.689 "is_configured": false, 00:21:08.689 "data_offset": 2048, 00:21:08.689 "data_size": 63488 
00:21:08.689 } 00:21:08.689 ] 00:21:08.689 }' 00:21:08.689 21:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.689 21:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.257 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:21:09.257 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.516 [2024-07-15 21:34:42.711831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.516 [2024-07-15 21:34:42.712042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.516 [2024-07-15 21:34:42.712095] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:09.516 [2024-07-15 21:34:42.712133] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.516 [2024-07-15 21:34:42.712695] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.516 [2024-07-15 21:34:42.712770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.516 [2024-07-15 21:34:42.712921] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:09.516 [2024-07-15 21:34:42.712978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.516 pt2 00:21:09.516 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:09.775 [2024-07-15 21:34:42.895537] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:09.775 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:09.775 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.775 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.775 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:09.775 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.775 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:09.776 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.776 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.776 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.776 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.776 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.776 21:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.776 21:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.776 "name": "raid_bdev1", 00:21:09.776 "uuid": "8b6fe9b0-8494-43d5-bd15-88722300134e", 00:21:09.776 "strip_size_kb": 64, 00:21:09.776 "state": "configuring", 00:21:09.776 "raid_level": 
"concat", 00:21:09.776 "superblock": true, 00:21:09.776 "num_base_bdevs": 3, 00:21:09.776 "num_base_bdevs_discovered": 1, 00:21:09.776 "num_base_bdevs_operational": 3, 00:21:09.776 "base_bdevs_list": [ 00:21:09.776 { 00:21:09.776 "name": "pt1", 00:21:09.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:09.776 "is_configured": true, 00:21:09.776 "data_offset": 2048, 00:21:09.776 "data_size": 63488 00:21:09.776 }, 00:21:09.776 { 00:21:09.776 "name": null, 00:21:09.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:09.776 "is_configured": false, 00:21:09.776 "data_offset": 2048, 00:21:09.776 "data_size": 63488 00:21:09.776 }, 00:21:09.776 { 00:21:09.776 "name": null, 00:21:09.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:09.776 "is_configured": false, 00:21:09.776 "data_offset": 2048, 00:21:09.776 "data_size": 63488 00:21:09.776 } 00:21:09.776 ] 00:21:09.776 }' 00:21:09.776 21:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.776 21:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.358 21:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:10.358 21:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:10.358 21:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:10.616 [2024-07-15 21:34:43.885756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:10.616 [2024-07-15 21:34:43.885971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.616 [2024-07-15 21:34:43.886034] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:10.616 [2024-07-15 21:34:43.886077] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.616 [2024-07-15 21:34:43.886624] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.617 [2024-07-15 21:34:43.886696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:10.617 [2024-07-15 21:34:43.886865] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:10.617 [2024-07-15 21:34:43.886911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:10.617 pt2 00:21:10.617 21:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:10.617 21:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:10.617 21:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:10.910 [2024-07-15 21:34:44.045470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:10.910 [2024-07-15 21:34:44.045656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.910 [2024-07-15 21:34:44.045700] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:10.910 [2024-07-15 21:34:44.045739] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.910 [2024-07-15 21:34:44.046284] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:21:10.910 [2024-07-15 21:34:44.046351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:10.910 [2024-07-15 21:34:44.046510] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:10.910 [2024-07-15 21:34:44.046559] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:10.910 [2024-07-15 21:34:44.046708] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:21:10.910 [2024-07-15 21:34:44.046737] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:10.910 [2024-07-15 21:34:44.046857] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:10.910 [2024-07-15 21:34:44.047170] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:21:10.910 [2024-07-15 21:34:44.047209] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:21:10.910 [2024-07-15 21:34:44.047372] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.910 pt3 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.910 "name": "raid_bdev1", 00:21:10.910 "uuid": "8b6fe9b0-8494-43d5-bd15-88722300134e", 00:21:10.910 "strip_size_kb": 64, 00:21:10.910 "state": "online", 00:21:10.910 "raid_level": "concat", 00:21:10.910 "superblock": true, 00:21:10.910 "num_base_bdevs": 3, 00:21:10.910 "num_base_bdevs_discovered": 3, 00:21:10.910 "num_base_bdevs_operational": 3, 00:21:10.910 "base_bdevs_list": [ 00:21:10.910 { 00:21:10.910 "name": "pt1", 00:21:10.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.910 "is_configured": true, 00:21:10.910 "data_offset": 2048, 00:21:10.910 "data_size": 63488 00:21:10.910 }, 00:21:10.910 { 00:21:10.910 "name": "pt2", 00:21:10.910 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:21:10.910 "is_configured": true, 00:21:10.910 "data_offset": 2048, 00:21:10.910 "data_size": 63488 00:21:10.910 }, 00:21:10.910 { 00:21:10.910 "name": "pt3", 00:21:10.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.910 "is_configured": true, 00:21:10.910 "data_offset": 2048, 00:21:10.910 "data_size": 63488 00:21:10.910 } 00:21:10.910 ] 00:21:10.910 }' 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.910 21:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.848 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:11.848 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:11.848 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:11.848 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:11.848 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:11.848 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:11.848 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:11.848 21:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:11.848 [2024-07-15 21:34:45.008026] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.848 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:11.848 "name": "raid_bdev1", 00:21:11.848 "aliases": [ 00:21:11.848 "8b6fe9b0-8494-43d5-bd15-88722300134e" 00:21:11.848 ], 00:21:11.848 "product_name": "Raid Volume", 00:21:11.848 "block_size": 512, 00:21:11.848 "num_blocks": 190464, 00:21:11.848 "uuid": "8b6fe9b0-8494-43d5-bd15-88722300134e", 00:21:11.848 "assigned_rate_limits": { 00:21:11.848 "rw_ios_per_sec": 0, 00:21:11.848 "rw_mbytes_per_sec": 0, 00:21:11.848 "r_mbytes_per_sec": 0, 00:21:11.848 "w_mbytes_per_sec": 0 00:21:11.848 }, 00:21:11.848 "claimed": false, 00:21:11.848 "zoned": false, 00:21:11.848 "supported_io_types": { 00:21:11.848 "read": true, 00:21:11.848 "write": true, 00:21:11.848 "unmap": true, 00:21:11.848 "flush": true, 00:21:11.848 "reset": true, 00:21:11.848 "nvme_admin": false, 00:21:11.848 "nvme_io": false, 00:21:11.848 "nvme_io_md": false, 00:21:11.848 "write_zeroes": true, 00:21:11.848 "zcopy": false, 00:21:11.848 "get_zone_info": false, 00:21:11.848 "zone_management": false, 00:21:11.848 "zone_append": false, 00:21:11.848 "compare": false, 00:21:11.848 "compare_and_write": false, 00:21:11.848 "abort": false, 00:21:11.848 "seek_hole": false, 00:21:11.848 "seek_data": false, 00:21:11.848 "copy": false, 00:21:11.848 "nvme_iov_md": false 00:21:11.848 }, 00:21:11.848 "memory_domains": [ 00:21:11.848 { 00:21:11.848 "dma_device_id": "system", 00:21:11.848 "dma_device_type": 1 00:21:11.848 }, 00:21:11.848 { 00:21:11.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.848 "dma_device_type": 2 00:21:11.848 }, 00:21:11.848 { 00:21:11.848 "dma_device_id": "system", 00:21:11.848 "dma_device_type": 1 00:21:11.848 }, 00:21:11.848 { 00:21:11.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.848 "dma_device_type": 2 00:21:11.848 }, 00:21:11.848 { 00:21:11.848 "dma_device_id": "system", 00:21:11.848 
"dma_device_type": 1 00:21:11.848 }, 00:21:11.848 { 00:21:11.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.848 "dma_device_type": 2 00:21:11.848 } 00:21:11.848 ], 00:21:11.848 "driver_specific": { 00:21:11.848 "raid": { 00:21:11.848 "uuid": "8b6fe9b0-8494-43d5-bd15-88722300134e", 00:21:11.848 "strip_size_kb": 64, 00:21:11.848 "state": "online", 00:21:11.848 "raid_level": "concat", 00:21:11.848 "superblock": true, 00:21:11.848 "num_base_bdevs": 3, 00:21:11.848 "num_base_bdevs_discovered": 3, 00:21:11.848 "num_base_bdevs_operational": 3, 00:21:11.848 "base_bdevs_list": [ 00:21:11.848 { 00:21:11.848 "name": "pt1", 00:21:11.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.848 "is_configured": true, 00:21:11.848 "data_offset": 2048, 00:21:11.848 "data_size": 63488 00:21:11.848 }, 00:21:11.848 { 00:21:11.848 "name": "pt2", 00:21:11.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:11.848 "is_configured": true, 00:21:11.848 "data_offset": 2048, 00:21:11.848 "data_size": 63488 00:21:11.848 }, 00:21:11.848 { 00:21:11.848 "name": "pt3", 00:21:11.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:11.848 "is_configured": true, 00:21:11.848 "data_offset": 2048, 00:21:11.848 "data_size": 63488 00:21:11.848 } 00:21:11.848 ] 00:21:11.848 } 00:21:11.848 } 00:21:11.848 }' 00:21:11.848 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:11.848 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:11.848 pt2 00:21:11.848 pt3' 00:21:11.848 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.848 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:11.848 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.108 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.108 "name": "pt1", 00:21:12.108 "aliases": [ 00:21:12.108 "00000000-0000-0000-0000-000000000001" 00:21:12.108 ], 00:21:12.108 "product_name": "passthru", 00:21:12.108 "block_size": 512, 00:21:12.108 "num_blocks": 65536, 00:21:12.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:12.108 "assigned_rate_limits": { 00:21:12.108 "rw_ios_per_sec": 0, 00:21:12.108 "rw_mbytes_per_sec": 0, 00:21:12.108 "r_mbytes_per_sec": 0, 00:21:12.108 "w_mbytes_per_sec": 0 00:21:12.108 }, 00:21:12.108 "claimed": true, 00:21:12.108 "claim_type": "exclusive_write", 00:21:12.108 "zoned": false, 00:21:12.108 "supported_io_types": { 00:21:12.108 "read": true, 00:21:12.108 "write": true, 00:21:12.108 "unmap": true, 00:21:12.108 "flush": true, 00:21:12.108 "reset": true, 00:21:12.108 "nvme_admin": false, 00:21:12.108 "nvme_io": false, 00:21:12.108 "nvme_io_md": false, 00:21:12.108 "write_zeroes": true, 00:21:12.108 "zcopy": true, 00:21:12.108 "get_zone_info": false, 00:21:12.108 "zone_management": false, 00:21:12.108 "zone_append": false, 00:21:12.108 "compare": false, 00:21:12.108 "compare_and_write": false, 00:21:12.108 "abort": true, 00:21:12.108 "seek_hole": false, 00:21:12.108 "seek_data": false, 00:21:12.108 "copy": true, 00:21:12.108 "nvme_iov_md": false 00:21:12.108 }, 00:21:12.108 "memory_domains": [ 00:21:12.108 { 00:21:12.108 "dma_device_id": "system", 00:21:12.108 "dma_device_type": 1 00:21:12.108 }, 00:21:12.108 { 00:21:12.108 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.108 "dma_device_type": 2 00:21:12.108 } 00:21:12.108 ], 00:21:12.109 "driver_specific": { 00:21:12.109 "passthru": { 00:21:12.109 "name": "pt1", 00:21:12.109 "base_bdev_name": "malloc1" 00:21:12.109 } 00:21:12.109 } 00:21:12.109 }' 00:21:12.109 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.109 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.109 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.109 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.109 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.368 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.368 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.368 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.368 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.368 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.368 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.369 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.369 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.369 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:12.369 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.628 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.628 "name": "pt2", 00:21:12.628 "aliases": [ 00:21:12.628 "00000000-0000-0000-0000-000000000002" 00:21:12.628 ], 00:21:12.628 "product_name": "passthru", 00:21:12.628 "block_size": 512, 00:21:12.628 "num_blocks": 65536, 00:21:12.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:12.628 "assigned_rate_limits": { 00:21:12.628 "rw_ios_per_sec": 0, 00:21:12.628 "rw_mbytes_per_sec": 0, 00:21:12.628 "r_mbytes_per_sec": 0, 00:21:12.628 "w_mbytes_per_sec": 0 00:21:12.628 }, 00:21:12.628 "claimed": true, 00:21:12.628 "claim_type": "exclusive_write", 00:21:12.628 "zoned": false, 00:21:12.628 "supported_io_types": { 00:21:12.628 "read": true, 00:21:12.628 "write": true, 00:21:12.628 "unmap": true, 00:21:12.628 "flush": true, 00:21:12.628 "reset": true, 00:21:12.628 "nvme_admin": false, 00:21:12.628 "nvme_io": false, 00:21:12.628 "nvme_io_md": false, 00:21:12.628 "write_zeroes": true, 00:21:12.628 "zcopy": true, 00:21:12.628 "get_zone_info": false, 00:21:12.628 "zone_management": false, 00:21:12.628 "zone_append": false, 00:21:12.628 "compare": false, 00:21:12.628 "compare_and_write": false, 00:21:12.628 "abort": true, 00:21:12.628 "seek_hole": false, 00:21:12.628 "seek_data": false, 00:21:12.628 "copy": true, 00:21:12.628 "nvme_iov_md": false 00:21:12.628 }, 00:21:12.628 "memory_domains": [ 00:21:12.628 { 00:21:12.628 "dma_device_id": "system", 00:21:12.628 "dma_device_type": 1 00:21:12.628 }, 00:21:12.628 { 00:21:12.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.628 "dma_device_type": 2 00:21:12.628 } 00:21:12.628 ], 00:21:12.628 "driver_specific": { 00:21:12.628 
"passthru": { 00:21:12.628 "name": "pt2", 00:21:12.628 "base_bdev_name": "malloc2" 00:21:12.628 } 00:21:12.628 } 00:21:12.628 }' 00:21:12.628 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.628 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.628 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.628 21:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.888 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.888 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.888 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.888 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.888 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.888 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.147 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.147 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.147 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:13.147 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.147 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:13.147 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.147 "name": "pt3", 00:21:13.147 "aliases": [ 00:21:13.147 "00000000-0000-0000-0000-000000000003" 00:21:13.147 ], 00:21:13.147 "product_name": "passthru", 00:21:13.147 "block_size": 512, 00:21:13.147 "num_blocks": 65536, 00:21:13.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:13.147 "assigned_rate_limits": { 00:21:13.147 "rw_ios_per_sec": 0, 00:21:13.147 "rw_mbytes_per_sec": 0, 00:21:13.147 "r_mbytes_per_sec": 0, 00:21:13.147 "w_mbytes_per_sec": 0 00:21:13.147 }, 00:21:13.147 "claimed": true, 00:21:13.147 "claim_type": "exclusive_write", 00:21:13.147 "zoned": false, 00:21:13.147 "supported_io_types": { 00:21:13.147 "read": true, 00:21:13.147 "write": true, 00:21:13.147 "unmap": true, 00:21:13.147 "flush": true, 00:21:13.147 "reset": true, 00:21:13.147 "nvme_admin": false, 00:21:13.147 "nvme_io": false, 00:21:13.147 "nvme_io_md": false, 00:21:13.147 "write_zeroes": true, 00:21:13.147 "zcopy": true, 00:21:13.147 "get_zone_info": false, 00:21:13.147 "zone_management": false, 00:21:13.147 "zone_append": false, 00:21:13.147 "compare": false, 00:21:13.147 "compare_and_write": false, 00:21:13.147 "abort": true, 00:21:13.147 "seek_hole": false, 00:21:13.147 "seek_data": false, 00:21:13.147 "copy": true, 00:21:13.147 "nvme_iov_md": false 00:21:13.147 }, 00:21:13.147 "memory_domains": [ 00:21:13.147 { 00:21:13.147 "dma_device_id": "system", 00:21:13.147 "dma_device_type": 1 00:21:13.147 }, 00:21:13.147 { 00:21:13.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.147 "dma_device_type": 2 00:21:13.147 } 00:21:13.147 ], 00:21:13.147 "driver_specific": { 00:21:13.147 "passthru": { 00:21:13.147 "name": "pt3", 00:21:13.147 "base_bdev_name": "malloc3" 00:21:13.147 } 00:21:13.147 } 00:21:13.147 }' 00:21:13.147 21:34:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.407 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.407 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.407 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.407 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.407 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:13.407 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.407 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.666 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.666 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.666 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.666 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.666 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:13.666 21:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:13.926 [2024-07-15 21:34:47.096371] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8b6fe9b0-8494-43d5-bd15-88722300134e '!=' 8b6fe9b0-8494-43d5-bd15-88722300134e ']' 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 131092 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 131092 ']' 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 131092 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131092 00:21:13.926 killing process with pid 131092 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131092' 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 131092 00:21:13.926 21:34:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 131092 00:21:13.926 [2024-07-15 21:34:47.127047] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:13.926 [2024-07-15 21:34:47.127149] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.926 [2024-07-15 21:34:47.127221] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:13.926 [2024-07-15 21:34:47.127238] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:21:14.185 [2024-07-15 21:34:47.437996] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:15.617 21:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:15.617 00:21:15.617 real 0m13.714s 00:21:15.617 user 0m24.235s 00:21:15.617 sys 0m1.850s 00:21:15.617 21:34:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:15.617 ************************************ 00:21:15.617 END TEST raid_superblock_test 00:21:15.617 ************************************ 00:21:15.617 21:34:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.617 21:34:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:15.617 21:34:48 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:21:15.617 21:34:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:15.617 21:34:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.617 21:34:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.617 ************************************ 00:21:15.617 START TEST raid_read_error_test 00:21:15.617 ************************************ 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:15.617 21:34:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.e1fQOhvMBp 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=131578 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 131578 /var/tmp/spdk-raid.sock 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 131578 ']' 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:15.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.617 21:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.617 [2024-07-15 21:34:48.785721] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:21:15.617 [2024-07-15 21:34:48.785928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131578 ] 00:21:15.617 [2024-07-15 21:34:48.946693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.874 [2024-07-15 21:34:49.146306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.131 [2024-07-15 21:34:49.334513] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.389 21:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.389 21:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:16.389 21:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:16.389 21:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:16.648 BaseBdev1_malloc 00:21:16.648 21:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:16.648 true 00:21:16.648 21:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:16.906 [2024-07-15 21:34:50.190347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:16.906 [2024-07-15 21:34:50.190512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.906 [2024-07-15 21:34:50.190576] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:16.906 [2024-07-15 21:34:50.190616] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.906 [2024-07-15 21:34:50.192605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.906 [2024-07-15 21:34:50.192710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:16.906 BaseBdev1 00:21:16.906 21:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:16.906 21:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:17.165 BaseBdev2_malloc 00:21:17.165 21:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:17.448 true 00:21:17.448 21:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:17.448 [2024-07-15 21:34:50.795656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:17.448 [2024-07-15 21:34:50.795859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.448 [2024-07-15 21:34:50.795932] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:17.448 [2024-07-15 21:34:50.795979] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.448 [2024-07-15 21:34:50.797973] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.448 [2024-07-15 21:34:50.798079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:17.448 BaseBdev2 00:21:17.448 21:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:17.448 21:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:17.706 BaseBdev3_malloc 00:21:17.706 21:34:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:17.964 true 00:21:17.964 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:18.222 [2024-07-15 21:34:51.341422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:18.222 [2024-07-15 21:34:51.341588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.222 [2024-07-15 21:34:51.341657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:18.222 [2024-07-15 21:34:51.341706] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.222 [2024-07-15 21:34:51.343652] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.222 [2024-07-15 21:34:51.343774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:18.222 BaseBdev3 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:18.222 [2024-07-15 21:34:51.545124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.222 [2024-07-15 21:34:51.546930] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.222 [2024-07-15 21:34:51.547052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:18.222 [2024-07-15 21:34:51.547298] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:18.222 [2024-07-15 21:34:51.547351] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:18.222 [2024-07-15 21:34:51.547522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:18.222 [2024-07-15 21:34:51.547899] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:18.222 [2024-07-15 21:34:51.547950] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:18.222 [2024-07-15 21:34:51.548156] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:18.222 21:34:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.222 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.511 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.511 "name": "raid_bdev1", 00:21:18.511 "uuid": "4ad510d5-084d-4d05-9451-06b421819d9f", 00:21:18.511 "strip_size_kb": 64, 00:21:18.511 "state": "online", 00:21:18.511 "raid_level": "concat", 00:21:18.511 "superblock": true, 00:21:18.511 "num_base_bdevs": 3, 00:21:18.511 "num_base_bdevs_discovered": 3, 00:21:18.511 "num_base_bdevs_operational": 3, 00:21:18.511 "base_bdevs_list": [ 00:21:18.511 { 00:21:18.511 "name": "BaseBdev1", 00:21:18.511 "uuid": "36a49a35-e703-5fdd-8abd-7bcaa50d36aa", 00:21:18.511 "is_configured": true, 00:21:18.511 "data_offset": 2048, 00:21:18.511 "data_size": 63488 00:21:18.511 }, 00:21:18.511 { 00:21:18.511 "name": "BaseBdev2", 00:21:18.511 "uuid": "034ede78-8b08-5ae1-b3df-23a98e9bfb01", 00:21:18.511 "is_configured": true, 00:21:18.511 "data_offset": 2048, 00:21:18.511 "data_size": 63488 00:21:18.511 }, 00:21:18.511 { 00:21:18.511 "name": "BaseBdev3", 00:21:18.511 "uuid": "8687d7ab-538a-5a3a-b80c-4c849d1716e2", 00:21:18.511 "is_configured": true, 00:21:18.511 "data_offset": 2048, 00:21:18.511 "data_size": 63488 00:21:18.511 } 00:21:18.511 ] 00:21:18.511 }' 00:21:18.511 21:34:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.511 21:34:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.080 21:34:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:19.080 21:34:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:19.080 [2024-07-15 21:34:52.384805] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:20.016 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.274 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.533 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.533 "name": "raid_bdev1", 00:21:20.533 "uuid": "4ad510d5-084d-4d05-9451-06b421819d9f", 00:21:20.533 "strip_size_kb": 64, 00:21:20.533 "state": "online", 00:21:20.533 "raid_level": "concat", 00:21:20.533 "superblock": true, 00:21:20.533 "num_base_bdevs": 3, 00:21:20.533 "num_base_bdevs_discovered": 3, 00:21:20.533 "num_base_bdevs_operational": 3, 00:21:20.533 "base_bdevs_list": [ 00:21:20.533 { 00:21:20.533 "name": "BaseBdev1", 00:21:20.533 "uuid": "36a49a35-e703-5fdd-8abd-7bcaa50d36aa", 00:21:20.533 "is_configured": true, 00:21:20.533 "data_offset": 2048, 00:21:20.533 "data_size": 63488 00:21:20.533 }, 00:21:20.533 { 00:21:20.533 "name": "BaseBdev2", 00:21:20.533 "uuid": "034ede78-8b08-5ae1-b3df-23a98e9bfb01", 00:21:20.533 "is_configured": true, 00:21:20.533 "data_offset": 2048, 00:21:20.533 "data_size": 63488 00:21:20.533 }, 00:21:20.533 { 00:21:20.533 "name": "BaseBdev3", 00:21:20.533 "uuid": "8687d7ab-538a-5a3a-b80c-4c849d1716e2", 00:21:20.533 "is_configured": true, 00:21:20.533 "data_offset": 2048, 00:21:20.534 "data_size": 63488 00:21:20.534 } 00:21:20.534 ] 00:21:20.534 }' 00:21:20.534 21:34:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.534 21:34:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:21.103 [2024-07-15 21:34:54.411751] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.103 [2024-07-15 21:34:54.411869] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.103 [2024-07-15 21:34:54.414284] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.103 [2024-07-15 21:34:54.414372] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.103 [2024-07-15 21:34:54.414440] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free 
all in destruct 00:21:21.103 [2024-07-15 21:34:54.414473] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:21.103 0 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 131578 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 131578 ']' 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 131578 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131578 00:21:21.103 killing process with pid 131578 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131578' 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 131578 00:21:21.103 21:34:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 131578 00:21:21.103 [2024-07-15 21:34:54.468195] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:21.362 [2024-07-15 21:34:54.681894] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.e1fQOhvMBp 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:22.740 ************************************ 00:21:22.740 END TEST raid_read_error_test 00:21:22.740 ************************************ 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:21:22.740 00:21:22.740 real 0m7.232s 00:21:22.740 user 0m10.721s 00:21:22.740 sys 0m0.738s 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.740 21:34:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.740 21:34:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:22.740 21:34:55 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:21:22.740 21:34:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:22.740 21:34:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.740 21:34:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:22.740 ************************************ 00:21:22.740 START TEST raid_write_error_test 00:21:22.740 ************************************ 00:21:22.740 21:34:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ImES1Zy8CN 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=131789 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 131789 /var/tmp/spdk-raid.sock 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 131789 ']' 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:22.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.740 21:34:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.740 [2024-07-15 21:34:56.090293] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:21:22.740 [2024-07-15 21:34:56.090567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131789 ] 00:21:22.999 [2024-07-15 21:34:56.264405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.257 [2024-07-15 21:34:56.459412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.515 [2024-07-15 21:34:56.647904] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.774 21:34:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.774 21:34:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:23.774 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:23.774 21:34:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:23.774 BaseBdev1_malloc 00:21:23.774 21:34:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:24.032 true 00:21:24.032 21:34:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:24.291 [2024-07-15 21:34:57.448313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:24.291 [2024-07-15 21:34:57.448469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.291 [2024-07-15 21:34:57.448523] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:24.291 [2024-07-15 21:34:57.448569] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.291 [2024-07-15 21:34:57.450681] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.291 [2024-07-15 21:34:57.450766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.291 BaseBdev1 00:21:24.291 21:34:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:24.291 21:34:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:24.549 BaseBdev2_malloc 00:21:24.549 21:34:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:24.549 true 00:21:24.549 21:34:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:24.807 [2024-07-15 21:34:58.065326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:24.807 [2024-07-15 21:34:58.065499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.807 [2024-07-15 21:34:58.065554] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:24.807 [2024-07-15 21:34:58.065614] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.807 [2024-07-15 21:34:58.067424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.807 [2024-07-15 21:34:58.067505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:24.807 BaseBdev2 00:21:24.807 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:24.807 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:25.071 BaseBdev3_malloc 00:21:25.071 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:25.338 true 00:21:25.338 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:25.338 [2024-07-15 21:34:58.652718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:25.338 [2024-07-15 21:34:58.652886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.338 [2024-07-15 21:34:58.652945] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:25.338 [2024-07-15 21:34:58.653001] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.338 [2024-07-15 21:34:58.655235] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.338 [2024-07-15 21:34:58.655343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:25.338 BaseBdev3 00:21:25.338 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:25.597 [2024-07-15 21:34:58.836425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.597 [2024-07-15 21:34:58.838084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.597 [2024-07-15 21:34:58.838202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:25.597 [2024-07-15 21:34:58.838443] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:25.597 [2024-07-15 21:34:58.838489] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:25.597 [2024-07-15 
21:34:58.838651] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:25.597 [2024-07-15 21:34:58.839004] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:25.597 [2024-07-15 21:34:58.839050] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:25.597 [2024-07-15 21:34:58.839232] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.597 21:34:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.856 21:34:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.856 "name": "raid_bdev1", 00:21:25.856 "uuid": "d183efc4-d1ff-49d1-afb1-a66b4a800e40", 00:21:25.856 "strip_size_kb": 64, 00:21:25.856 "state": "online", 00:21:25.856 "raid_level": "concat", 00:21:25.856 "superblock": true, 00:21:25.856 "num_base_bdevs": 3, 00:21:25.856 "num_base_bdevs_discovered": 3, 00:21:25.856 "num_base_bdevs_operational": 3, 00:21:25.856 "base_bdevs_list": [ 00:21:25.856 { 00:21:25.856 "name": "BaseBdev1", 00:21:25.856 "uuid": "269a80aa-07c8-5ff1-b0ed-5edaafec3eb2", 00:21:25.856 "is_configured": true, 00:21:25.856 "data_offset": 2048, 00:21:25.856 "data_size": 63488 00:21:25.856 }, 00:21:25.856 { 00:21:25.856 "name": "BaseBdev2", 00:21:25.856 "uuid": "4f87e663-8c06-535b-9b2f-8879fa115e0b", 00:21:25.856 "is_configured": true, 00:21:25.856 "data_offset": 2048, 00:21:25.856 "data_size": 63488 00:21:25.856 }, 00:21:25.856 { 00:21:25.856 "name": "BaseBdev3", 00:21:25.856 "uuid": "b37acb4b-8175-5e4d-8a72-631c1f086e3e", 00:21:25.856 "is_configured": true, 00:21:25.856 "data_offset": 2048, 00:21:25.856 "data_size": 63488 00:21:25.856 } 00:21:25.856 ] 00:21:25.856 }' 00:21:25.856 21:34:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.856 21:34:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.426 21:34:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:26.426 21:34:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:26.426 [2024-07-15 21:34:59.704488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:27.364 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:27.623 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:27.623 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:21:27.623 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.624 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.882 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.882 "name": "raid_bdev1", 00:21:27.882 "uuid": "d183efc4-d1ff-49d1-afb1-a66b4a800e40", 00:21:27.882 "strip_size_kb": 64, 00:21:27.882 "state": "online", 00:21:27.882 "raid_level": "concat", 00:21:27.882 "superblock": true, 00:21:27.882 "num_base_bdevs": 3, 00:21:27.882 "num_base_bdevs_discovered": 3, 00:21:27.882 "num_base_bdevs_operational": 3, 00:21:27.882 "base_bdevs_list": [ 00:21:27.882 { 00:21:27.882 "name": "BaseBdev1", 00:21:27.882 "uuid": "269a80aa-07c8-5ff1-b0ed-5edaafec3eb2", 00:21:27.882 "is_configured": true, 00:21:27.882 "data_offset": 2048, 00:21:27.882 "data_size": 63488 00:21:27.882 }, 00:21:27.882 { 00:21:27.882 "name": "BaseBdev2", 00:21:27.882 "uuid": "4f87e663-8c06-535b-9b2f-8879fa115e0b", 00:21:27.882 "is_configured": true, 00:21:27.882 "data_offset": 2048, 00:21:27.882 "data_size": 63488 00:21:27.882 }, 00:21:27.882 { 00:21:27.882 "name": "BaseBdev3", 00:21:27.882 "uuid": "b37acb4b-8175-5e4d-8a72-631c1f086e3e", 00:21:27.882 "is_configured": true, 00:21:27.882 "data_offset": 2048, 00:21:27.882 "data_size": 63488 00:21:27.882 } 00:21:27.882 ] 00:21:27.882 }' 00:21:27.882 21:35:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.882 21:35:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.450 21:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:28.709 [2024-07-15 21:35:01.827043] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:28.709 [2024-07-15 21:35:01.827163] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.709 [2024-07-15 21:35:01.829692] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.709 [2024-07-15 21:35:01.829775] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.709 [2024-07-15 21:35:01.829839] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.709 [2024-07-15 21:35:01.829866] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:28.709 0 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 131789 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 131789 ']' 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 131789 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131789 00:21:28.709 killing process with pid 131789 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131789' 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 131789 00:21:28.709 21:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 131789 00:21:28.709 [2024-07-15 21:35:01.857851] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:28.709 [2024-07-15 21:35:02.068901] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ImES1Zy8CN 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:30.128 ************************************ 00:21:30.128 END TEST raid_write_error_test 00:21:30.128 ************************************ 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:21:30.128 00:21:30.128 real 0m7.328s 
00:21:30.128 user 0m10.740s 00:21:30.128 sys 0m0.839s 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:30.128 21:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.128 21:35:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:30.128 21:35:03 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:30.128 21:35:03 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:21:30.128 21:35:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:30.128 21:35:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.128 21:35:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:30.128 ************************************ 00:21:30.128 START TEST raid_state_function_test 00:21:30.128 ************************************ 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:30.128 21:35:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=132019 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132019' 00:21:30.128 Process raid pid: 132019 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 132019 /var/tmp/spdk-raid.sock 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 132019 ']' 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:30.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.128 21:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.128 [2024-07-15 21:35:03.470416] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
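Note: the raid_state_function_test case starting here drives a bare bdev_svc app entirely over the RPC socket. Below is a minimal sketch of that flow, reusing only the socket path, RPC names and arguments that appear in this log; the real sequencing lives in test/bdev/bdev_raid.sh and interleaves extra create/delete cycles, so treat this as illustrative rather than the exact script.

  # Illustrative only; paths, RPC names and arguments copied from the log output around this point.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Creating the raid1 volume before any base bdev exists leaves it in the "configuring" state.
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # As each named base bdev appears it is claimed by the raid; once all three exist the raid goes online.
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  $rpc bdev_malloc_create 32 512 -b BaseBdev3

  # State and discovered base-bdev counts are read back for verification.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

  # Tear-down between sub-cases.
  $rpc bdev_raid_delete Existed_Raid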
00:21:30.128 [2024-07-15 21:35:03.470620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.434 [2024-07-15 21:35:03.631030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.691 [2024-07-15 21:35:03.829822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.691 [2024-07-15 21:35:04.026517] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.950 21:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.950 21:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:21:30.950 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:31.210 [2024-07-15 21:35:04.440139] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:31.210 [2024-07-15 21:35:04.440305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:31.210 [2024-07-15 21:35:04.440340] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:31.210 [2024-07-15 21:35:04.440380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:31.210 [2024-07-15 21:35:04.440400] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:31.210 [2024-07-15 21:35:04.440425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.210 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.470 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.470 "name": "Existed_Raid", 00:21:31.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.470 
"strip_size_kb": 0, 00:21:31.470 "state": "configuring", 00:21:31.470 "raid_level": "raid1", 00:21:31.470 "superblock": false, 00:21:31.470 "num_base_bdevs": 3, 00:21:31.470 "num_base_bdevs_discovered": 0, 00:21:31.470 "num_base_bdevs_operational": 3, 00:21:31.470 "base_bdevs_list": [ 00:21:31.470 { 00:21:31.470 "name": "BaseBdev1", 00:21:31.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.470 "is_configured": false, 00:21:31.470 "data_offset": 0, 00:21:31.470 "data_size": 0 00:21:31.470 }, 00:21:31.470 { 00:21:31.470 "name": "BaseBdev2", 00:21:31.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.470 "is_configured": false, 00:21:31.470 "data_offset": 0, 00:21:31.470 "data_size": 0 00:21:31.470 }, 00:21:31.470 { 00:21:31.470 "name": "BaseBdev3", 00:21:31.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.470 "is_configured": false, 00:21:31.470 "data_offset": 0, 00:21:31.470 "data_size": 0 00:21:31.470 } 00:21:31.470 ] 00:21:31.470 }' 00:21:31.470 21:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.470 21:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.040 21:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:32.300 [2024-07-15 21:35:05.466264] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:32.300 [2024-07-15 21:35:05.466380] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:32.300 21:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:32.300 [2024-07-15 21:35:05.637992] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:32.300 [2024-07-15 21:35:05.638146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:32.300 [2024-07-15 21:35:05.638180] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:32.300 [2024-07-15 21:35:05.638209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:32.300 [2024-07-15 21:35:05.638228] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:32.300 [2024-07-15 21:35:05.638281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:32.300 21:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:32.560 [2024-07-15 21:35:05.848268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.560 BaseBdev1 00:21:32.560 21:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:32.560 21:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:32.560 21:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:32.561 21:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:32.561 21:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# [[ -z '' ]] 00:21:32.561 21:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:32.561 21:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:32.820 21:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:33.079 [ 00:21:33.079 { 00:21:33.079 "name": "BaseBdev1", 00:21:33.079 "aliases": [ 00:21:33.079 "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7" 00:21:33.079 ], 00:21:33.079 "product_name": "Malloc disk", 00:21:33.079 "block_size": 512, 00:21:33.079 "num_blocks": 65536, 00:21:33.079 "uuid": "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7", 00:21:33.079 "assigned_rate_limits": { 00:21:33.079 "rw_ios_per_sec": 0, 00:21:33.079 "rw_mbytes_per_sec": 0, 00:21:33.079 "r_mbytes_per_sec": 0, 00:21:33.079 "w_mbytes_per_sec": 0 00:21:33.079 }, 00:21:33.079 "claimed": true, 00:21:33.079 "claim_type": "exclusive_write", 00:21:33.079 "zoned": false, 00:21:33.079 "supported_io_types": { 00:21:33.079 "read": true, 00:21:33.079 "write": true, 00:21:33.079 "unmap": true, 00:21:33.079 "flush": true, 00:21:33.079 "reset": true, 00:21:33.079 "nvme_admin": false, 00:21:33.079 "nvme_io": false, 00:21:33.079 "nvme_io_md": false, 00:21:33.079 "write_zeroes": true, 00:21:33.079 "zcopy": true, 00:21:33.079 "get_zone_info": false, 00:21:33.079 "zone_management": false, 00:21:33.079 "zone_append": false, 00:21:33.079 "compare": false, 00:21:33.079 "compare_and_write": false, 00:21:33.079 "abort": true, 00:21:33.079 "seek_hole": false, 00:21:33.079 "seek_data": false, 00:21:33.079 "copy": true, 00:21:33.079 "nvme_iov_md": false 00:21:33.079 }, 00:21:33.079 "memory_domains": [ 00:21:33.079 { 00:21:33.079 "dma_device_id": "system", 00:21:33.079 "dma_device_type": 1 00:21:33.079 }, 00:21:33.079 { 00:21:33.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.079 "dma_device_type": 2 00:21:33.079 } 00:21:33.079 ], 00:21:33.079 "driver_specific": {} 00:21:33.079 } 00:21:33.079 ] 00:21:33.079 21:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:33.079 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:33.079 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:33.079 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.080 "name": "Existed_Raid", 00:21:33.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.080 "strip_size_kb": 0, 00:21:33.080 "state": "configuring", 00:21:33.080 "raid_level": "raid1", 00:21:33.080 "superblock": false, 00:21:33.080 "num_base_bdevs": 3, 00:21:33.080 "num_base_bdevs_discovered": 1, 00:21:33.080 "num_base_bdevs_operational": 3, 00:21:33.080 "base_bdevs_list": [ 00:21:33.080 { 00:21:33.080 "name": "BaseBdev1", 00:21:33.080 "uuid": "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7", 00:21:33.080 "is_configured": true, 00:21:33.080 "data_offset": 0, 00:21:33.080 "data_size": 65536 00:21:33.080 }, 00:21:33.080 { 00:21:33.080 "name": "BaseBdev2", 00:21:33.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.080 "is_configured": false, 00:21:33.080 "data_offset": 0, 00:21:33.080 "data_size": 0 00:21:33.080 }, 00:21:33.080 { 00:21:33.080 "name": "BaseBdev3", 00:21:33.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.080 "is_configured": false, 00:21:33.080 "data_offset": 0, 00:21:33.080 "data_size": 0 00:21:33.080 } 00:21:33.080 ] 00:21:33.080 }' 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.080 21:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.646 21:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:33.904 [2024-07-15 21:35:07.162080] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:33.904 [2024-07-15 21:35:07.162194] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:21:33.904 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:34.161 [2024-07-15 21:35:07.349822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.161 [2024-07-15 21:35:07.351511] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.161 [2024-07-15 21:35:07.351640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.161 [2024-07-15 21:35:07.351674] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:34.161 [2024-07-15 21:35:07.351724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.161 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.419 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.419 "name": "Existed_Raid", 00:21:34.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.419 "strip_size_kb": 0, 00:21:34.419 "state": "configuring", 00:21:34.419 "raid_level": "raid1", 00:21:34.419 "superblock": false, 00:21:34.419 "num_base_bdevs": 3, 00:21:34.419 "num_base_bdevs_discovered": 1, 00:21:34.419 "num_base_bdevs_operational": 3, 00:21:34.419 "base_bdevs_list": [ 00:21:34.419 { 00:21:34.419 "name": "BaseBdev1", 00:21:34.419 "uuid": "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7", 00:21:34.419 "is_configured": true, 00:21:34.419 "data_offset": 0, 00:21:34.419 "data_size": 65536 00:21:34.419 }, 00:21:34.419 { 00:21:34.419 "name": "BaseBdev2", 00:21:34.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.419 "is_configured": false, 00:21:34.419 "data_offset": 0, 00:21:34.419 "data_size": 0 00:21:34.419 }, 00:21:34.419 { 00:21:34.419 "name": "BaseBdev3", 00:21:34.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.419 "is_configured": false, 00:21:34.419 "data_offset": 0, 00:21:34.419 "data_size": 0 00:21:34.419 } 00:21:34.419 ] 00:21:34.419 }' 00:21:34.419 21:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.419 21:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.990 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:35.248 [2024-07-15 21:35:08.430842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.248 BaseBdev2 00:21:35.248 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:35.248 21:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:35.248 21:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:35.248 21:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:35.248 21:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:35.248 21:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:35.248 21:35:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:35.507 [ 00:21:35.507 { 00:21:35.507 "name": "BaseBdev2", 00:21:35.507 "aliases": [ 00:21:35.507 "f9c84c51-889a-4016-bfcf-664f47c8e905" 00:21:35.507 ], 00:21:35.507 "product_name": "Malloc disk", 00:21:35.507 "block_size": 512, 00:21:35.507 "num_blocks": 65536, 00:21:35.507 "uuid": "f9c84c51-889a-4016-bfcf-664f47c8e905", 00:21:35.507 "assigned_rate_limits": { 00:21:35.507 "rw_ios_per_sec": 0, 00:21:35.507 "rw_mbytes_per_sec": 0, 00:21:35.507 "r_mbytes_per_sec": 0, 00:21:35.507 "w_mbytes_per_sec": 0 00:21:35.507 }, 00:21:35.507 "claimed": true, 00:21:35.507 "claim_type": "exclusive_write", 00:21:35.507 "zoned": false, 00:21:35.507 "supported_io_types": { 00:21:35.507 "read": true, 00:21:35.507 "write": true, 00:21:35.507 "unmap": true, 00:21:35.507 "flush": true, 00:21:35.507 "reset": true, 00:21:35.507 "nvme_admin": false, 00:21:35.507 "nvme_io": false, 00:21:35.507 "nvme_io_md": false, 00:21:35.507 "write_zeroes": true, 00:21:35.507 "zcopy": true, 00:21:35.507 "get_zone_info": false, 00:21:35.507 "zone_management": false, 00:21:35.507 "zone_append": false, 00:21:35.507 "compare": false, 00:21:35.507 "compare_and_write": false, 00:21:35.507 "abort": true, 00:21:35.507 "seek_hole": false, 00:21:35.507 "seek_data": false, 00:21:35.507 "copy": true, 00:21:35.507 "nvme_iov_md": false 00:21:35.507 }, 00:21:35.507 "memory_domains": [ 00:21:35.507 { 00:21:35.507 "dma_device_id": "system", 00:21:35.507 "dma_device_type": 1 00:21:35.507 }, 00:21:35.507 { 00:21:35.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.507 "dma_device_type": 2 00:21:35.507 } 00:21:35.507 ], 00:21:35.507 "driver_specific": {} 00:21:35.507 } 00:21:35.507 ] 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.507 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.764 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.764 "name": "Existed_Raid", 00:21:35.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.764 "strip_size_kb": 0, 00:21:35.764 "state": "configuring", 00:21:35.764 "raid_level": "raid1", 00:21:35.764 "superblock": false, 00:21:35.764 "num_base_bdevs": 3, 00:21:35.764 "num_base_bdevs_discovered": 2, 00:21:35.764 "num_base_bdevs_operational": 3, 00:21:35.764 "base_bdevs_list": [ 00:21:35.764 { 00:21:35.764 "name": "BaseBdev1", 00:21:35.764 "uuid": "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7", 00:21:35.764 "is_configured": true, 00:21:35.764 "data_offset": 0, 00:21:35.764 "data_size": 65536 00:21:35.764 }, 00:21:35.764 { 00:21:35.764 "name": "BaseBdev2", 00:21:35.764 "uuid": "f9c84c51-889a-4016-bfcf-664f47c8e905", 00:21:35.764 "is_configured": true, 00:21:35.764 "data_offset": 0, 00:21:35.764 "data_size": 65536 00:21:35.764 }, 00:21:35.764 { 00:21:35.764 "name": "BaseBdev3", 00:21:35.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.764 "is_configured": false, 00:21:35.764 "data_offset": 0, 00:21:35.764 "data_size": 0 00:21:35.764 } 00:21:35.764 ] 00:21:35.764 }' 00:21:35.764 21:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.764 21:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.331 21:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:36.591 [2024-07-15 21:35:09.808140] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:36.591 [2024-07-15 21:35:09.808299] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:21:36.591 [2024-07-15 21:35:09.808325] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:36.591 [2024-07-15 21:35:09.808488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:36.591 [2024-07-15 21:35:09.808855] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:21:36.591 [2024-07-15 21:35:09.808902] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:21:36.591 [2024-07-15 21:35:09.809188] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.591 BaseBdev3 00:21:36.591 21:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:36.591 21:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:36.591 21:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:36.591 21:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:36.591 21:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:36.591 21:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:36.591 21:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:36.851 [ 00:21:36.851 { 00:21:36.851 "name": "BaseBdev3", 00:21:36.851 "aliases": [ 00:21:36.851 "b7961b31-9325-4b0d-ac88-c2210ed74660" 00:21:36.851 ], 00:21:36.851 "product_name": "Malloc disk", 00:21:36.851 "block_size": 512, 00:21:36.851 "num_blocks": 65536, 00:21:36.851 "uuid": "b7961b31-9325-4b0d-ac88-c2210ed74660", 00:21:36.851 "assigned_rate_limits": { 00:21:36.851 "rw_ios_per_sec": 0, 00:21:36.851 "rw_mbytes_per_sec": 0, 00:21:36.851 "r_mbytes_per_sec": 0, 00:21:36.851 "w_mbytes_per_sec": 0 00:21:36.851 }, 00:21:36.851 "claimed": true, 00:21:36.851 "claim_type": "exclusive_write", 00:21:36.851 "zoned": false, 00:21:36.851 "supported_io_types": { 00:21:36.851 "read": true, 00:21:36.851 "write": true, 00:21:36.851 "unmap": true, 00:21:36.851 "flush": true, 00:21:36.851 "reset": true, 00:21:36.851 "nvme_admin": false, 00:21:36.851 "nvme_io": false, 00:21:36.851 "nvme_io_md": false, 00:21:36.851 "write_zeroes": true, 00:21:36.851 "zcopy": true, 00:21:36.851 "get_zone_info": false, 00:21:36.851 "zone_management": false, 00:21:36.851 "zone_append": false, 00:21:36.851 "compare": false, 00:21:36.851 "compare_and_write": false, 00:21:36.851 "abort": true, 00:21:36.851 "seek_hole": false, 00:21:36.851 "seek_data": false, 00:21:36.851 "copy": true, 00:21:36.851 "nvme_iov_md": false 00:21:36.851 }, 00:21:36.851 "memory_domains": [ 00:21:36.851 { 00:21:36.851 "dma_device_id": "system", 00:21:36.851 "dma_device_type": 1 00:21:36.851 }, 00:21:36.851 { 00:21:36.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.851 "dma_device_type": 2 00:21:36.851 } 00:21:36.851 ], 00:21:36.851 "driver_specific": {} 00:21:36.851 } 00:21:36.851 ] 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.851 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.852 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.852 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.852 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.111 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.111 "name": "Existed_Raid", 00:21:37.111 "uuid": "81b1ce60-1647-4eff-aa34-d3d4ac46db06", 00:21:37.111 "strip_size_kb": 0, 00:21:37.111 "state": "online", 00:21:37.111 "raid_level": "raid1", 00:21:37.111 "superblock": false, 00:21:37.111 "num_base_bdevs": 3, 00:21:37.111 "num_base_bdevs_discovered": 3, 00:21:37.111 "num_base_bdevs_operational": 3, 00:21:37.111 "base_bdevs_list": [ 00:21:37.111 { 00:21:37.111 "name": "BaseBdev1", 00:21:37.111 "uuid": "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7", 00:21:37.111 "is_configured": true, 00:21:37.111 "data_offset": 0, 00:21:37.111 "data_size": 65536 00:21:37.111 }, 00:21:37.111 { 00:21:37.111 "name": "BaseBdev2", 00:21:37.111 "uuid": "f9c84c51-889a-4016-bfcf-664f47c8e905", 00:21:37.111 "is_configured": true, 00:21:37.111 "data_offset": 0, 00:21:37.111 "data_size": 65536 00:21:37.111 }, 00:21:37.111 { 00:21:37.111 "name": "BaseBdev3", 00:21:37.111 "uuid": "b7961b31-9325-4b0d-ac88-c2210ed74660", 00:21:37.111 "is_configured": true, 00:21:37.111 "data_offset": 0, 00:21:37.111 "data_size": 65536 00:21:37.111 } 00:21:37.111 ] 00:21:37.111 }' 00:21:37.111 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.111 21:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.681 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:37.681 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:37.681 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:37.681 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:37.681 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:37.681 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:37.681 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:37.681 21:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:37.940 [2024-07-15 21:35:11.134144] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.940 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:37.940 "name": "Existed_Raid", 00:21:37.940 "aliases": [ 00:21:37.940 "81b1ce60-1647-4eff-aa34-d3d4ac46db06" 00:21:37.940 ], 00:21:37.940 "product_name": "Raid Volume", 00:21:37.940 "block_size": 512, 00:21:37.940 "num_blocks": 65536, 00:21:37.940 "uuid": "81b1ce60-1647-4eff-aa34-d3d4ac46db06", 00:21:37.940 "assigned_rate_limits": { 00:21:37.940 "rw_ios_per_sec": 0, 00:21:37.940 "rw_mbytes_per_sec": 0, 00:21:37.940 "r_mbytes_per_sec": 0, 00:21:37.940 "w_mbytes_per_sec": 0 00:21:37.940 }, 00:21:37.940 "claimed": false, 00:21:37.940 "zoned": false, 00:21:37.940 "supported_io_types": { 00:21:37.940 "read": true, 00:21:37.940 "write": true, 00:21:37.940 "unmap": false, 00:21:37.940 "flush": false, 00:21:37.940 "reset": true, 00:21:37.940 "nvme_admin": false, 00:21:37.941 
"nvme_io": false, 00:21:37.941 "nvme_io_md": false, 00:21:37.941 "write_zeroes": true, 00:21:37.941 "zcopy": false, 00:21:37.941 "get_zone_info": false, 00:21:37.941 "zone_management": false, 00:21:37.941 "zone_append": false, 00:21:37.941 "compare": false, 00:21:37.941 "compare_and_write": false, 00:21:37.941 "abort": false, 00:21:37.941 "seek_hole": false, 00:21:37.941 "seek_data": false, 00:21:37.941 "copy": false, 00:21:37.941 "nvme_iov_md": false 00:21:37.941 }, 00:21:37.941 "memory_domains": [ 00:21:37.941 { 00:21:37.941 "dma_device_id": "system", 00:21:37.941 "dma_device_type": 1 00:21:37.941 }, 00:21:37.941 { 00:21:37.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.941 "dma_device_type": 2 00:21:37.941 }, 00:21:37.941 { 00:21:37.941 "dma_device_id": "system", 00:21:37.941 "dma_device_type": 1 00:21:37.941 }, 00:21:37.941 { 00:21:37.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.941 "dma_device_type": 2 00:21:37.941 }, 00:21:37.941 { 00:21:37.941 "dma_device_id": "system", 00:21:37.941 "dma_device_type": 1 00:21:37.941 }, 00:21:37.941 { 00:21:37.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.941 "dma_device_type": 2 00:21:37.941 } 00:21:37.941 ], 00:21:37.941 "driver_specific": { 00:21:37.941 "raid": { 00:21:37.941 "uuid": "81b1ce60-1647-4eff-aa34-d3d4ac46db06", 00:21:37.941 "strip_size_kb": 0, 00:21:37.941 "state": "online", 00:21:37.941 "raid_level": "raid1", 00:21:37.941 "superblock": false, 00:21:37.941 "num_base_bdevs": 3, 00:21:37.941 "num_base_bdevs_discovered": 3, 00:21:37.941 "num_base_bdevs_operational": 3, 00:21:37.941 "base_bdevs_list": [ 00:21:37.941 { 00:21:37.941 "name": "BaseBdev1", 00:21:37.941 "uuid": "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7", 00:21:37.941 "is_configured": true, 00:21:37.941 "data_offset": 0, 00:21:37.941 "data_size": 65536 00:21:37.941 }, 00:21:37.941 { 00:21:37.941 "name": "BaseBdev2", 00:21:37.941 "uuid": "f9c84c51-889a-4016-bfcf-664f47c8e905", 00:21:37.941 "is_configured": true, 00:21:37.941 "data_offset": 0, 00:21:37.941 "data_size": 65536 00:21:37.941 }, 00:21:37.941 { 00:21:37.941 "name": "BaseBdev3", 00:21:37.941 "uuid": "b7961b31-9325-4b0d-ac88-c2210ed74660", 00:21:37.941 "is_configured": true, 00:21:37.941 "data_offset": 0, 00:21:37.941 "data_size": 65536 00:21:37.941 } 00:21:37.941 ] 00:21:37.941 } 00:21:37.941 } 00:21:37.941 }' 00:21:37.941 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:37.941 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:37.941 BaseBdev2 00:21:37.941 BaseBdev3' 00:21:37.941 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:37.941 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:37.941 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:38.200 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:38.200 "name": "BaseBdev1", 00:21:38.200 "aliases": [ 00:21:38.200 "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7" 00:21:38.200 ], 00:21:38.200 "product_name": "Malloc disk", 00:21:38.200 "block_size": 512, 00:21:38.200 "num_blocks": 65536, 00:21:38.200 "uuid": "2b1a0e51-a7a8-4f4f-b48a-ed10d9e4bea7", 00:21:38.200 "assigned_rate_limits": { 00:21:38.200 "rw_ios_per_sec": 0, 
00:21:38.200 "rw_mbytes_per_sec": 0, 00:21:38.200 "r_mbytes_per_sec": 0, 00:21:38.200 "w_mbytes_per_sec": 0 00:21:38.200 }, 00:21:38.200 "claimed": true, 00:21:38.200 "claim_type": "exclusive_write", 00:21:38.200 "zoned": false, 00:21:38.200 "supported_io_types": { 00:21:38.200 "read": true, 00:21:38.200 "write": true, 00:21:38.200 "unmap": true, 00:21:38.200 "flush": true, 00:21:38.200 "reset": true, 00:21:38.200 "nvme_admin": false, 00:21:38.200 "nvme_io": false, 00:21:38.200 "nvme_io_md": false, 00:21:38.200 "write_zeroes": true, 00:21:38.200 "zcopy": true, 00:21:38.200 "get_zone_info": false, 00:21:38.200 "zone_management": false, 00:21:38.200 "zone_append": false, 00:21:38.200 "compare": false, 00:21:38.200 "compare_and_write": false, 00:21:38.200 "abort": true, 00:21:38.200 "seek_hole": false, 00:21:38.200 "seek_data": false, 00:21:38.200 "copy": true, 00:21:38.200 "nvme_iov_md": false 00:21:38.200 }, 00:21:38.200 "memory_domains": [ 00:21:38.200 { 00:21:38.200 "dma_device_id": "system", 00:21:38.200 "dma_device_type": 1 00:21:38.200 }, 00:21:38.200 { 00:21:38.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.200 "dma_device_type": 2 00:21:38.200 } 00:21:38.200 ], 00:21:38.200 "driver_specific": {} 00:21:38.200 }' 00:21:38.200 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.200 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.200 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:38.200 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.459 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.459 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:38.459 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:38.459 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:38.459 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:38.459 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:38.459 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:38.718 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:38.718 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:38.718 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:38.718 21:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:38.718 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:38.718 "name": "BaseBdev2", 00:21:38.718 "aliases": [ 00:21:38.718 "f9c84c51-889a-4016-bfcf-664f47c8e905" 00:21:38.718 ], 00:21:38.718 "product_name": "Malloc disk", 00:21:38.718 "block_size": 512, 00:21:38.718 "num_blocks": 65536, 00:21:38.718 "uuid": "f9c84c51-889a-4016-bfcf-664f47c8e905", 00:21:38.718 "assigned_rate_limits": { 00:21:38.718 "rw_ios_per_sec": 0, 00:21:38.718 "rw_mbytes_per_sec": 0, 00:21:38.718 "r_mbytes_per_sec": 0, 00:21:38.718 "w_mbytes_per_sec": 0 00:21:38.718 }, 00:21:38.718 "claimed": true, 00:21:38.718 "claim_type": "exclusive_write", 
00:21:38.718 "zoned": false, 00:21:38.718 "supported_io_types": { 00:21:38.718 "read": true, 00:21:38.718 "write": true, 00:21:38.718 "unmap": true, 00:21:38.718 "flush": true, 00:21:38.718 "reset": true, 00:21:38.718 "nvme_admin": false, 00:21:38.718 "nvme_io": false, 00:21:38.718 "nvme_io_md": false, 00:21:38.718 "write_zeroes": true, 00:21:38.718 "zcopy": true, 00:21:38.718 "get_zone_info": false, 00:21:38.718 "zone_management": false, 00:21:38.718 "zone_append": false, 00:21:38.718 "compare": false, 00:21:38.718 "compare_and_write": false, 00:21:38.718 "abort": true, 00:21:38.718 "seek_hole": false, 00:21:38.718 "seek_data": false, 00:21:38.718 "copy": true, 00:21:38.718 "nvme_iov_md": false 00:21:38.718 }, 00:21:38.718 "memory_domains": [ 00:21:38.718 { 00:21:38.718 "dma_device_id": "system", 00:21:38.718 "dma_device_type": 1 00:21:38.718 }, 00:21:38.718 { 00:21:38.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.718 "dma_device_type": 2 00:21:38.718 } 00:21:38.718 ], 00:21:38.718 "driver_specific": {} 00:21:38.718 }' 00:21:38.718 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.977 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.977 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:38.977 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.977 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.977 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:38.977 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:38.977 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.236 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.236 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.236 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.236 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.236 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:39.236 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:39.236 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:39.495 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:39.495 "name": "BaseBdev3", 00:21:39.495 "aliases": [ 00:21:39.495 "b7961b31-9325-4b0d-ac88-c2210ed74660" 00:21:39.495 ], 00:21:39.495 "product_name": "Malloc disk", 00:21:39.495 "block_size": 512, 00:21:39.495 "num_blocks": 65536, 00:21:39.495 "uuid": "b7961b31-9325-4b0d-ac88-c2210ed74660", 00:21:39.495 "assigned_rate_limits": { 00:21:39.495 "rw_ios_per_sec": 0, 00:21:39.495 "rw_mbytes_per_sec": 0, 00:21:39.495 "r_mbytes_per_sec": 0, 00:21:39.495 "w_mbytes_per_sec": 0 00:21:39.495 }, 00:21:39.495 "claimed": true, 00:21:39.495 "claim_type": "exclusive_write", 00:21:39.495 "zoned": false, 00:21:39.495 "supported_io_types": { 00:21:39.495 "read": true, 00:21:39.496 "write": true, 00:21:39.496 "unmap": true, 00:21:39.496 "flush": true, 00:21:39.496 "reset": 
true, 00:21:39.496 "nvme_admin": false, 00:21:39.496 "nvme_io": false, 00:21:39.496 "nvme_io_md": false, 00:21:39.496 "write_zeroes": true, 00:21:39.496 "zcopy": true, 00:21:39.496 "get_zone_info": false, 00:21:39.496 "zone_management": false, 00:21:39.496 "zone_append": false, 00:21:39.496 "compare": false, 00:21:39.496 "compare_and_write": false, 00:21:39.496 "abort": true, 00:21:39.496 "seek_hole": false, 00:21:39.496 "seek_data": false, 00:21:39.496 "copy": true, 00:21:39.496 "nvme_iov_md": false 00:21:39.496 }, 00:21:39.496 "memory_domains": [ 00:21:39.496 { 00:21:39.496 "dma_device_id": "system", 00:21:39.496 "dma_device_type": 1 00:21:39.496 }, 00:21:39.496 { 00:21:39.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.496 "dma_device_type": 2 00:21:39.496 } 00:21:39.496 ], 00:21:39.496 "driver_specific": {} 00:21:39.496 }' 00:21:39.496 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.496 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.496 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:39.496 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.496 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.496 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:39.496 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.755 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.755 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.755 21:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.755 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.755 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.755 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:40.015 [2024-07-15 21:35:13.270343] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.274 "name": "Existed_Raid", 00:21:40.274 "uuid": "81b1ce60-1647-4eff-aa34-d3d4ac46db06", 00:21:40.274 "strip_size_kb": 0, 00:21:40.274 "state": "online", 00:21:40.274 "raid_level": "raid1", 00:21:40.274 "superblock": false, 00:21:40.274 "num_base_bdevs": 3, 00:21:40.274 "num_base_bdevs_discovered": 2, 00:21:40.274 "num_base_bdevs_operational": 2, 00:21:40.274 "base_bdevs_list": [ 00:21:40.274 { 00:21:40.274 "name": null, 00:21:40.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.274 "is_configured": false, 00:21:40.274 "data_offset": 0, 00:21:40.274 "data_size": 65536 00:21:40.274 }, 00:21:40.274 { 00:21:40.274 "name": "BaseBdev2", 00:21:40.274 "uuid": "f9c84c51-889a-4016-bfcf-664f47c8e905", 00:21:40.274 "is_configured": true, 00:21:40.274 "data_offset": 0, 00:21:40.274 "data_size": 65536 00:21:40.274 }, 00:21:40.274 { 00:21:40.274 "name": "BaseBdev3", 00:21:40.274 "uuid": "b7961b31-9325-4b0d-ac88-c2210ed74660", 00:21:40.274 "is_configured": true, 00:21:40.274 "data_offset": 0, 00:21:40.274 "data_size": 65536 00:21:40.274 } 00:21:40.274 ] 00:21:40.274 }' 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.274 21:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.841 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:40.841 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:40.841 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.841 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:41.099 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:41.099 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.099 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:41.359 [2024-07-15 21:35:14.579067] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:41.359 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:41.359 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:41.359 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.359 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:41.617 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:41.617 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.617 21:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:41.877 [2024-07-15 21:35:15.065370] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:41.877 [2024-07-15 21:35:15.065549] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:41.877 [2024-07-15 21:35:15.157608] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.877 [2024-07-15 21:35:15.157694] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:41.877 [2024-07-15 21:35:15.157722] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:21:41.877 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:41.877 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:41.877 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.877 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:42.136 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:42.136 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:42.136 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:42.136 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:42.136 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:42.136 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:42.395 BaseBdev2 00:21:42.395 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:42.395 21:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:42.395 21:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:42.395 21:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:42.395 21:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:42.395 21:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:42.395 21:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:42.395 21:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:42.654 [ 00:21:42.654 { 00:21:42.654 "name": "BaseBdev2", 00:21:42.654 "aliases": [ 00:21:42.654 "8aea9315-d69f-4890-83a6-4bed2fc0d9d6" 00:21:42.654 ], 00:21:42.654 "product_name": "Malloc disk", 00:21:42.654 "block_size": 512, 00:21:42.654 "num_blocks": 65536, 00:21:42.654 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:42.654 "assigned_rate_limits": { 00:21:42.654 "rw_ios_per_sec": 0, 00:21:42.654 "rw_mbytes_per_sec": 0, 00:21:42.654 "r_mbytes_per_sec": 0, 00:21:42.654 "w_mbytes_per_sec": 0 00:21:42.654 }, 00:21:42.654 "claimed": false, 00:21:42.654 "zoned": false, 00:21:42.654 "supported_io_types": { 00:21:42.654 "read": true, 00:21:42.654 "write": true, 00:21:42.654 "unmap": true, 00:21:42.654 "flush": true, 00:21:42.654 "reset": true, 00:21:42.654 "nvme_admin": false, 00:21:42.654 "nvme_io": false, 00:21:42.654 "nvme_io_md": false, 00:21:42.654 "write_zeroes": true, 00:21:42.654 "zcopy": true, 00:21:42.654 "get_zone_info": false, 00:21:42.654 "zone_management": false, 00:21:42.654 "zone_append": false, 00:21:42.654 "compare": false, 00:21:42.654 "compare_and_write": false, 00:21:42.654 "abort": true, 00:21:42.654 "seek_hole": false, 00:21:42.654 "seek_data": false, 00:21:42.654 "copy": true, 00:21:42.654 "nvme_iov_md": false 00:21:42.654 }, 00:21:42.654 "memory_domains": [ 00:21:42.654 { 00:21:42.654 "dma_device_id": "system", 00:21:42.654 "dma_device_type": 1 00:21:42.654 }, 00:21:42.654 { 00:21:42.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.654 "dma_device_type": 2 00:21:42.654 } 00:21:42.654 ], 00:21:42.654 "driver_specific": {} 00:21:42.654 } 00:21:42.654 ] 00:21:42.654 21:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:42.654 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:42.654 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:42.654 21:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:42.913 BaseBdev3 00:21:42.913 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:42.913 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:42.913 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:42.913 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:42.913 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:42.913 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:42.913 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.173 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:43.173 [ 00:21:43.173 { 00:21:43.173 "name": "BaseBdev3", 00:21:43.173 "aliases": [ 00:21:43.173 "c4ded8e3-d55d-4353-9c17-2d127ce28580" 00:21:43.173 ], 00:21:43.173 "product_name": "Malloc disk", 00:21:43.173 "block_size": 512, 00:21:43.173 "num_blocks": 65536, 00:21:43.173 
"uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:43.173 "assigned_rate_limits": { 00:21:43.173 "rw_ios_per_sec": 0, 00:21:43.173 "rw_mbytes_per_sec": 0, 00:21:43.173 "r_mbytes_per_sec": 0, 00:21:43.173 "w_mbytes_per_sec": 0 00:21:43.173 }, 00:21:43.173 "claimed": false, 00:21:43.173 "zoned": false, 00:21:43.173 "supported_io_types": { 00:21:43.173 "read": true, 00:21:43.173 "write": true, 00:21:43.173 "unmap": true, 00:21:43.173 "flush": true, 00:21:43.173 "reset": true, 00:21:43.173 "nvme_admin": false, 00:21:43.173 "nvme_io": false, 00:21:43.173 "nvme_io_md": false, 00:21:43.173 "write_zeroes": true, 00:21:43.173 "zcopy": true, 00:21:43.173 "get_zone_info": false, 00:21:43.173 "zone_management": false, 00:21:43.173 "zone_append": false, 00:21:43.173 "compare": false, 00:21:43.173 "compare_and_write": false, 00:21:43.173 "abort": true, 00:21:43.173 "seek_hole": false, 00:21:43.173 "seek_data": false, 00:21:43.173 "copy": true, 00:21:43.173 "nvme_iov_md": false 00:21:43.173 }, 00:21:43.173 "memory_domains": [ 00:21:43.173 { 00:21:43.173 "dma_device_id": "system", 00:21:43.173 "dma_device_type": 1 00:21:43.173 }, 00:21:43.173 { 00:21:43.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.173 "dma_device_type": 2 00:21:43.173 } 00:21:43.173 ], 00:21:43.173 "driver_specific": {} 00:21:43.173 } 00:21:43.173 ] 00:21:43.173 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:43.173 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:43.173 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:43.173 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:43.432 [2024-07-15 21:35:16.702684] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:43.432 [2024-07-15 21:35:16.702817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:43.432 [2024-07-15 21:35:16.702877] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:43.432 [2024-07-15 21:35:16.704800] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.432 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.692 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.692 "name": "Existed_Raid", 00:21:43.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.692 "strip_size_kb": 0, 00:21:43.692 "state": "configuring", 00:21:43.692 "raid_level": "raid1", 00:21:43.692 "superblock": false, 00:21:43.692 "num_base_bdevs": 3, 00:21:43.692 "num_base_bdevs_discovered": 2, 00:21:43.692 "num_base_bdevs_operational": 3, 00:21:43.692 "base_bdevs_list": [ 00:21:43.692 { 00:21:43.692 "name": "BaseBdev1", 00:21:43.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.692 "is_configured": false, 00:21:43.692 "data_offset": 0, 00:21:43.692 "data_size": 0 00:21:43.692 }, 00:21:43.692 { 00:21:43.692 "name": "BaseBdev2", 00:21:43.692 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:43.692 "is_configured": true, 00:21:43.692 "data_offset": 0, 00:21:43.692 "data_size": 65536 00:21:43.692 }, 00:21:43.692 { 00:21:43.692 "name": "BaseBdev3", 00:21:43.692 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:43.692 "is_configured": true, 00:21:43.692 "data_offset": 0, 00:21:43.692 "data_size": 65536 00:21:43.692 } 00:21:43.692 ] 00:21:43.692 }' 00:21:43.692 21:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.692 21:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.260 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:44.519 [2024-07-15 21:35:17.728960] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.519 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.778 21:35:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.778 "name": "Existed_Raid", 00:21:44.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.778 "strip_size_kb": 0, 00:21:44.778 "state": "configuring", 00:21:44.778 "raid_level": "raid1", 00:21:44.778 "superblock": false, 00:21:44.778 "num_base_bdevs": 3, 00:21:44.778 "num_base_bdevs_discovered": 1, 00:21:44.778 "num_base_bdevs_operational": 3, 00:21:44.778 "base_bdevs_list": [ 00:21:44.778 { 00:21:44.778 "name": "BaseBdev1", 00:21:44.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.779 "is_configured": false, 00:21:44.779 "data_offset": 0, 00:21:44.779 "data_size": 0 00:21:44.779 }, 00:21:44.779 { 00:21:44.779 "name": null, 00:21:44.779 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:44.779 "is_configured": false, 00:21:44.779 "data_offset": 0, 00:21:44.779 "data_size": 65536 00:21:44.779 }, 00:21:44.779 { 00:21:44.779 "name": "BaseBdev3", 00:21:44.779 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:44.779 "is_configured": true, 00:21:44.779 "data_offset": 0, 00:21:44.779 "data_size": 65536 00:21:44.779 } 00:21:44.779 ] 00:21:44.779 }' 00:21:44.779 21:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.779 21:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.348 21:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.348 21:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:45.608 21:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:45.608 21:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:45.868 [2024-07-15 21:35:19.038615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.868 BaseBdev1 00:21:45.868 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:45.868 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:45.868 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:45.868 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:45.868 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:45.868 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:45.868 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:45.868 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:46.128 [ 00:21:46.128 { 00:21:46.128 "name": "BaseBdev1", 00:21:46.128 "aliases": [ 00:21:46.128 "6efa6bae-a0c7-4541-af13-2c72aa1f17b4" 00:21:46.128 ], 00:21:46.128 "product_name": "Malloc disk", 00:21:46.128 "block_size": 512, 00:21:46.128 "num_blocks": 65536, 00:21:46.128 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:46.128 "assigned_rate_limits": { 00:21:46.128 
"rw_ios_per_sec": 0, 00:21:46.128 "rw_mbytes_per_sec": 0, 00:21:46.128 "r_mbytes_per_sec": 0, 00:21:46.128 "w_mbytes_per_sec": 0 00:21:46.128 }, 00:21:46.128 "claimed": true, 00:21:46.128 "claim_type": "exclusive_write", 00:21:46.128 "zoned": false, 00:21:46.128 "supported_io_types": { 00:21:46.128 "read": true, 00:21:46.128 "write": true, 00:21:46.128 "unmap": true, 00:21:46.128 "flush": true, 00:21:46.128 "reset": true, 00:21:46.128 "nvme_admin": false, 00:21:46.128 "nvme_io": false, 00:21:46.128 "nvme_io_md": false, 00:21:46.128 "write_zeroes": true, 00:21:46.128 "zcopy": true, 00:21:46.128 "get_zone_info": false, 00:21:46.128 "zone_management": false, 00:21:46.128 "zone_append": false, 00:21:46.128 "compare": false, 00:21:46.128 "compare_and_write": false, 00:21:46.128 "abort": true, 00:21:46.128 "seek_hole": false, 00:21:46.128 "seek_data": false, 00:21:46.128 "copy": true, 00:21:46.128 "nvme_iov_md": false 00:21:46.128 }, 00:21:46.128 "memory_domains": [ 00:21:46.128 { 00:21:46.128 "dma_device_id": "system", 00:21:46.128 "dma_device_type": 1 00:21:46.128 }, 00:21:46.128 { 00:21:46.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.128 "dma_device_type": 2 00:21:46.128 } 00:21:46.128 ], 00:21:46.128 "driver_specific": {} 00:21:46.128 } 00:21:46.128 ] 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.128 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.389 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.389 "name": "Existed_Raid", 00:21:46.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.389 "strip_size_kb": 0, 00:21:46.389 "state": "configuring", 00:21:46.389 "raid_level": "raid1", 00:21:46.389 "superblock": false, 00:21:46.389 "num_base_bdevs": 3, 00:21:46.389 "num_base_bdevs_discovered": 2, 00:21:46.389 "num_base_bdevs_operational": 3, 00:21:46.389 "base_bdevs_list": [ 00:21:46.389 { 00:21:46.389 "name": "BaseBdev1", 00:21:46.389 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:46.389 "is_configured": true, 00:21:46.389 "data_offset": 0, 00:21:46.389 
"data_size": 65536 00:21:46.389 }, 00:21:46.389 { 00:21:46.389 "name": null, 00:21:46.389 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:46.389 "is_configured": false, 00:21:46.389 "data_offset": 0, 00:21:46.389 "data_size": 65536 00:21:46.389 }, 00:21:46.389 { 00:21:46.389 "name": "BaseBdev3", 00:21:46.389 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:46.389 "is_configured": true, 00:21:46.389 "data_offset": 0, 00:21:46.389 "data_size": 65536 00:21:46.389 } 00:21:46.389 ] 00:21:46.389 }' 00:21:46.389 21:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.389 21:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.958 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.958 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:47.217 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:47.217 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:47.217 [2024-07-15 21:35:20.576164] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.477 "name": "Existed_Raid", 00:21:47.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.477 "strip_size_kb": 0, 00:21:47.477 "state": "configuring", 00:21:47.477 "raid_level": "raid1", 00:21:47.477 "superblock": false, 00:21:47.477 "num_base_bdevs": 3, 00:21:47.477 "num_base_bdevs_discovered": 1, 00:21:47.477 "num_base_bdevs_operational": 3, 00:21:47.477 "base_bdevs_list": [ 00:21:47.477 { 00:21:47.477 "name": "BaseBdev1", 00:21:47.477 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:47.477 "is_configured": true, 
00:21:47.477 "data_offset": 0, 00:21:47.477 "data_size": 65536 00:21:47.477 }, 00:21:47.477 { 00:21:47.477 "name": null, 00:21:47.477 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:47.477 "is_configured": false, 00:21:47.477 "data_offset": 0, 00:21:47.477 "data_size": 65536 00:21:47.477 }, 00:21:47.477 { 00:21:47.477 "name": null, 00:21:47.477 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:47.477 "is_configured": false, 00:21:47.477 "data_offset": 0, 00:21:47.477 "data_size": 65536 00:21:47.477 } 00:21:47.477 ] 00:21:47.477 }' 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.477 21:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.415 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.415 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:48.415 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:48.416 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:48.675 [2024-07-15 21:35:21.810181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.675 21:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.675 21:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.675 "name": "Existed_Raid", 00:21:48.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.675 "strip_size_kb": 0, 00:21:48.675 "state": "configuring", 00:21:48.675 "raid_level": "raid1", 00:21:48.675 "superblock": false, 00:21:48.675 "num_base_bdevs": 3, 00:21:48.675 "num_base_bdevs_discovered": 2, 00:21:48.675 "num_base_bdevs_operational": 3, 00:21:48.675 "base_bdevs_list": [ 00:21:48.675 { 00:21:48.675 "name": "BaseBdev1", 00:21:48.675 "uuid": 
"6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:48.675 "is_configured": true, 00:21:48.675 "data_offset": 0, 00:21:48.675 "data_size": 65536 00:21:48.675 }, 00:21:48.675 { 00:21:48.675 "name": null, 00:21:48.675 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:48.675 "is_configured": false, 00:21:48.675 "data_offset": 0, 00:21:48.675 "data_size": 65536 00:21:48.675 }, 00:21:48.675 { 00:21:48.675 "name": "BaseBdev3", 00:21:48.675 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:48.675 "is_configured": true, 00:21:48.675 "data_offset": 0, 00:21:48.675 "data_size": 65536 00:21:48.675 } 00:21:48.675 ] 00:21:48.675 }' 00:21:48.675 21:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.675 21:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.609 21:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.609 21:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:49.609 21:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:49.609 21:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:49.609 [2024-07-15 21:35:22.968449] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.867 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.124 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.124 "name": "Existed_Raid", 00:21:50.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.124 "strip_size_kb": 0, 00:21:50.124 "state": "configuring", 00:21:50.124 "raid_level": "raid1", 00:21:50.124 "superblock": false, 00:21:50.124 "num_base_bdevs": 3, 00:21:50.124 "num_base_bdevs_discovered": 1, 00:21:50.124 "num_base_bdevs_operational": 3, 00:21:50.124 "base_bdevs_list": [ 00:21:50.124 { 00:21:50.124 
"name": null, 00:21:50.124 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:50.124 "is_configured": false, 00:21:50.124 "data_offset": 0, 00:21:50.124 "data_size": 65536 00:21:50.124 }, 00:21:50.124 { 00:21:50.124 "name": null, 00:21:50.124 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:50.124 "is_configured": false, 00:21:50.124 "data_offset": 0, 00:21:50.124 "data_size": 65536 00:21:50.124 }, 00:21:50.124 { 00:21:50.124 "name": "BaseBdev3", 00:21:50.124 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:50.124 "is_configured": true, 00:21:50.124 "data_offset": 0, 00:21:50.124 "data_size": 65536 00:21:50.124 } 00:21:50.124 ] 00:21:50.124 }' 00:21:50.124 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.124 21:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.690 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:50.690 21:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:50.948 [2024-07-15 21:35:24.262063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.948 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.207 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.207 "name": "Existed_Raid", 00:21:51.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.207 "strip_size_kb": 0, 00:21:51.207 "state": "configuring", 00:21:51.207 "raid_level": "raid1", 00:21:51.207 "superblock": false, 00:21:51.207 "num_base_bdevs": 3, 00:21:51.207 "num_base_bdevs_discovered": 2, 00:21:51.207 
"num_base_bdevs_operational": 3, 00:21:51.207 "base_bdevs_list": [ 00:21:51.207 { 00:21:51.207 "name": null, 00:21:51.207 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:51.207 "is_configured": false, 00:21:51.207 "data_offset": 0, 00:21:51.207 "data_size": 65536 00:21:51.207 }, 00:21:51.207 { 00:21:51.207 "name": "BaseBdev2", 00:21:51.207 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:51.207 "is_configured": true, 00:21:51.207 "data_offset": 0, 00:21:51.207 "data_size": 65536 00:21:51.207 }, 00:21:51.207 { 00:21:51.207 "name": "BaseBdev3", 00:21:51.207 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:51.207 "is_configured": true, 00:21:51.207 "data_offset": 0, 00:21:51.207 "data_size": 65536 00:21:51.207 } 00:21:51.207 ] 00:21:51.207 }' 00:21:51.207 21:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.207 21:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.774 21:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.774 21:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:52.032 21:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:52.032 21:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.032 21:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:52.291 21:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6efa6bae-a0c7-4541-af13-2c72aa1f17b4 00:21:52.291 [2024-07-15 21:35:25.629751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:52.291 [2024-07-15 21:35:25.629879] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:52.291 [2024-07-15 21:35:25.629902] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:52.291 [2024-07-15 21:35:25.630064] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:52.291 [2024-07-15 21:35:25.630400] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:52.291 [2024-07-15 21:35:25.630448] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:21:52.291 [2024-07-15 21:35:25.630689] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.291 NewBaseBdev 00:21:52.291 21:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:52.291 21:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:52.291 21:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:52.291 21:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:52.291 21:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:52.291 21:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:52.291 
21:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:52.550 21:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:52.810 [ 00:21:52.810 { 00:21:52.810 "name": "NewBaseBdev", 00:21:52.810 "aliases": [ 00:21:52.810 "6efa6bae-a0c7-4541-af13-2c72aa1f17b4" 00:21:52.810 ], 00:21:52.810 "product_name": "Malloc disk", 00:21:52.810 "block_size": 512, 00:21:52.810 "num_blocks": 65536, 00:21:52.810 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:52.810 "assigned_rate_limits": { 00:21:52.810 "rw_ios_per_sec": 0, 00:21:52.810 "rw_mbytes_per_sec": 0, 00:21:52.810 "r_mbytes_per_sec": 0, 00:21:52.810 "w_mbytes_per_sec": 0 00:21:52.810 }, 00:21:52.810 "claimed": true, 00:21:52.810 "claim_type": "exclusive_write", 00:21:52.810 "zoned": false, 00:21:52.810 "supported_io_types": { 00:21:52.810 "read": true, 00:21:52.810 "write": true, 00:21:52.810 "unmap": true, 00:21:52.810 "flush": true, 00:21:52.810 "reset": true, 00:21:52.810 "nvme_admin": false, 00:21:52.810 "nvme_io": false, 00:21:52.810 "nvme_io_md": false, 00:21:52.810 "write_zeroes": true, 00:21:52.810 "zcopy": true, 00:21:52.810 "get_zone_info": false, 00:21:52.810 "zone_management": false, 00:21:52.810 "zone_append": false, 00:21:52.810 "compare": false, 00:21:52.810 "compare_and_write": false, 00:21:52.810 "abort": true, 00:21:52.810 "seek_hole": false, 00:21:52.810 "seek_data": false, 00:21:52.810 "copy": true, 00:21:52.810 "nvme_iov_md": false 00:21:52.810 }, 00:21:52.810 "memory_domains": [ 00:21:52.810 { 00:21:52.810 "dma_device_id": "system", 00:21:52.810 "dma_device_type": 1 00:21:52.810 }, 00:21:52.810 { 00:21:52.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.810 "dma_device_type": 2 00:21:52.810 } 00:21:52.810 ], 00:21:52.810 "driver_specific": {} 00:21:52.810 } 00:21:52.810 ] 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.810 21:35:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.069 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.069 "name": "Existed_Raid", 00:21:53.069 "uuid": "e0bc585e-c4ec-4115-82ed-0ff08f0ad656", 00:21:53.069 "strip_size_kb": 0, 00:21:53.069 "state": "online", 00:21:53.069 "raid_level": "raid1", 00:21:53.069 "superblock": false, 00:21:53.069 "num_base_bdevs": 3, 00:21:53.069 "num_base_bdevs_discovered": 3, 00:21:53.069 "num_base_bdevs_operational": 3, 00:21:53.069 "base_bdevs_list": [ 00:21:53.069 { 00:21:53.069 "name": "NewBaseBdev", 00:21:53.069 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:53.069 "is_configured": true, 00:21:53.069 "data_offset": 0, 00:21:53.069 "data_size": 65536 00:21:53.069 }, 00:21:53.069 { 00:21:53.069 "name": "BaseBdev2", 00:21:53.069 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:53.069 "is_configured": true, 00:21:53.069 "data_offset": 0, 00:21:53.069 "data_size": 65536 00:21:53.069 }, 00:21:53.069 { 00:21:53.069 "name": "BaseBdev3", 00:21:53.069 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:53.069 "is_configured": true, 00:21:53.069 "data_offset": 0, 00:21:53.069 "data_size": 65536 00:21:53.069 } 00:21:53.069 ] 00:21:53.069 }' 00:21:53.069 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.069 21:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.638 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:53.638 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:53.638 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:53.638 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:53.638 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:53.638 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:53.638 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:53.638 21:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:53.897 [2024-07-15 21:35:27.033214] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:53.897 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:53.897 "name": "Existed_Raid", 00:21:53.897 "aliases": [ 00:21:53.897 "e0bc585e-c4ec-4115-82ed-0ff08f0ad656" 00:21:53.897 ], 00:21:53.897 "product_name": "Raid Volume", 00:21:53.897 "block_size": 512, 00:21:53.897 "num_blocks": 65536, 00:21:53.897 "uuid": "e0bc585e-c4ec-4115-82ed-0ff08f0ad656", 00:21:53.897 "assigned_rate_limits": { 00:21:53.897 "rw_ios_per_sec": 0, 00:21:53.897 "rw_mbytes_per_sec": 0, 00:21:53.897 "r_mbytes_per_sec": 0, 00:21:53.897 "w_mbytes_per_sec": 0 00:21:53.897 }, 00:21:53.897 "claimed": false, 00:21:53.897 "zoned": false, 00:21:53.897 "supported_io_types": { 00:21:53.897 "read": true, 00:21:53.897 "write": true, 00:21:53.897 "unmap": false, 00:21:53.897 "flush": false, 00:21:53.897 "reset": true, 00:21:53.897 "nvme_admin": false, 00:21:53.897 "nvme_io": false, 00:21:53.897 "nvme_io_md": false, 00:21:53.897 "write_zeroes": true, 00:21:53.897 
"zcopy": false, 00:21:53.897 "get_zone_info": false, 00:21:53.897 "zone_management": false, 00:21:53.897 "zone_append": false, 00:21:53.897 "compare": false, 00:21:53.897 "compare_and_write": false, 00:21:53.897 "abort": false, 00:21:53.897 "seek_hole": false, 00:21:53.897 "seek_data": false, 00:21:53.897 "copy": false, 00:21:53.897 "nvme_iov_md": false 00:21:53.897 }, 00:21:53.897 "memory_domains": [ 00:21:53.897 { 00:21:53.897 "dma_device_id": "system", 00:21:53.897 "dma_device_type": 1 00:21:53.897 }, 00:21:53.897 { 00:21:53.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.897 "dma_device_type": 2 00:21:53.897 }, 00:21:53.897 { 00:21:53.897 "dma_device_id": "system", 00:21:53.897 "dma_device_type": 1 00:21:53.897 }, 00:21:53.897 { 00:21:53.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.897 "dma_device_type": 2 00:21:53.897 }, 00:21:53.897 { 00:21:53.897 "dma_device_id": "system", 00:21:53.897 "dma_device_type": 1 00:21:53.897 }, 00:21:53.897 { 00:21:53.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.897 "dma_device_type": 2 00:21:53.897 } 00:21:53.897 ], 00:21:53.897 "driver_specific": { 00:21:53.897 "raid": { 00:21:53.897 "uuid": "e0bc585e-c4ec-4115-82ed-0ff08f0ad656", 00:21:53.897 "strip_size_kb": 0, 00:21:53.897 "state": "online", 00:21:53.897 "raid_level": "raid1", 00:21:53.897 "superblock": false, 00:21:53.897 "num_base_bdevs": 3, 00:21:53.897 "num_base_bdevs_discovered": 3, 00:21:53.897 "num_base_bdevs_operational": 3, 00:21:53.897 "base_bdevs_list": [ 00:21:53.897 { 00:21:53.897 "name": "NewBaseBdev", 00:21:53.897 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:53.897 "is_configured": true, 00:21:53.897 "data_offset": 0, 00:21:53.897 "data_size": 65536 00:21:53.897 }, 00:21:53.897 { 00:21:53.897 "name": "BaseBdev2", 00:21:53.897 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:53.897 "is_configured": true, 00:21:53.897 "data_offset": 0, 00:21:53.897 "data_size": 65536 00:21:53.897 }, 00:21:53.897 { 00:21:53.897 "name": "BaseBdev3", 00:21:53.897 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:53.897 "is_configured": true, 00:21:53.897 "data_offset": 0, 00:21:53.897 "data_size": 65536 00:21:53.897 } 00:21:53.897 ] 00:21:53.897 } 00:21:53.897 } 00:21:53.897 }' 00:21:53.897 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:53.897 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:53.897 BaseBdev2 00:21:53.897 BaseBdev3' 00:21:53.897 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:53.897 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:53.897 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.156 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.156 "name": "NewBaseBdev", 00:21:54.156 "aliases": [ 00:21:54.156 "6efa6bae-a0c7-4541-af13-2c72aa1f17b4" 00:21:54.156 ], 00:21:54.156 "product_name": "Malloc disk", 00:21:54.156 "block_size": 512, 00:21:54.156 "num_blocks": 65536, 00:21:54.156 "uuid": "6efa6bae-a0c7-4541-af13-2c72aa1f17b4", 00:21:54.156 "assigned_rate_limits": { 00:21:54.156 "rw_ios_per_sec": 0, 00:21:54.156 "rw_mbytes_per_sec": 0, 00:21:54.156 "r_mbytes_per_sec": 0, 00:21:54.156 
"w_mbytes_per_sec": 0 00:21:54.156 }, 00:21:54.156 "claimed": true, 00:21:54.156 "claim_type": "exclusive_write", 00:21:54.156 "zoned": false, 00:21:54.156 "supported_io_types": { 00:21:54.156 "read": true, 00:21:54.156 "write": true, 00:21:54.156 "unmap": true, 00:21:54.156 "flush": true, 00:21:54.156 "reset": true, 00:21:54.156 "nvme_admin": false, 00:21:54.156 "nvme_io": false, 00:21:54.156 "nvme_io_md": false, 00:21:54.156 "write_zeroes": true, 00:21:54.156 "zcopy": true, 00:21:54.156 "get_zone_info": false, 00:21:54.156 "zone_management": false, 00:21:54.156 "zone_append": false, 00:21:54.156 "compare": false, 00:21:54.156 "compare_and_write": false, 00:21:54.156 "abort": true, 00:21:54.156 "seek_hole": false, 00:21:54.156 "seek_data": false, 00:21:54.156 "copy": true, 00:21:54.156 "nvme_iov_md": false 00:21:54.156 }, 00:21:54.156 "memory_domains": [ 00:21:54.156 { 00:21:54.156 "dma_device_id": "system", 00:21:54.156 "dma_device_type": 1 00:21:54.156 }, 00:21:54.156 { 00:21:54.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.156 "dma_device_type": 2 00:21:54.156 } 00:21:54.156 ], 00:21:54.156 "driver_specific": {} 00:21:54.156 }' 00:21:54.156 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.156 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.156 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:54.156 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.156 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:54.415 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.674 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.674 "name": "BaseBdev2", 00:21:54.674 "aliases": [ 00:21:54.674 "8aea9315-d69f-4890-83a6-4bed2fc0d9d6" 00:21:54.674 ], 00:21:54.674 "product_name": "Malloc disk", 00:21:54.674 "block_size": 512, 00:21:54.674 "num_blocks": 65536, 00:21:54.674 "uuid": "8aea9315-d69f-4890-83a6-4bed2fc0d9d6", 00:21:54.674 "assigned_rate_limits": { 00:21:54.674 "rw_ios_per_sec": 0, 00:21:54.674 "rw_mbytes_per_sec": 0, 00:21:54.674 "r_mbytes_per_sec": 0, 00:21:54.674 "w_mbytes_per_sec": 0 00:21:54.674 }, 00:21:54.674 "claimed": true, 00:21:54.674 "claim_type": "exclusive_write", 00:21:54.674 "zoned": false, 00:21:54.674 "supported_io_types": { 00:21:54.674 "read": 
true, 00:21:54.674 "write": true, 00:21:54.674 "unmap": true, 00:21:54.674 "flush": true, 00:21:54.674 "reset": true, 00:21:54.674 "nvme_admin": false, 00:21:54.674 "nvme_io": false, 00:21:54.674 "nvme_io_md": false, 00:21:54.674 "write_zeroes": true, 00:21:54.674 "zcopy": true, 00:21:54.674 "get_zone_info": false, 00:21:54.674 "zone_management": false, 00:21:54.674 "zone_append": false, 00:21:54.674 "compare": false, 00:21:54.674 "compare_and_write": false, 00:21:54.674 "abort": true, 00:21:54.674 "seek_hole": false, 00:21:54.674 "seek_data": false, 00:21:54.674 "copy": true, 00:21:54.674 "nvme_iov_md": false 00:21:54.674 }, 00:21:54.674 "memory_domains": [ 00:21:54.674 { 00:21:54.674 "dma_device_id": "system", 00:21:54.674 "dma_device_type": 1 00:21:54.674 }, 00:21:54.674 { 00:21:54.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.674 "dma_device_type": 2 00:21:54.674 } 00:21:54.674 ], 00:21:54.674 "driver_specific": {} 00:21:54.674 }' 00:21:54.674 21:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.674 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.931 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:54.931 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.931 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.931 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:54.931 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.931 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.191 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:55.191 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.191 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.191 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:55.191 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:55.191 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:55.191 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:55.450 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:55.450 "name": "BaseBdev3", 00:21:55.450 "aliases": [ 00:21:55.450 "c4ded8e3-d55d-4353-9c17-2d127ce28580" 00:21:55.450 ], 00:21:55.450 "product_name": "Malloc disk", 00:21:55.450 "block_size": 512, 00:21:55.450 "num_blocks": 65536, 00:21:55.450 "uuid": "c4ded8e3-d55d-4353-9c17-2d127ce28580", 00:21:55.450 "assigned_rate_limits": { 00:21:55.450 "rw_ios_per_sec": 0, 00:21:55.450 "rw_mbytes_per_sec": 0, 00:21:55.450 "r_mbytes_per_sec": 0, 00:21:55.451 "w_mbytes_per_sec": 0 00:21:55.451 }, 00:21:55.451 "claimed": true, 00:21:55.451 "claim_type": "exclusive_write", 00:21:55.451 "zoned": false, 00:21:55.451 "supported_io_types": { 00:21:55.451 "read": true, 00:21:55.451 "write": true, 00:21:55.451 "unmap": true, 00:21:55.451 "flush": true, 00:21:55.451 "reset": true, 00:21:55.451 "nvme_admin": false, 00:21:55.451 "nvme_io": false, 00:21:55.451 
"nvme_io_md": false, 00:21:55.451 "write_zeroes": true, 00:21:55.451 "zcopy": true, 00:21:55.451 "get_zone_info": false, 00:21:55.451 "zone_management": false, 00:21:55.451 "zone_append": false, 00:21:55.451 "compare": false, 00:21:55.451 "compare_and_write": false, 00:21:55.451 "abort": true, 00:21:55.451 "seek_hole": false, 00:21:55.451 "seek_data": false, 00:21:55.451 "copy": true, 00:21:55.451 "nvme_iov_md": false 00:21:55.451 }, 00:21:55.451 "memory_domains": [ 00:21:55.451 { 00:21:55.451 "dma_device_id": "system", 00:21:55.451 "dma_device_type": 1 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.451 "dma_device_type": 2 00:21:55.451 } 00:21:55.451 ], 00:21:55.451 "driver_specific": {} 00:21:55.451 }' 00:21:55.451 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.451 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.451 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:55.451 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.451 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.451 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:55.451 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.710 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.710 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:55.710 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.710 21:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.710 21:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:55.710 21:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:55.970 [2024-07-15 21:35:29.189339] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:55.970 [2024-07-15 21:35:29.189440] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.970 [2024-07-15 21:35:29.189562] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.970 [2024-07-15 21:35:29.189846] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.970 [2024-07-15 21:35:29.189878] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 132019 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 132019 ']' 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 132019 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132019 00:21:55.970 
killing process with pid 132019 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132019' 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 132019 00:21:55.970 21:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 132019 00:21:55.970 [2024-07-15 21:35:29.230730] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:56.229 [2024-07-15 21:35:29.512354] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:57.607 ************************************ 00:21:57.607 END TEST raid_state_function_test 00:21:57.607 ************************************ 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:57.607 00:21:57.607 real 0m27.299s 00:21:57.607 user 0m50.336s 00:21:57.607 sys 0m3.420s 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.607 21:35:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:57.607 21:35:30 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:21:57.607 21:35:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:57.607 21:35:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.607 21:35:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:57.607 ************************************ 00:21:57.607 START TEST raid_state_function_test_sb 00:21:57.607 ************************************ 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=133001 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 133001' 00:21:57.607 Process raid pid: 133001 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 133001 /var/tmp/spdk-raid.sock 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 133001 ']' 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:57.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.607 21:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.607 [2024-07-15 21:35:30.840648] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
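For reference, the start-up traced above reduces to launching the standalone bdev_svc app on its own RPC socket and waiting for that socket to answer. A minimal sketch follows; the binary path, socket path and flags are the ones in the trace, while the polling loop is only an assumed stand-in for the harness's waitforlisten helper, not its real implementation.

    #!/usr/bin/env bash
    # Sketch: start the standalone bdev_svc app on a dedicated RPC socket and wait
    # until it responds. Paths/flags copied from the trace; the retry loop is an
    # illustrative replacement for waitforlisten.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Poll an innocuous RPC until the socket is up (give up after ~10 s).
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_bdevs >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "bdev_svc ready, pid $raid_pid"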
00:21:57.607 [2024-07-15 21:35:30.840843] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.867 [2024-07-15 21:35:30.999698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.867 [2024-07-15 21:35:31.192483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.126 [2024-07-15 21:35:31.376662] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:58.385 21:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.385 21:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:21:58.385 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:58.645 [2024-07-15 21:35:31.814298] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:58.645 [2024-07-15 21:35:31.814444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:58.645 [2024-07-15 21:35:31.814474] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:58.645 [2024-07-15 21:35:31.814506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:58.645 [2024-07-15 21:35:31.814523] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:58.645 [2024-07-15 21:35:31.814543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.645 21:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.905 21:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.905 "name": "Existed_Raid", 00:21:58.905 "uuid": 
"9d71df42-1df1-4ed3-b7f9-4158657e7bf4", 00:21:58.905 "strip_size_kb": 0, 00:21:58.905 "state": "configuring", 00:21:58.905 "raid_level": "raid1", 00:21:58.905 "superblock": true, 00:21:58.905 "num_base_bdevs": 3, 00:21:58.905 "num_base_bdevs_discovered": 0, 00:21:58.905 "num_base_bdevs_operational": 3, 00:21:58.905 "base_bdevs_list": [ 00:21:58.905 { 00:21:58.905 "name": "BaseBdev1", 00:21:58.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.905 "is_configured": false, 00:21:58.905 "data_offset": 0, 00:21:58.905 "data_size": 0 00:21:58.905 }, 00:21:58.905 { 00:21:58.905 "name": "BaseBdev2", 00:21:58.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.905 "is_configured": false, 00:21:58.905 "data_offset": 0, 00:21:58.905 "data_size": 0 00:21:58.905 }, 00:21:58.905 { 00:21:58.905 "name": "BaseBdev3", 00:21:58.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.905 "is_configured": false, 00:21:58.905 "data_offset": 0, 00:21:58.905 "data_size": 0 00:21:58.905 } 00:21:58.905 ] 00:21:58.905 }' 00:21:58.905 21:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.905 21:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:59.475 21:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:59.475 [2024-07-15 21:35:32.800499] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:59.475 [2024-07-15 21:35:32.800611] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:59.475 21:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:59.735 [2024-07-15 21:35:32.992205] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:59.735 [2024-07-15 21:35:32.992330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:59.735 [2024-07-15 21:35:32.992361] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.735 [2024-07-15 21:35:32.992385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.735 [2024-07-15 21:35:32.992399] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:59.735 [2024-07-15 21:35:32.992426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:59.735 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:59.994 [2024-07-15 21:35:33.215539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.994 BaseBdev1 00:21:59.994 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:59.994 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:59.994 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:59.994 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:21:59.994 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:59.994 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:59.994 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:00.253 [ 00:22:00.253 { 00:22:00.253 "name": "BaseBdev1", 00:22:00.253 "aliases": [ 00:22:00.253 "7ff727c9-7118-495d-96a6-3a08bc8ac6f6" 00:22:00.253 ], 00:22:00.253 "product_name": "Malloc disk", 00:22:00.253 "block_size": 512, 00:22:00.253 "num_blocks": 65536, 00:22:00.253 "uuid": "7ff727c9-7118-495d-96a6-3a08bc8ac6f6", 00:22:00.253 "assigned_rate_limits": { 00:22:00.253 "rw_ios_per_sec": 0, 00:22:00.253 "rw_mbytes_per_sec": 0, 00:22:00.253 "r_mbytes_per_sec": 0, 00:22:00.253 "w_mbytes_per_sec": 0 00:22:00.253 }, 00:22:00.253 "claimed": true, 00:22:00.253 "claim_type": "exclusive_write", 00:22:00.253 "zoned": false, 00:22:00.253 "supported_io_types": { 00:22:00.253 "read": true, 00:22:00.253 "write": true, 00:22:00.253 "unmap": true, 00:22:00.253 "flush": true, 00:22:00.253 "reset": true, 00:22:00.253 "nvme_admin": false, 00:22:00.253 "nvme_io": false, 00:22:00.253 "nvme_io_md": false, 00:22:00.253 "write_zeroes": true, 00:22:00.253 "zcopy": true, 00:22:00.253 "get_zone_info": false, 00:22:00.253 "zone_management": false, 00:22:00.253 "zone_append": false, 00:22:00.253 "compare": false, 00:22:00.253 "compare_and_write": false, 00:22:00.253 "abort": true, 00:22:00.253 "seek_hole": false, 00:22:00.253 "seek_data": false, 00:22:00.253 "copy": true, 00:22:00.253 "nvme_iov_md": false 00:22:00.253 }, 00:22:00.253 "memory_domains": [ 00:22:00.253 { 00:22:00.253 "dma_device_id": "system", 00:22:00.253 "dma_device_type": 1 00:22:00.253 }, 00:22:00.253 { 00:22:00.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.253 "dma_device_type": 2 00:22:00.253 } 00:22:00.253 ], 00:22:00.253 "driver_specific": {} 00:22:00.253 } 00:22:00.253 ] 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.253 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.511 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.511 "name": "Existed_Raid", 00:22:00.511 "uuid": "52cfabd4-6f13-443e-8710-91c355ff607b", 00:22:00.511 "strip_size_kb": 0, 00:22:00.511 "state": "configuring", 00:22:00.511 "raid_level": "raid1", 00:22:00.511 "superblock": true, 00:22:00.511 "num_base_bdevs": 3, 00:22:00.511 "num_base_bdevs_discovered": 1, 00:22:00.511 "num_base_bdevs_operational": 3, 00:22:00.511 "base_bdevs_list": [ 00:22:00.511 { 00:22:00.511 "name": "BaseBdev1", 00:22:00.511 "uuid": "7ff727c9-7118-495d-96a6-3a08bc8ac6f6", 00:22:00.511 "is_configured": true, 00:22:00.511 "data_offset": 2048, 00:22:00.511 "data_size": 63488 00:22:00.511 }, 00:22:00.511 { 00:22:00.511 "name": "BaseBdev2", 00:22:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.511 "is_configured": false, 00:22:00.511 "data_offset": 0, 00:22:00.511 "data_size": 0 00:22:00.511 }, 00:22:00.511 { 00:22:00.511 "name": "BaseBdev3", 00:22:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.511 "is_configured": false, 00:22:00.511 "data_offset": 0, 00:22:00.511 "data_size": 0 00:22:00.511 } 00:22:00.511 ] 00:22:00.511 }' 00:22:00.511 21:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.511 21:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.080 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:01.339 [2024-07-15 21:35:34.573423] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:01.339 [2024-07-15 21:35:34.573535] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:22:01.339 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:01.599 [2024-07-15 21:35:34.749102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.599 [2024-07-15 21:35:34.750552] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:01.599 [2024-07-15 21:35:34.750647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:01.599 [2024-07-15 21:35:34.750686] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:01.599 [2024-07-15 21:35:34.750723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.599 "name": "Existed_Raid", 00:22:01.599 "uuid": "a7405df7-86b9-429e-bd54-2cc1c8e57ead", 00:22:01.599 "strip_size_kb": 0, 00:22:01.599 "state": "configuring", 00:22:01.599 "raid_level": "raid1", 00:22:01.599 "superblock": true, 00:22:01.599 "num_base_bdevs": 3, 00:22:01.599 "num_base_bdevs_discovered": 1, 00:22:01.599 "num_base_bdevs_operational": 3, 00:22:01.599 "base_bdevs_list": [ 00:22:01.599 { 00:22:01.599 "name": "BaseBdev1", 00:22:01.599 "uuid": "7ff727c9-7118-495d-96a6-3a08bc8ac6f6", 00:22:01.599 "is_configured": true, 00:22:01.599 "data_offset": 2048, 00:22:01.599 "data_size": 63488 00:22:01.599 }, 00:22:01.599 { 00:22:01.599 "name": "BaseBdev2", 00:22:01.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.599 "is_configured": false, 00:22:01.599 "data_offset": 0, 00:22:01.599 "data_size": 0 00:22:01.599 }, 00:22:01.599 { 00:22:01.599 "name": "BaseBdev3", 00:22:01.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.599 "is_configured": false, 00:22:01.599 "data_offset": 0, 00:22:01.599 "data_size": 0 00:22:01.599 } 00:22:01.599 ] 00:22:01.599 }' 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.599 21:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.174 21:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:02.444 [2024-07-15 21:35:35.733717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.444 BaseBdev2 00:22:02.444 21:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:02.444 21:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:02.444 21:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:02.444 21:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:02.444 21:35:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:02.444 21:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:02.444 21:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:02.704 21:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:02.964 [ 00:22:02.964 { 00:22:02.964 "name": "BaseBdev2", 00:22:02.964 "aliases": [ 00:22:02.964 "25a44904-6972-4bfc-905f-471d729c8e34" 00:22:02.964 ], 00:22:02.964 "product_name": "Malloc disk", 00:22:02.964 "block_size": 512, 00:22:02.964 "num_blocks": 65536, 00:22:02.964 "uuid": "25a44904-6972-4bfc-905f-471d729c8e34", 00:22:02.964 "assigned_rate_limits": { 00:22:02.964 "rw_ios_per_sec": 0, 00:22:02.964 "rw_mbytes_per_sec": 0, 00:22:02.964 "r_mbytes_per_sec": 0, 00:22:02.964 "w_mbytes_per_sec": 0 00:22:02.964 }, 00:22:02.964 "claimed": true, 00:22:02.964 "claim_type": "exclusive_write", 00:22:02.964 "zoned": false, 00:22:02.964 "supported_io_types": { 00:22:02.964 "read": true, 00:22:02.964 "write": true, 00:22:02.964 "unmap": true, 00:22:02.964 "flush": true, 00:22:02.964 "reset": true, 00:22:02.964 "nvme_admin": false, 00:22:02.964 "nvme_io": false, 00:22:02.964 "nvme_io_md": false, 00:22:02.964 "write_zeroes": true, 00:22:02.964 "zcopy": true, 00:22:02.964 "get_zone_info": false, 00:22:02.964 "zone_management": false, 00:22:02.964 "zone_append": false, 00:22:02.964 "compare": false, 00:22:02.964 "compare_and_write": false, 00:22:02.964 "abort": true, 00:22:02.964 "seek_hole": false, 00:22:02.964 "seek_data": false, 00:22:02.964 "copy": true, 00:22:02.964 "nvme_iov_md": false 00:22:02.964 }, 00:22:02.964 "memory_domains": [ 00:22:02.964 { 00:22:02.964 "dma_device_id": "system", 00:22:02.964 "dma_device_type": 1 00:22:02.964 }, 00:22:02.964 { 00:22:02.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.964 "dma_device_type": 2 00:22:02.964 } 00:22:02.964 ], 00:22:02.964 "driver_specific": {} 00:22:02.964 } 00:22:02.964 ] 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
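The verify_raid_bdev_state call that begins here amounts to fetching the raid descriptor with bdev_raid_get_bdevs all and comparing a handful of fields against expectations. A small sketch of that check; the field names and jq filter are the ones used in the trace, and the expected values are the ones visible at this stage of the run (two of three base bdevs discovered, raid still configuring):

    # Sketch of the state check the test performs after each base bdev is added.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    state=$(jq -r '.state' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
    operational=$(jq -r '.num_base_bdevs_operational' <<< "$info")

    # With only BaseBdev1 and BaseBdev2 present the raid is still assembling.
    [[ "$state" == "configuring" ]] || echo "unexpected state: $state"
    [[ "$discovered" -eq 2 && "$operational" -eq 3 ]] || echo "unexpected base bdev counts"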
00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.964 "name": "Existed_Raid", 00:22:02.964 "uuid": "a7405df7-86b9-429e-bd54-2cc1c8e57ead", 00:22:02.964 "strip_size_kb": 0, 00:22:02.964 "state": "configuring", 00:22:02.964 "raid_level": "raid1", 00:22:02.964 "superblock": true, 00:22:02.964 "num_base_bdevs": 3, 00:22:02.964 "num_base_bdevs_discovered": 2, 00:22:02.964 "num_base_bdevs_operational": 3, 00:22:02.964 "base_bdevs_list": [ 00:22:02.964 { 00:22:02.964 "name": "BaseBdev1", 00:22:02.964 "uuid": "7ff727c9-7118-495d-96a6-3a08bc8ac6f6", 00:22:02.964 "is_configured": true, 00:22:02.964 "data_offset": 2048, 00:22:02.964 "data_size": 63488 00:22:02.964 }, 00:22:02.964 { 00:22:02.964 "name": "BaseBdev2", 00:22:02.964 "uuid": "25a44904-6972-4bfc-905f-471d729c8e34", 00:22:02.964 "is_configured": true, 00:22:02.964 "data_offset": 2048, 00:22:02.964 "data_size": 63488 00:22:02.964 }, 00:22:02.964 { 00:22:02.964 "name": "BaseBdev3", 00:22:02.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.964 "is_configured": false, 00:22:02.964 "data_offset": 0, 00:22:02.964 "data_size": 0 00:22:02.964 } 00:22:02.964 ] 00:22:02.964 }' 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.964 21:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.550 21:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:03.809 [2024-07-15 21:35:37.118786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:03.809 [2024-07-15 21:35:37.119103] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:22:03.809 [2024-07-15 21:35:37.119136] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:03.809 [2024-07-15 21:35:37.119291] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:03.809 BaseBdev3 00:22:03.809 [2024-07-15 21:35:37.119629] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:22:03.809 [2024-07-15 21:35:37.119670] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:22:03.809 [2024-07-15 21:35:37.119828] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.809 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:03.809 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:03.809 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:03.809 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
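Creating the third malloc bdev is what finally lets the raid assemble: the DEBUG lines above show the io device being registered and Existed_Raid being created as an online raid bdev. The sketch below condenses that last step plus the jq filter the test uses a little later to read the configured members back out of the raid volume itself (RPC names, bdev names and filters are taken from the trace):

    # Sketch: add the last base bdev and confirm the raid assembles.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $RPC bdev_malloc_create 32 512 -b BaseBdev3
    $RPC bdev_wait_for_examine

    # State should now read "online" with 3 of 3 base bdevs discovered.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

    # The configured members can be listed from the raid volume's own descriptor.
    $RPC bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'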
00:22:03.809 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:03.809 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:03.809 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.069 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:04.328 [ 00:22:04.328 { 00:22:04.328 "name": "BaseBdev3", 00:22:04.328 "aliases": [ 00:22:04.328 "3da5a12c-c8be-438c-a5b8-5e92ad3c6019" 00:22:04.328 ], 00:22:04.328 "product_name": "Malloc disk", 00:22:04.328 "block_size": 512, 00:22:04.328 "num_blocks": 65536, 00:22:04.328 "uuid": "3da5a12c-c8be-438c-a5b8-5e92ad3c6019", 00:22:04.328 "assigned_rate_limits": { 00:22:04.328 "rw_ios_per_sec": 0, 00:22:04.328 "rw_mbytes_per_sec": 0, 00:22:04.328 "r_mbytes_per_sec": 0, 00:22:04.328 "w_mbytes_per_sec": 0 00:22:04.328 }, 00:22:04.328 "claimed": true, 00:22:04.328 "claim_type": "exclusive_write", 00:22:04.328 "zoned": false, 00:22:04.328 "supported_io_types": { 00:22:04.328 "read": true, 00:22:04.328 "write": true, 00:22:04.328 "unmap": true, 00:22:04.328 "flush": true, 00:22:04.328 "reset": true, 00:22:04.328 "nvme_admin": false, 00:22:04.328 "nvme_io": false, 00:22:04.328 "nvme_io_md": false, 00:22:04.328 "write_zeroes": true, 00:22:04.328 "zcopy": true, 00:22:04.328 "get_zone_info": false, 00:22:04.328 "zone_management": false, 00:22:04.328 "zone_append": false, 00:22:04.328 "compare": false, 00:22:04.328 "compare_and_write": false, 00:22:04.328 "abort": true, 00:22:04.328 "seek_hole": false, 00:22:04.328 "seek_data": false, 00:22:04.328 "copy": true, 00:22:04.328 "nvme_iov_md": false 00:22:04.328 }, 00:22:04.328 "memory_domains": [ 00:22:04.328 { 00:22:04.328 "dma_device_id": "system", 00:22:04.328 "dma_device_type": 1 00:22:04.328 }, 00:22:04.328 { 00:22:04.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.328 "dma_device_type": 2 00:22:04.328 } 00:22:04.328 ], 00:22:04.328 "driver_specific": {} 00:22:04.328 } 00:22:04.328 ] 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.328 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.328 "name": "Existed_Raid", 00:22:04.328 "uuid": "a7405df7-86b9-429e-bd54-2cc1c8e57ead", 00:22:04.328 "strip_size_kb": 0, 00:22:04.328 "state": "online", 00:22:04.328 "raid_level": "raid1", 00:22:04.328 "superblock": true, 00:22:04.328 "num_base_bdevs": 3, 00:22:04.328 "num_base_bdevs_discovered": 3, 00:22:04.328 "num_base_bdevs_operational": 3, 00:22:04.328 "base_bdevs_list": [ 00:22:04.328 { 00:22:04.328 "name": "BaseBdev1", 00:22:04.329 "uuid": "7ff727c9-7118-495d-96a6-3a08bc8ac6f6", 00:22:04.329 "is_configured": true, 00:22:04.329 "data_offset": 2048, 00:22:04.329 "data_size": 63488 00:22:04.329 }, 00:22:04.329 { 00:22:04.329 "name": "BaseBdev2", 00:22:04.329 "uuid": "25a44904-6972-4bfc-905f-471d729c8e34", 00:22:04.329 "is_configured": true, 00:22:04.329 "data_offset": 2048, 00:22:04.329 "data_size": 63488 00:22:04.329 }, 00:22:04.329 { 00:22:04.329 "name": "BaseBdev3", 00:22:04.329 "uuid": "3da5a12c-c8be-438c-a5b8-5e92ad3c6019", 00:22:04.329 "is_configured": true, 00:22:04.329 "data_offset": 2048, 00:22:04.329 "data_size": 63488 00:22:04.329 } 00:22:04.329 ] 00:22:04.329 }' 00:22:04.329 21:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.329 21:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.898 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:04.898 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:04.898 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:04.898 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:04.898 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:04.898 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:04.898 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:04.898 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:05.158 [2024-07-15 21:35:38.404945] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:05.158 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:05.158 "name": "Existed_Raid", 00:22:05.158 "aliases": [ 00:22:05.158 "a7405df7-86b9-429e-bd54-2cc1c8e57ead" 00:22:05.158 ], 00:22:05.158 "product_name": "Raid Volume", 00:22:05.158 "block_size": 512, 00:22:05.158 "num_blocks": 63488, 00:22:05.158 "uuid": "a7405df7-86b9-429e-bd54-2cc1c8e57ead", 00:22:05.158 "assigned_rate_limits": { 00:22:05.158 
"rw_ios_per_sec": 0, 00:22:05.158 "rw_mbytes_per_sec": 0, 00:22:05.158 "r_mbytes_per_sec": 0, 00:22:05.158 "w_mbytes_per_sec": 0 00:22:05.158 }, 00:22:05.158 "claimed": false, 00:22:05.158 "zoned": false, 00:22:05.158 "supported_io_types": { 00:22:05.158 "read": true, 00:22:05.158 "write": true, 00:22:05.158 "unmap": false, 00:22:05.158 "flush": false, 00:22:05.158 "reset": true, 00:22:05.158 "nvme_admin": false, 00:22:05.158 "nvme_io": false, 00:22:05.158 "nvme_io_md": false, 00:22:05.158 "write_zeroes": true, 00:22:05.158 "zcopy": false, 00:22:05.158 "get_zone_info": false, 00:22:05.158 "zone_management": false, 00:22:05.158 "zone_append": false, 00:22:05.158 "compare": false, 00:22:05.158 "compare_and_write": false, 00:22:05.158 "abort": false, 00:22:05.158 "seek_hole": false, 00:22:05.158 "seek_data": false, 00:22:05.158 "copy": false, 00:22:05.158 "nvme_iov_md": false 00:22:05.158 }, 00:22:05.158 "memory_domains": [ 00:22:05.158 { 00:22:05.158 "dma_device_id": "system", 00:22:05.158 "dma_device_type": 1 00:22:05.158 }, 00:22:05.158 { 00:22:05.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.158 "dma_device_type": 2 00:22:05.158 }, 00:22:05.158 { 00:22:05.158 "dma_device_id": "system", 00:22:05.158 "dma_device_type": 1 00:22:05.158 }, 00:22:05.158 { 00:22:05.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.158 "dma_device_type": 2 00:22:05.158 }, 00:22:05.158 { 00:22:05.158 "dma_device_id": "system", 00:22:05.158 "dma_device_type": 1 00:22:05.158 }, 00:22:05.158 { 00:22:05.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.158 "dma_device_type": 2 00:22:05.158 } 00:22:05.158 ], 00:22:05.158 "driver_specific": { 00:22:05.158 "raid": { 00:22:05.158 "uuid": "a7405df7-86b9-429e-bd54-2cc1c8e57ead", 00:22:05.158 "strip_size_kb": 0, 00:22:05.158 "state": "online", 00:22:05.158 "raid_level": "raid1", 00:22:05.158 "superblock": true, 00:22:05.158 "num_base_bdevs": 3, 00:22:05.158 "num_base_bdevs_discovered": 3, 00:22:05.158 "num_base_bdevs_operational": 3, 00:22:05.158 "base_bdevs_list": [ 00:22:05.158 { 00:22:05.158 "name": "BaseBdev1", 00:22:05.158 "uuid": "7ff727c9-7118-495d-96a6-3a08bc8ac6f6", 00:22:05.158 "is_configured": true, 00:22:05.158 "data_offset": 2048, 00:22:05.158 "data_size": 63488 00:22:05.158 }, 00:22:05.158 { 00:22:05.158 "name": "BaseBdev2", 00:22:05.158 "uuid": "25a44904-6972-4bfc-905f-471d729c8e34", 00:22:05.158 "is_configured": true, 00:22:05.158 "data_offset": 2048, 00:22:05.158 "data_size": 63488 00:22:05.158 }, 00:22:05.158 { 00:22:05.158 "name": "BaseBdev3", 00:22:05.158 "uuid": "3da5a12c-c8be-438c-a5b8-5e92ad3c6019", 00:22:05.158 "is_configured": true, 00:22:05.158 "data_offset": 2048, 00:22:05.158 "data_size": 63488 00:22:05.158 } 00:22:05.158 ] 00:22:05.158 } 00:22:05.158 } 00:22:05.158 }' 00:22:05.158 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:05.158 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:05.158 BaseBdev2 00:22:05.158 BaseBdev3' 00:22:05.158 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.158 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:05.158 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.418 21:35:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.418 "name": "BaseBdev1", 00:22:05.418 "aliases": [ 00:22:05.418 "7ff727c9-7118-495d-96a6-3a08bc8ac6f6" 00:22:05.418 ], 00:22:05.418 "product_name": "Malloc disk", 00:22:05.418 "block_size": 512, 00:22:05.418 "num_blocks": 65536, 00:22:05.418 "uuid": "7ff727c9-7118-495d-96a6-3a08bc8ac6f6", 00:22:05.418 "assigned_rate_limits": { 00:22:05.418 "rw_ios_per_sec": 0, 00:22:05.418 "rw_mbytes_per_sec": 0, 00:22:05.418 "r_mbytes_per_sec": 0, 00:22:05.418 "w_mbytes_per_sec": 0 00:22:05.418 }, 00:22:05.418 "claimed": true, 00:22:05.418 "claim_type": "exclusive_write", 00:22:05.418 "zoned": false, 00:22:05.418 "supported_io_types": { 00:22:05.418 "read": true, 00:22:05.418 "write": true, 00:22:05.418 "unmap": true, 00:22:05.418 "flush": true, 00:22:05.418 "reset": true, 00:22:05.418 "nvme_admin": false, 00:22:05.418 "nvme_io": false, 00:22:05.418 "nvme_io_md": false, 00:22:05.418 "write_zeroes": true, 00:22:05.418 "zcopy": true, 00:22:05.418 "get_zone_info": false, 00:22:05.418 "zone_management": false, 00:22:05.418 "zone_append": false, 00:22:05.418 "compare": false, 00:22:05.418 "compare_and_write": false, 00:22:05.418 "abort": true, 00:22:05.418 "seek_hole": false, 00:22:05.418 "seek_data": false, 00:22:05.418 "copy": true, 00:22:05.418 "nvme_iov_md": false 00:22:05.418 }, 00:22:05.418 "memory_domains": [ 00:22:05.418 { 00:22:05.418 "dma_device_id": "system", 00:22:05.418 "dma_device_type": 1 00:22:05.418 }, 00:22:05.418 { 00:22:05.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.418 "dma_device_type": 2 00:22:05.418 } 00:22:05.418 ], 00:22:05.418 "driver_specific": {} 00:22:05.418 }' 00:22:05.418 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.418 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.418 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.418 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.677 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.677 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.677 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.677 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.677 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.677 21:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.677 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.935 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.935 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.935 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:05.935 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.936 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.936 "name": "BaseBdev2", 00:22:05.936 "aliases": [ 
00:22:05.936 "25a44904-6972-4bfc-905f-471d729c8e34" 00:22:05.936 ], 00:22:05.936 "product_name": "Malloc disk", 00:22:05.936 "block_size": 512, 00:22:05.936 "num_blocks": 65536, 00:22:05.936 "uuid": "25a44904-6972-4bfc-905f-471d729c8e34", 00:22:05.936 "assigned_rate_limits": { 00:22:05.936 "rw_ios_per_sec": 0, 00:22:05.936 "rw_mbytes_per_sec": 0, 00:22:05.936 "r_mbytes_per_sec": 0, 00:22:05.936 "w_mbytes_per_sec": 0 00:22:05.936 }, 00:22:05.936 "claimed": true, 00:22:05.936 "claim_type": "exclusive_write", 00:22:05.936 "zoned": false, 00:22:05.936 "supported_io_types": { 00:22:05.936 "read": true, 00:22:05.936 "write": true, 00:22:05.936 "unmap": true, 00:22:05.936 "flush": true, 00:22:05.936 "reset": true, 00:22:05.936 "nvme_admin": false, 00:22:05.936 "nvme_io": false, 00:22:05.936 "nvme_io_md": false, 00:22:05.936 "write_zeroes": true, 00:22:05.936 "zcopy": true, 00:22:05.936 "get_zone_info": false, 00:22:05.936 "zone_management": false, 00:22:05.936 "zone_append": false, 00:22:05.936 "compare": false, 00:22:05.936 "compare_and_write": false, 00:22:05.936 "abort": true, 00:22:05.936 "seek_hole": false, 00:22:05.936 "seek_data": false, 00:22:05.936 "copy": true, 00:22:05.936 "nvme_iov_md": false 00:22:05.936 }, 00:22:05.936 "memory_domains": [ 00:22:05.936 { 00:22:05.936 "dma_device_id": "system", 00:22:05.936 "dma_device_type": 1 00:22:05.936 }, 00:22:05.936 { 00:22:05.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.936 "dma_device_type": 2 00:22:05.936 } 00:22:05.936 ], 00:22:05.936 "driver_specific": {} 00:22:05.936 }' 00:22:05.936 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.195 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.195 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:06.195 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.195 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.195 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.195 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.195 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.453 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.453 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.453 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.453 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.453 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.453 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:06.453 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:06.711 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:06.711 "name": "BaseBdev3", 00:22:06.711 "aliases": [ 00:22:06.711 "3da5a12c-c8be-438c-a5b8-5e92ad3c6019" 00:22:06.711 ], 00:22:06.711 "product_name": "Malloc disk", 00:22:06.711 "block_size": 512, 
00:22:06.711 "num_blocks": 65536, 00:22:06.711 "uuid": "3da5a12c-c8be-438c-a5b8-5e92ad3c6019", 00:22:06.711 "assigned_rate_limits": { 00:22:06.711 "rw_ios_per_sec": 0, 00:22:06.711 "rw_mbytes_per_sec": 0, 00:22:06.711 "r_mbytes_per_sec": 0, 00:22:06.711 "w_mbytes_per_sec": 0 00:22:06.711 }, 00:22:06.711 "claimed": true, 00:22:06.711 "claim_type": "exclusive_write", 00:22:06.711 "zoned": false, 00:22:06.711 "supported_io_types": { 00:22:06.711 "read": true, 00:22:06.711 "write": true, 00:22:06.711 "unmap": true, 00:22:06.711 "flush": true, 00:22:06.711 "reset": true, 00:22:06.711 "nvme_admin": false, 00:22:06.711 "nvme_io": false, 00:22:06.711 "nvme_io_md": false, 00:22:06.711 "write_zeroes": true, 00:22:06.711 "zcopy": true, 00:22:06.711 "get_zone_info": false, 00:22:06.711 "zone_management": false, 00:22:06.711 "zone_append": false, 00:22:06.711 "compare": false, 00:22:06.711 "compare_and_write": false, 00:22:06.711 "abort": true, 00:22:06.711 "seek_hole": false, 00:22:06.711 "seek_data": false, 00:22:06.711 "copy": true, 00:22:06.711 "nvme_iov_md": false 00:22:06.711 }, 00:22:06.711 "memory_domains": [ 00:22:06.711 { 00:22:06.711 "dma_device_id": "system", 00:22:06.711 "dma_device_type": 1 00:22:06.711 }, 00:22:06.711 { 00:22:06.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.711 "dma_device_type": 2 00:22:06.711 } 00:22:06.711 ], 00:22:06.711 "driver_specific": {} 00:22:06.711 }' 00:22:06.711 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.711 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.711 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:06.711 21:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.711 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.971 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.971 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.971 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.971 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.971 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.971 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.971 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.971 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:07.229 [2024-07-15 21:35:40.485143] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:07.229 21:35:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.229 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.488 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.488 "name": "Existed_Raid", 00:22:07.488 "uuid": "a7405df7-86b9-429e-bd54-2cc1c8e57ead", 00:22:07.488 "strip_size_kb": 0, 00:22:07.488 "state": "online", 00:22:07.488 "raid_level": "raid1", 00:22:07.488 "superblock": true, 00:22:07.488 "num_base_bdevs": 3, 00:22:07.488 "num_base_bdevs_discovered": 2, 00:22:07.488 "num_base_bdevs_operational": 2, 00:22:07.488 "base_bdevs_list": [ 00:22:07.488 { 00:22:07.488 "name": null, 00:22:07.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.488 "is_configured": false, 00:22:07.488 "data_offset": 2048, 00:22:07.488 "data_size": 63488 00:22:07.488 }, 00:22:07.488 { 00:22:07.488 "name": "BaseBdev2", 00:22:07.488 "uuid": "25a44904-6972-4bfc-905f-471d729c8e34", 00:22:07.488 "is_configured": true, 00:22:07.488 "data_offset": 2048, 00:22:07.488 "data_size": 63488 00:22:07.488 }, 00:22:07.488 { 00:22:07.488 "name": "BaseBdev3", 00:22:07.488 "uuid": "3da5a12c-c8be-438c-a5b8-5e92ad3c6019", 00:22:07.488 "is_configured": true, 00:22:07.488 "data_offset": 2048, 00:22:07.488 "data_size": 63488 00:22:07.488 } 00:22:07.488 ] 00:22:07.488 }' 00:22:07.488 21:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.488 21:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.057 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:08.057 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:08.057 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.057 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:08.316 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:08.316 21:35:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.316 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:08.576 [2024-07-15 21:35:41.746077] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:08.576 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:08.576 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:08.576 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.576 21:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:08.835 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:08.835 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.835 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:08.835 [2024-07-15 21:35:42.201395] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.835 [2024-07-15 21:35:42.201557] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:09.110 [2024-07-15 21:35:42.291940] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.110 [2024-07-15 21:35:42.292053] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.110 [2024-07-15 21:35:42.292092] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:22:09.110 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:09.110 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:09.110 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.110 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:09.369 BaseBdev2 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:09.369 21:35:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:09.369 21:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:09.628 21:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:09.887 [ 00:22:09.888 { 00:22:09.888 "name": "BaseBdev2", 00:22:09.888 "aliases": [ 00:22:09.888 "c244f358-e5b2-4efe-9aae-c370b3f34417" 00:22:09.888 ], 00:22:09.888 "product_name": "Malloc disk", 00:22:09.888 "block_size": 512, 00:22:09.888 "num_blocks": 65536, 00:22:09.888 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:09.888 "assigned_rate_limits": { 00:22:09.888 "rw_ios_per_sec": 0, 00:22:09.888 "rw_mbytes_per_sec": 0, 00:22:09.888 "r_mbytes_per_sec": 0, 00:22:09.888 "w_mbytes_per_sec": 0 00:22:09.888 }, 00:22:09.888 "claimed": false, 00:22:09.888 "zoned": false, 00:22:09.888 "supported_io_types": { 00:22:09.888 "read": true, 00:22:09.888 "write": true, 00:22:09.888 "unmap": true, 00:22:09.888 "flush": true, 00:22:09.888 "reset": true, 00:22:09.888 "nvme_admin": false, 00:22:09.888 "nvme_io": false, 00:22:09.888 "nvme_io_md": false, 00:22:09.888 "write_zeroes": true, 00:22:09.888 "zcopy": true, 00:22:09.888 "get_zone_info": false, 00:22:09.888 "zone_management": false, 00:22:09.888 "zone_append": false, 00:22:09.888 "compare": false, 00:22:09.888 "compare_and_write": false, 00:22:09.888 "abort": true, 00:22:09.888 "seek_hole": false, 00:22:09.888 "seek_data": false, 00:22:09.888 "copy": true, 00:22:09.888 "nvme_iov_md": false 00:22:09.888 }, 00:22:09.888 "memory_domains": [ 00:22:09.888 { 00:22:09.888 "dma_device_id": "system", 00:22:09.888 "dma_device_type": 1 00:22:09.888 }, 00:22:09.888 { 00:22:09.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.888 "dma_device_type": 2 00:22:09.888 } 00:22:09.888 ], 00:22:09.888 "driver_specific": {} 00:22:09.888 } 00:22:09.888 ] 00:22:09.888 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:09.888 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:09.888 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:09.888 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:09.888 BaseBdev3 00:22:10.146 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:10.146 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:10.146 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:10.146 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:10.146 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
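This stretch of the run exercises member removal: deleting one malloc bdev out from under the online raid1 left it online but degraded (num_base_bdevs_operational dropping from 3 to 2), deleting the remaining members took it offline, and after the base bdevs are recreated here a member is detached explicitly with bdev_raid_remove_base_bdev in the trace that follows. A condensed sketch of that degraded-state check, with RPC names and fields taken from the surrounding log; the bdev names are illustrative for whichever member is being removed at a given step:

    # Sketch: remove one member of an online raid1 and confirm degraded operation.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Deleting the malloc bdev pulls it out from under the raid...
    $RPC bdev_malloc_delete BaseBdev1

    # ...but raid1 has redundancy, so the array should stay "online" with one
    # fewer operational member.
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_operational)"'

    # A member can also be detached explicitly, as the trace does further on.
    $RPC bdev_raid_remove_base_bdev BaseBdev2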
00:22:10.146 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:10.146 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:10.146 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:10.404 [ 00:22:10.404 { 00:22:10.404 "name": "BaseBdev3", 00:22:10.404 "aliases": [ 00:22:10.404 "9d8ad90c-24be-4d70-a939-fe26e951b7b4" 00:22:10.404 ], 00:22:10.404 "product_name": "Malloc disk", 00:22:10.404 "block_size": 512, 00:22:10.404 "num_blocks": 65536, 00:22:10.404 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:10.404 "assigned_rate_limits": { 00:22:10.404 "rw_ios_per_sec": 0, 00:22:10.404 "rw_mbytes_per_sec": 0, 00:22:10.404 "r_mbytes_per_sec": 0, 00:22:10.404 "w_mbytes_per_sec": 0 00:22:10.404 }, 00:22:10.404 "claimed": false, 00:22:10.404 "zoned": false, 00:22:10.404 "supported_io_types": { 00:22:10.404 "read": true, 00:22:10.404 "write": true, 00:22:10.404 "unmap": true, 00:22:10.404 "flush": true, 00:22:10.404 "reset": true, 00:22:10.404 "nvme_admin": false, 00:22:10.404 "nvme_io": false, 00:22:10.404 "nvme_io_md": false, 00:22:10.404 "write_zeroes": true, 00:22:10.404 "zcopy": true, 00:22:10.404 "get_zone_info": false, 00:22:10.404 "zone_management": false, 00:22:10.404 "zone_append": false, 00:22:10.404 "compare": false, 00:22:10.404 "compare_and_write": false, 00:22:10.404 "abort": true, 00:22:10.404 "seek_hole": false, 00:22:10.404 "seek_data": false, 00:22:10.404 "copy": true, 00:22:10.404 "nvme_iov_md": false 00:22:10.404 }, 00:22:10.404 "memory_domains": [ 00:22:10.404 { 00:22:10.404 "dma_device_id": "system", 00:22:10.404 "dma_device_type": 1 00:22:10.404 }, 00:22:10.404 { 00:22:10.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.404 "dma_device_type": 2 00:22:10.404 } 00:22:10.404 ], 00:22:10.404 "driver_specific": {} 00:22:10.404 } 00:22:10.404 ] 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:10.404 [2024-07-15 21:35:43.749147] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:10.404 [2024-07-15 21:35:43.749251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:10.404 [2024-07-15 21:35:43.749310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:10.404 [2024-07-15 21:35:43.750809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.404 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.663 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.663 "name": "Existed_Raid", 00:22:10.663 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:10.663 "strip_size_kb": 0, 00:22:10.663 "state": "configuring", 00:22:10.663 "raid_level": "raid1", 00:22:10.663 "superblock": true, 00:22:10.663 "num_base_bdevs": 3, 00:22:10.663 "num_base_bdevs_discovered": 2, 00:22:10.663 "num_base_bdevs_operational": 3, 00:22:10.663 "base_bdevs_list": [ 00:22:10.663 { 00:22:10.663 "name": "BaseBdev1", 00:22:10.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.663 "is_configured": false, 00:22:10.663 "data_offset": 0, 00:22:10.663 "data_size": 0 00:22:10.663 }, 00:22:10.663 { 00:22:10.663 "name": "BaseBdev2", 00:22:10.663 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:10.663 "is_configured": true, 00:22:10.663 "data_offset": 2048, 00:22:10.663 "data_size": 63488 00:22:10.663 }, 00:22:10.663 { 00:22:10.663 "name": "BaseBdev3", 00:22:10.663 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:10.663 "is_configured": true, 00:22:10.663 "data_offset": 2048, 00:22:10.663 "data_size": 63488 00:22:10.663 } 00:22:10.663 ] 00:22:10.663 }' 00:22:10.663 21:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.663 21:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.256 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:11.522 [2024-07-15 21:35:44.691464] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:11.522 21:35:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.522 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.781 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.781 "name": "Existed_Raid", 00:22:11.781 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:11.781 "strip_size_kb": 0, 00:22:11.781 "state": "configuring", 00:22:11.781 "raid_level": "raid1", 00:22:11.781 "superblock": true, 00:22:11.781 "num_base_bdevs": 3, 00:22:11.781 "num_base_bdevs_discovered": 1, 00:22:11.781 "num_base_bdevs_operational": 3, 00:22:11.781 "base_bdevs_list": [ 00:22:11.781 { 00:22:11.781 "name": "BaseBdev1", 00:22:11.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.781 "is_configured": false, 00:22:11.781 "data_offset": 0, 00:22:11.781 "data_size": 0 00:22:11.781 }, 00:22:11.781 { 00:22:11.781 "name": null, 00:22:11.781 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:11.781 "is_configured": false, 00:22:11.781 "data_offset": 2048, 00:22:11.781 "data_size": 63488 00:22:11.781 }, 00:22:11.781 { 00:22:11.781 "name": "BaseBdev3", 00:22:11.781 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:11.781 "is_configured": true, 00:22:11.781 "data_offset": 2048, 00:22:11.781 "data_size": 63488 00:22:11.781 } 00:22:11.781 ] 00:22:11.781 }' 00:22:11.781 21:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.781 21:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.350 21:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.350 21:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:12.350 21:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:12.350 21:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:12.610 [2024-07-15 21:35:45.883256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.610 BaseBdev1 00:22:12.610 21:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:12.610 21:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:12.610 21:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:12.610 21:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:12.610 21:35:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:12.610 21:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:12.610 21:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:12.869 21:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:13.129 [ 00:22:13.129 { 00:22:13.129 "name": "BaseBdev1", 00:22:13.129 "aliases": [ 00:22:13.129 "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5" 00:22:13.129 ], 00:22:13.129 "product_name": "Malloc disk", 00:22:13.129 "block_size": 512, 00:22:13.129 "num_blocks": 65536, 00:22:13.129 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:13.129 "assigned_rate_limits": { 00:22:13.129 "rw_ios_per_sec": 0, 00:22:13.129 "rw_mbytes_per_sec": 0, 00:22:13.129 "r_mbytes_per_sec": 0, 00:22:13.129 "w_mbytes_per_sec": 0 00:22:13.129 }, 00:22:13.129 "claimed": true, 00:22:13.129 "claim_type": "exclusive_write", 00:22:13.129 "zoned": false, 00:22:13.129 "supported_io_types": { 00:22:13.129 "read": true, 00:22:13.129 "write": true, 00:22:13.129 "unmap": true, 00:22:13.129 "flush": true, 00:22:13.129 "reset": true, 00:22:13.129 "nvme_admin": false, 00:22:13.129 "nvme_io": false, 00:22:13.129 "nvme_io_md": false, 00:22:13.129 "write_zeroes": true, 00:22:13.129 "zcopy": true, 00:22:13.129 "get_zone_info": false, 00:22:13.129 "zone_management": false, 00:22:13.129 "zone_append": false, 00:22:13.129 "compare": false, 00:22:13.129 "compare_and_write": false, 00:22:13.129 "abort": true, 00:22:13.129 "seek_hole": false, 00:22:13.129 "seek_data": false, 00:22:13.129 "copy": true, 00:22:13.129 "nvme_iov_md": false 00:22:13.129 }, 00:22:13.129 "memory_domains": [ 00:22:13.129 { 00:22:13.129 "dma_device_id": "system", 00:22:13.129 "dma_device_type": 1 00:22:13.129 }, 00:22:13.129 { 00:22:13.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.129 "dma_device_type": 2 00:22:13.129 } 00:22:13.129 ], 00:22:13.129 "driver_specific": {} 00:22:13.129 } 00:22:13.129 ] 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
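The verify_raid_bdev_state checks traced in this part of the log reduce to one query plus jq filters over its output. A condensed sketch follows, using only the RPC calls and jq expressions shown in this log (rpc/sock abbreviate the scripts/rpc.py path and the /var/tmp/spdk-raid.sock socket as in the earlier sketch); it is an illustration of the pattern, not the exact helper code.
# full raid bdev record for Existed_Raid (state, raid_level, num_base_bdevs_discovered, base_bdevs_list, ...)
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# removing a base bdev keeps its slot in base_bdevs_list but marks it unconfigured
$rpc -s $sock bdev_raid_remove_base_bdev BaseBdev2
$rpc -s $sock bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # expected: false while BaseBdev2 is out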
00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.129 "name": "Existed_Raid", 00:22:13.129 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:13.129 "strip_size_kb": 0, 00:22:13.129 "state": "configuring", 00:22:13.129 "raid_level": "raid1", 00:22:13.129 "superblock": true, 00:22:13.129 "num_base_bdevs": 3, 00:22:13.129 "num_base_bdevs_discovered": 2, 00:22:13.129 "num_base_bdevs_operational": 3, 00:22:13.129 "base_bdevs_list": [ 00:22:13.129 { 00:22:13.129 "name": "BaseBdev1", 00:22:13.129 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:13.129 "is_configured": true, 00:22:13.129 "data_offset": 2048, 00:22:13.129 "data_size": 63488 00:22:13.129 }, 00:22:13.129 { 00:22:13.129 "name": null, 00:22:13.129 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:13.129 "is_configured": false, 00:22:13.129 "data_offset": 2048, 00:22:13.129 "data_size": 63488 00:22:13.129 }, 00:22:13.129 { 00:22:13.129 "name": "BaseBdev3", 00:22:13.129 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:13.129 "is_configured": true, 00:22:13.129 "data_offset": 2048, 00:22:13.129 "data_size": 63488 00:22:13.129 } 00:22:13.129 ] 00:22:13.129 }' 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.129 21:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.698 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:13.698 21:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.957 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:13.957 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:14.216 [2024-07-15 21:35:47.344367] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.216 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.216 "name": "Existed_Raid", 00:22:14.216 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:14.216 "strip_size_kb": 0, 00:22:14.216 "state": "configuring", 00:22:14.216 "raid_level": "raid1", 00:22:14.216 "superblock": true, 00:22:14.216 "num_base_bdevs": 3, 00:22:14.217 "num_base_bdevs_discovered": 1, 00:22:14.217 "num_base_bdevs_operational": 3, 00:22:14.217 "base_bdevs_list": [ 00:22:14.217 { 00:22:14.217 "name": "BaseBdev1", 00:22:14.217 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:14.217 "is_configured": true, 00:22:14.217 "data_offset": 2048, 00:22:14.217 "data_size": 63488 00:22:14.217 }, 00:22:14.217 { 00:22:14.217 "name": null, 00:22:14.217 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:14.217 "is_configured": false, 00:22:14.217 "data_offset": 2048, 00:22:14.217 "data_size": 63488 00:22:14.217 }, 00:22:14.217 { 00:22:14.217 "name": null, 00:22:14.217 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:14.217 "is_configured": false, 00:22:14.217 "data_offset": 2048, 00:22:14.217 "data_size": 63488 00:22:14.217 } 00:22:14.217 ] 00:22:14.217 }' 00:22:14.217 21:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.217 21:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.785 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.785 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:15.044 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:15.044 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:15.304 [2024-07-15 21:35:48.486457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.304 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.563 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.563 "name": "Existed_Raid", 00:22:15.563 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:15.563 "strip_size_kb": 0, 00:22:15.563 "state": "configuring", 00:22:15.563 "raid_level": "raid1", 00:22:15.563 "superblock": true, 00:22:15.563 "num_base_bdevs": 3, 00:22:15.563 "num_base_bdevs_discovered": 2, 00:22:15.563 "num_base_bdevs_operational": 3, 00:22:15.563 "base_bdevs_list": [ 00:22:15.563 { 00:22:15.563 "name": "BaseBdev1", 00:22:15.563 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:15.563 "is_configured": true, 00:22:15.563 "data_offset": 2048, 00:22:15.563 "data_size": 63488 00:22:15.563 }, 00:22:15.563 { 00:22:15.563 "name": null, 00:22:15.563 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:15.563 "is_configured": false, 00:22:15.563 "data_offset": 2048, 00:22:15.563 "data_size": 63488 00:22:15.563 }, 00:22:15.563 { 00:22:15.563 "name": "BaseBdev3", 00:22:15.563 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:15.563 "is_configured": true, 00:22:15.563 "data_offset": 2048, 00:22:15.563 "data_size": 63488 00:22:15.563 } 00:22:15.563 ] 00:22:15.563 }' 00:22:15.563 21:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.563 21:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.130 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:16.130 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:16.431 [2024-07-15 21:35:49.660489] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.431 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.688 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.688 "name": "Existed_Raid", 00:22:16.688 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:16.688 "strip_size_kb": 0, 00:22:16.688 "state": "configuring", 00:22:16.688 "raid_level": "raid1", 00:22:16.688 "superblock": true, 00:22:16.688 "num_base_bdevs": 3, 00:22:16.688 "num_base_bdevs_discovered": 1, 00:22:16.688 "num_base_bdevs_operational": 3, 00:22:16.688 "base_bdevs_list": [ 00:22:16.688 { 00:22:16.688 "name": null, 00:22:16.688 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:16.688 "is_configured": false, 00:22:16.688 "data_offset": 2048, 00:22:16.688 "data_size": 63488 00:22:16.688 }, 00:22:16.688 { 00:22:16.688 "name": null, 00:22:16.688 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:16.688 "is_configured": false, 00:22:16.688 "data_offset": 2048, 00:22:16.688 "data_size": 63488 00:22:16.688 }, 00:22:16.688 { 00:22:16.688 "name": "BaseBdev3", 00:22:16.688 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:16.688 "is_configured": true, 00:22:16.688 "data_offset": 2048, 00:22:16.688 "data_size": 63488 00:22:16.688 } 00:22:16.688 ] 00:22:16.688 }' 00:22:16.688 21:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.688 21:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.256 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.256 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:17.515 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:17.515 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:17.774 [2024-07-15 21:35:50.900353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:17.774 21:35:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.774 21:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.774 21:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.774 "name": "Existed_Raid", 00:22:17.774 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:17.774 "strip_size_kb": 0, 00:22:17.774 "state": "configuring", 00:22:17.774 "raid_level": "raid1", 00:22:17.774 "superblock": true, 00:22:17.774 "num_base_bdevs": 3, 00:22:17.774 "num_base_bdevs_discovered": 2, 00:22:17.774 "num_base_bdevs_operational": 3, 00:22:17.774 "base_bdevs_list": [ 00:22:17.774 { 00:22:17.774 "name": null, 00:22:17.774 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:17.774 "is_configured": false, 00:22:17.774 "data_offset": 2048, 00:22:17.774 "data_size": 63488 00:22:17.774 }, 00:22:17.774 { 00:22:17.774 "name": "BaseBdev2", 00:22:17.774 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:17.774 "is_configured": true, 00:22:17.774 "data_offset": 2048, 00:22:17.774 "data_size": 63488 00:22:17.774 }, 00:22:17.774 { 00:22:17.774 "name": "BaseBdev3", 00:22:17.774 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:17.774 "is_configured": true, 00:22:17.774 "data_offset": 2048, 00:22:17.774 "data_size": 63488 00:22:17.774 } 00:22:17.774 ] 00:22:17.774 }' 00:22:17.774 21:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.774 21:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.341 21:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.341 21:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:18.599 21:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:18.599 21:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.599 21:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:18.859 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d8ab1a2f-d4f7-4789-8ed7-d71238866ec5 00:22:19.118 [2024-07-15 21:35:52.240316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:19.118 [2024-07-15 21:35:52.240559] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:19.118 [2024-07-15 
21:35:52.240599] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:19.118 [2024-07-15 21:35:52.240717] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:19.118 NewBaseBdev 00:22:19.118 [2024-07-15 21:35:52.241009] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:19.118 [2024-07-15 21:35:52.241052] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:22:19.118 [2024-07-15 21:35:52.241208] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.118 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:19.118 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:19.118 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:19.118 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:19.118 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:19.119 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:19.119 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:19.119 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:19.377 [ 00:22:19.377 { 00:22:19.377 "name": "NewBaseBdev", 00:22:19.377 "aliases": [ 00:22:19.377 "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5" 00:22:19.377 ], 00:22:19.377 "product_name": "Malloc disk", 00:22:19.377 "block_size": 512, 00:22:19.377 "num_blocks": 65536, 00:22:19.377 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:19.377 "assigned_rate_limits": { 00:22:19.377 "rw_ios_per_sec": 0, 00:22:19.377 "rw_mbytes_per_sec": 0, 00:22:19.377 "r_mbytes_per_sec": 0, 00:22:19.377 "w_mbytes_per_sec": 0 00:22:19.377 }, 00:22:19.377 "claimed": true, 00:22:19.377 "claim_type": "exclusive_write", 00:22:19.377 "zoned": false, 00:22:19.377 "supported_io_types": { 00:22:19.377 "read": true, 00:22:19.377 "write": true, 00:22:19.377 "unmap": true, 00:22:19.377 "flush": true, 00:22:19.377 "reset": true, 00:22:19.377 "nvme_admin": false, 00:22:19.377 "nvme_io": false, 00:22:19.377 "nvme_io_md": false, 00:22:19.377 "write_zeroes": true, 00:22:19.377 "zcopy": true, 00:22:19.377 "get_zone_info": false, 00:22:19.377 "zone_management": false, 00:22:19.377 "zone_append": false, 00:22:19.377 "compare": false, 00:22:19.377 "compare_and_write": false, 00:22:19.377 "abort": true, 00:22:19.377 "seek_hole": false, 00:22:19.377 "seek_data": false, 00:22:19.377 "copy": true, 00:22:19.377 "nvme_iov_md": false 00:22:19.377 }, 00:22:19.377 "memory_domains": [ 00:22:19.377 { 00:22:19.377 "dma_device_id": "system", 00:22:19.377 "dma_device_type": 1 00:22:19.377 }, 00:22:19.377 { 00:22:19.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.377 "dma_device_type": 2 00:22:19.377 } 00:22:19.377 ], 00:22:19.377 "driver_specific": {} 00:22:19.377 } 00:22:19.377 ] 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:19.377 21:35:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.377 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.635 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.635 "name": "Existed_Raid", 00:22:19.635 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:19.636 "strip_size_kb": 0, 00:22:19.636 "state": "online", 00:22:19.636 "raid_level": "raid1", 00:22:19.636 "superblock": true, 00:22:19.636 "num_base_bdevs": 3, 00:22:19.636 "num_base_bdevs_discovered": 3, 00:22:19.636 "num_base_bdevs_operational": 3, 00:22:19.636 "base_bdevs_list": [ 00:22:19.636 { 00:22:19.636 "name": "NewBaseBdev", 00:22:19.636 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:19.636 "is_configured": true, 00:22:19.636 "data_offset": 2048, 00:22:19.636 "data_size": 63488 00:22:19.636 }, 00:22:19.636 { 00:22:19.636 "name": "BaseBdev2", 00:22:19.636 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:19.636 "is_configured": true, 00:22:19.636 "data_offset": 2048, 00:22:19.636 "data_size": 63488 00:22:19.636 }, 00:22:19.636 { 00:22:19.636 "name": "BaseBdev3", 00:22:19.636 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:19.636 "is_configured": true, 00:22:19.636 "data_offset": 2048, 00:22:19.636 "data_size": 63488 00:22:19.636 } 00:22:19.636 ] 00:22:19.636 }' 00:22:19.636 21:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.636 21:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:20.204 [2024-07-15 21:35:53.554319] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:20.204 "name": "Existed_Raid", 00:22:20.204 "aliases": [ 00:22:20.204 "f8c51be3-1168-4336-9cf7-5edaf6198047" 00:22:20.204 ], 00:22:20.204 "product_name": "Raid Volume", 00:22:20.204 "block_size": 512, 00:22:20.204 "num_blocks": 63488, 00:22:20.204 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:20.204 "assigned_rate_limits": { 00:22:20.204 "rw_ios_per_sec": 0, 00:22:20.204 "rw_mbytes_per_sec": 0, 00:22:20.204 "r_mbytes_per_sec": 0, 00:22:20.204 "w_mbytes_per_sec": 0 00:22:20.204 }, 00:22:20.204 "claimed": false, 00:22:20.204 "zoned": false, 00:22:20.204 "supported_io_types": { 00:22:20.204 "read": true, 00:22:20.204 "write": true, 00:22:20.204 "unmap": false, 00:22:20.204 "flush": false, 00:22:20.204 "reset": true, 00:22:20.204 "nvme_admin": false, 00:22:20.204 "nvme_io": false, 00:22:20.204 "nvme_io_md": false, 00:22:20.204 "write_zeroes": true, 00:22:20.204 "zcopy": false, 00:22:20.204 "get_zone_info": false, 00:22:20.204 "zone_management": false, 00:22:20.204 "zone_append": false, 00:22:20.204 "compare": false, 00:22:20.204 "compare_and_write": false, 00:22:20.204 "abort": false, 00:22:20.204 "seek_hole": false, 00:22:20.204 "seek_data": false, 00:22:20.204 "copy": false, 00:22:20.204 "nvme_iov_md": false 00:22:20.204 }, 00:22:20.204 "memory_domains": [ 00:22:20.204 { 00:22:20.204 "dma_device_id": "system", 00:22:20.204 "dma_device_type": 1 00:22:20.204 }, 00:22:20.204 { 00:22:20.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.204 "dma_device_type": 2 00:22:20.204 }, 00:22:20.204 { 00:22:20.204 "dma_device_id": "system", 00:22:20.204 "dma_device_type": 1 00:22:20.204 }, 00:22:20.204 { 00:22:20.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.204 "dma_device_type": 2 00:22:20.204 }, 00:22:20.204 { 00:22:20.204 "dma_device_id": "system", 00:22:20.204 "dma_device_type": 1 00:22:20.204 }, 00:22:20.204 { 00:22:20.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.204 "dma_device_type": 2 00:22:20.204 } 00:22:20.204 ], 00:22:20.204 "driver_specific": { 00:22:20.204 "raid": { 00:22:20.204 "uuid": "f8c51be3-1168-4336-9cf7-5edaf6198047", 00:22:20.204 "strip_size_kb": 0, 00:22:20.204 "state": "online", 00:22:20.204 "raid_level": "raid1", 00:22:20.204 "superblock": true, 00:22:20.204 "num_base_bdevs": 3, 00:22:20.204 "num_base_bdevs_discovered": 3, 00:22:20.204 "num_base_bdevs_operational": 3, 00:22:20.204 "base_bdevs_list": [ 00:22:20.204 { 00:22:20.204 "name": "NewBaseBdev", 00:22:20.204 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:20.204 "is_configured": true, 00:22:20.204 "data_offset": 2048, 00:22:20.204 "data_size": 63488 00:22:20.204 }, 00:22:20.204 { 00:22:20.204 "name": "BaseBdev2", 00:22:20.204 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:20.204 "is_configured": true, 00:22:20.204 "data_offset": 2048, 00:22:20.204 "data_size": 63488 00:22:20.204 }, 00:22:20.204 { 00:22:20.204 "name": "BaseBdev3", 00:22:20.204 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:20.204 "is_configured": true, 
00:22:20.204 "data_offset": 2048, 00:22:20.204 "data_size": 63488 00:22:20.204 } 00:22:20.204 ] 00:22:20.204 } 00:22:20.204 } 00:22:20.204 }' 00:22:20.204 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:20.462 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:20.462 BaseBdev2 00:22:20.462 BaseBdev3' 00:22:20.462 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.462 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:20.462 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.462 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.462 "name": "NewBaseBdev", 00:22:20.462 "aliases": [ 00:22:20.462 "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5" 00:22:20.462 ], 00:22:20.462 "product_name": "Malloc disk", 00:22:20.462 "block_size": 512, 00:22:20.462 "num_blocks": 65536, 00:22:20.462 "uuid": "d8ab1a2f-d4f7-4789-8ed7-d71238866ec5", 00:22:20.462 "assigned_rate_limits": { 00:22:20.462 "rw_ios_per_sec": 0, 00:22:20.462 "rw_mbytes_per_sec": 0, 00:22:20.462 "r_mbytes_per_sec": 0, 00:22:20.462 "w_mbytes_per_sec": 0 00:22:20.462 }, 00:22:20.462 "claimed": true, 00:22:20.462 "claim_type": "exclusive_write", 00:22:20.462 "zoned": false, 00:22:20.462 "supported_io_types": { 00:22:20.462 "read": true, 00:22:20.462 "write": true, 00:22:20.462 "unmap": true, 00:22:20.462 "flush": true, 00:22:20.462 "reset": true, 00:22:20.462 "nvme_admin": false, 00:22:20.462 "nvme_io": false, 00:22:20.462 "nvme_io_md": false, 00:22:20.462 "write_zeroes": true, 00:22:20.462 "zcopy": true, 00:22:20.462 "get_zone_info": false, 00:22:20.462 "zone_management": false, 00:22:20.462 "zone_append": false, 00:22:20.462 "compare": false, 00:22:20.462 "compare_and_write": false, 00:22:20.462 "abort": true, 00:22:20.462 "seek_hole": false, 00:22:20.462 "seek_data": false, 00:22:20.462 "copy": true, 00:22:20.462 "nvme_iov_md": false 00:22:20.462 }, 00:22:20.463 "memory_domains": [ 00:22:20.463 { 00:22:20.463 "dma_device_id": "system", 00:22:20.463 "dma_device_type": 1 00:22:20.463 }, 00:22:20.463 { 00:22:20.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.463 "dma_device_type": 2 00:22:20.463 } 00:22:20.463 ], 00:22:20.463 "driver_specific": {} 00:22:20.463 }' 00:22:20.463 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.721 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.721 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.721 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.721 21:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.721 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.721 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.721 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.980 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:22:20.980 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.980 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.980 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.980 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.980 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:20.980 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.244 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.244 "name": "BaseBdev2", 00:22:21.244 "aliases": [ 00:22:21.244 "c244f358-e5b2-4efe-9aae-c370b3f34417" 00:22:21.244 ], 00:22:21.244 "product_name": "Malloc disk", 00:22:21.244 "block_size": 512, 00:22:21.244 "num_blocks": 65536, 00:22:21.244 "uuid": "c244f358-e5b2-4efe-9aae-c370b3f34417", 00:22:21.244 "assigned_rate_limits": { 00:22:21.244 "rw_ios_per_sec": 0, 00:22:21.244 "rw_mbytes_per_sec": 0, 00:22:21.244 "r_mbytes_per_sec": 0, 00:22:21.244 "w_mbytes_per_sec": 0 00:22:21.244 }, 00:22:21.244 "claimed": true, 00:22:21.244 "claim_type": "exclusive_write", 00:22:21.244 "zoned": false, 00:22:21.244 "supported_io_types": { 00:22:21.244 "read": true, 00:22:21.244 "write": true, 00:22:21.244 "unmap": true, 00:22:21.244 "flush": true, 00:22:21.244 "reset": true, 00:22:21.244 "nvme_admin": false, 00:22:21.244 "nvme_io": false, 00:22:21.244 "nvme_io_md": false, 00:22:21.244 "write_zeroes": true, 00:22:21.244 "zcopy": true, 00:22:21.244 "get_zone_info": false, 00:22:21.244 "zone_management": false, 00:22:21.244 "zone_append": false, 00:22:21.244 "compare": false, 00:22:21.244 "compare_and_write": false, 00:22:21.244 "abort": true, 00:22:21.244 "seek_hole": false, 00:22:21.244 "seek_data": false, 00:22:21.244 "copy": true, 00:22:21.244 "nvme_iov_md": false 00:22:21.244 }, 00:22:21.244 "memory_domains": [ 00:22:21.244 { 00:22:21.244 "dma_device_id": "system", 00:22:21.244 "dma_device_type": 1 00:22:21.244 }, 00:22:21.244 { 00:22:21.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.244 "dma_device_type": 2 00:22:21.244 } 00:22:21.244 ], 00:22:21.244 "driver_specific": {} 00:22:21.244 }' 00:22:21.244 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.244 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.244 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.244 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.244 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.244 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.244 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.510 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.510 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.510 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.510 21:35:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.510 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.510 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:21.510 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:21.510 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.767 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.767 "name": "BaseBdev3", 00:22:21.767 "aliases": [ 00:22:21.767 "9d8ad90c-24be-4d70-a939-fe26e951b7b4" 00:22:21.767 ], 00:22:21.767 "product_name": "Malloc disk", 00:22:21.767 "block_size": 512, 00:22:21.767 "num_blocks": 65536, 00:22:21.767 "uuid": "9d8ad90c-24be-4d70-a939-fe26e951b7b4", 00:22:21.767 "assigned_rate_limits": { 00:22:21.767 "rw_ios_per_sec": 0, 00:22:21.767 "rw_mbytes_per_sec": 0, 00:22:21.767 "r_mbytes_per_sec": 0, 00:22:21.767 "w_mbytes_per_sec": 0 00:22:21.767 }, 00:22:21.767 "claimed": true, 00:22:21.767 "claim_type": "exclusive_write", 00:22:21.767 "zoned": false, 00:22:21.767 "supported_io_types": { 00:22:21.767 "read": true, 00:22:21.767 "write": true, 00:22:21.767 "unmap": true, 00:22:21.767 "flush": true, 00:22:21.767 "reset": true, 00:22:21.767 "nvme_admin": false, 00:22:21.767 "nvme_io": false, 00:22:21.767 "nvme_io_md": false, 00:22:21.767 "write_zeroes": true, 00:22:21.767 "zcopy": true, 00:22:21.767 "get_zone_info": false, 00:22:21.767 "zone_management": false, 00:22:21.767 "zone_append": false, 00:22:21.767 "compare": false, 00:22:21.767 "compare_and_write": false, 00:22:21.767 "abort": true, 00:22:21.767 "seek_hole": false, 00:22:21.767 "seek_data": false, 00:22:21.767 "copy": true, 00:22:21.767 "nvme_iov_md": false 00:22:21.767 }, 00:22:21.767 "memory_domains": [ 00:22:21.767 { 00:22:21.767 "dma_device_id": "system", 00:22:21.767 "dma_device_type": 1 00:22:21.767 }, 00:22:21.767 { 00:22:21.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.767 "dma_device_type": 2 00:22:21.767 } 00:22:21.767 ], 00:22:21.767 "driver_specific": {} 00:22:21.767 }' 00:22:21.767 21:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.767 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.767 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.768 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.768 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.026 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:22.026 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.026 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.026 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:22.026 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.026 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:22.285 [2024-07-15 21:35:55.574563] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:22.285 [2024-07-15 21:35:55.574629] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.285 [2024-07-15 21:35:55.574716] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.285 [2024-07-15 21:35:55.574996] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.285 [2024-07-15 21:35:55.575028] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 133001 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 133001 ']' 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 133001 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133001 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133001' 00:22:22.285 killing process with pid 133001 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 133001 00:22:22.285 21:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 133001 00:22:22.285 [2024-07-15 21:35:55.614806] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:22.544 [2024-07-15 21:35:55.880881] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:23.920 ************************************ 00:22:23.920 END TEST raid_state_function_test_sb 00:22:23.920 ************************************ 00:22:23.920 21:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:23.920 00:22:23.920 real 0m26.258s 00:22:23.920 user 0m48.478s 00:22:23.920 sys 0m3.391s 00:22:23.920 21:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.920 21:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.920 21:35:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:23.920 21:35:57 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:22:23.920 21:35:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:23.920 21:35:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.920 21:35:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:23.920 ************************************ 00:22:23.920 START TEST 
raid_superblock_test 00:22:23.920 ************************************ 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=133990 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 133990 /var/tmp/spdk-raid.sock 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 133990 ']' 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:23.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.920 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.920 [2024-07-15 21:35:57.160951] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
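The raid_superblock_test setup that begins here follows the usual pattern for these suites: start the bdev_svc app on the raid RPC socket, then layer passthru bdevs on top of malloc bdevs. A rough sketch based on the commands traced in this section; the backgrounding and pid bookkeeping are simplified stand-ins for the waitforlisten/killprocess helpers, and rpc/sock are the same shorthand as in the sketches above.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!                                            # recorded so killprocess can stop the app at the end of the test
# waitforlisten (not expanded here) blocks until the RPC socket answers
$rpc -s $sock bdev_malloc_create 32 512 -b malloc1     # backing malloc bdev for the first passthru
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001   # pt1 with the fixed UUID used by the test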
00:22:23.920 [2024-07-15 21:35:57.161154] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133990 ] 00:22:24.179 [2024-07-15 21:35:57.318535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.179 [2024-07-15 21:35:57.497164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.439 [2024-07-15 21:35:57.687922] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:24.698 21:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:24.957 malloc1 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:24.957 [2024-07-15 21:35:58.307934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:24.957 [2024-07-15 21:35:58.308076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.957 [2024-07-15 21:35:58.308138] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:22:24.957 [2024-07-15 21:35:58.308174] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.957 [2024-07-15 21:35:58.310084] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.957 [2024-07-15 21:35:58.310159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:24.957 pt1 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:24.957 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:25.216 malloc2 00:22:25.216 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:25.475 [2024-07-15 21:35:58.723686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:25.475 [2024-07-15 21:35:58.723829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.475 [2024-07-15 21:35:58.723887] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:25.475 [2024-07-15 21:35:58.723920] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.475 [2024-07-15 21:35:58.725580] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.475 [2024-07-15 21:35:58.725650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:25.475 pt2 00:22:25.475 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:25.475 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:25.475 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:25.475 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:25.475 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:25.475 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.475 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.475 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.476 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:25.734 malloc3 00:22:25.734 21:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:25.994 [2024-07-15 21:35:59.107005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:25.994 [2024-07-15 21:35:59.107175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.994 [2024-07-15 21:35:59.107218] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:25.994 [2024-07-15 21:35:59.107254] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.994 [2024-07-15 21:35:59.109186] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.994 [2024-07-15 21:35:59.109261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:25.994 pt3 00:22:25.994 
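At this point the trace has built the three base devices the superblock test will consume. A minimal sketch of that sequence, using the same rpc.py calls visible above (a 32 MB malloc bdev with a 512-byte block size, i.e. the 65536 blocks reported in the later JSON dumps, wrapped by a passthru bdev pinned to a fixed UUID):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

for i in 1 2 3; do
    # 32 MB malloc bdev, 512-byte blocks, as in the bdev_malloc_create calls above
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
    # passthru bdev pt$i on top of malloc$i, with the well-known UUID the trace assigns
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done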
21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:25.994 [2024-07-15 21:35:59.290735] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:25.994 [2024-07-15 21:35:59.292363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:25.994 [2024-07-15 21:35:59.292462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:25.994 [2024-07-15 21:35:59.292661] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:25.994 [2024-07-15 21:35:59.292697] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:25.994 [2024-07-15 21:35:59.292847] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:25.994 [2024-07-15 21:35:59.293173] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:25.994 [2024-07-15 21:35:59.293213] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:25.994 [2024-07-15 21:35:59.293398] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.994 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.272 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.272 "name": "raid_bdev1", 00:22:26.272 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:26.272 "strip_size_kb": 0, 00:22:26.272 "state": "online", 00:22:26.272 "raid_level": "raid1", 00:22:26.272 "superblock": true, 00:22:26.272 "num_base_bdevs": 3, 00:22:26.272 "num_base_bdevs_discovered": 3, 00:22:26.272 "num_base_bdevs_operational": 3, 00:22:26.272 "base_bdevs_list": [ 00:22:26.272 { 00:22:26.272 "name": "pt1", 00:22:26.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:26.272 
"is_configured": true, 00:22:26.272 "data_offset": 2048, 00:22:26.272 "data_size": 63488 00:22:26.272 }, 00:22:26.272 { 00:22:26.272 "name": "pt2", 00:22:26.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.272 "is_configured": true, 00:22:26.272 "data_offset": 2048, 00:22:26.272 "data_size": 63488 00:22:26.272 }, 00:22:26.272 { 00:22:26.272 "name": "pt3", 00:22:26.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:26.272 "is_configured": true, 00:22:26.272 "data_offset": 2048, 00:22:26.272 "data_size": 63488 00:22:26.272 } 00:22:26.272 ] 00:22:26.272 }' 00:22:26.272 21:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.272 21:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.841 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:26.841 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:26.841 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:26.841 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:26.841 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:26.841 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:26.841 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:26.841 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:27.100 [2024-07-15 21:36:00.277192] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:27.100 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:27.100 "name": "raid_bdev1", 00:22:27.100 "aliases": [ 00:22:27.100 "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca" 00:22:27.100 ], 00:22:27.100 "product_name": "Raid Volume", 00:22:27.100 "block_size": 512, 00:22:27.100 "num_blocks": 63488, 00:22:27.100 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:27.101 "assigned_rate_limits": { 00:22:27.101 "rw_ios_per_sec": 0, 00:22:27.101 "rw_mbytes_per_sec": 0, 00:22:27.101 "r_mbytes_per_sec": 0, 00:22:27.101 "w_mbytes_per_sec": 0 00:22:27.101 }, 00:22:27.101 "claimed": false, 00:22:27.101 "zoned": false, 00:22:27.101 "supported_io_types": { 00:22:27.101 "read": true, 00:22:27.101 "write": true, 00:22:27.101 "unmap": false, 00:22:27.101 "flush": false, 00:22:27.101 "reset": true, 00:22:27.101 "nvme_admin": false, 00:22:27.101 "nvme_io": false, 00:22:27.101 "nvme_io_md": false, 00:22:27.101 "write_zeroes": true, 00:22:27.101 "zcopy": false, 00:22:27.101 "get_zone_info": false, 00:22:27.101 "zone_management": false, 00:22:27.101 "zone_append": false, 00:22:27.101 "compare": false, 00:22:27.101 "compare_and_write": false, 00:22:27.101 "abort": false, 00:22:27.101 "seek_hole": false, 00:22:27.101 "seek_data": false, 00:22:27.101 "copy": false, 00:22:27.101 "nvme_iov_md": false 00:22:27.101 }, 00:22:27.101 "memory_domains": [ 00:22:27.101 { 00:22:27.101 "dma_device_id": "system", 00:22:27.101 "dma_device_type": 1 00:22:27.101 }, 00:22:27.101 { 00:22:27.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.101 "dma_device_type": 2 00:22:27.101 }, 00:22:27.101 { 00:22:27.101 "dma_device_id": "system", 00:22:27.101 "dma_device_type": 1 00:22:27.101 }, 00:22:27.101 { 
00:22:27.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.101 "dma_device_type": 2 00:22:27.101 }, 00:22:27.101 { 00:22:27.101 "dma_device_id": "system", 00:22:27.101 "dma_device_type": 1 00:22:27.101 }, 00:22:27.101 { 00:22:27.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.101 "dma_device_type": 2 00:22:27.101 } 00:22:27.101 ], 00:22:27.101 "driver_specific": { 00:22:27.101 "raid": { 00:22:27.101 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:27.101 "strip_size_kb": 0, 00:22:27.101 "state": "online", 00:22:27.101 "raid_level": "raid1", 00:22:27.101 "superblock": true, 00:22:27.101 "num_base_bdevs": 3, 00:22:27.101 "num_base_bdevs_discovered": 3, 00:22:27.101 "num_base_bdevs_operational": 3, 00:22:27.101 "base_bdevs_list": [ 00:22:27.101 { 00:22:27.101 "name": "pt1", 00:22:27.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.101 "is_configured": true, 00:22:27.101 "data_offset": 2048, 00:22:27.101 "data_size": 63488 00:22:27.101 }, 00:22:27.101 { 00:22:27.101 "name": "pt2", 00:22:27.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.101 "is_configured": true, 00:22:27.101 "data_offset": 2048, 00:22:27.101 "data_size": 63488 00:22:27.101 }, 00:22:27.101 { 00:22:27.101 "name": "pt3", 00:22:27.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:27.101 "is_configured": true, 00:22:27.101 "data_offset": 2048, 00:22:27.101 "data_size": 63488 00:22:27.101 } 00:22:27.101 ] 00:22:27.101 } 00:22:27.101 } 00:22:27.101 }' 00:22:27.101 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:27.101 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:27.101 pt2 00:22:27.101 pt3' 00:22:27.101 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.101 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:27.101 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.361 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.361 "name": "pt1", 00:22:27.361 "aliases": [ 00:22:27.361 "00000000-0000-0000-0000-000000000001" 00:22:27.361 ], 00:22:27.361 "product_name": "passthru", 00:22:27.361 "block_size": 512, 00:22:27.361 "num_blocks": 65536, 00:22:27.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.361 "assigned_rate_limits": { 00:22:27.361 "rw_ios_per_sec": 0, 00:22:27.361 "rw_mbytes_per_sec": 0, 00:22:27.361 "r_mbytes_per_sec": 0, 00:22:27.361 "w_mbytes_per_sec": 0 00:22:27.361 }, 00:22:27.361 "claimed": true, 00:22:27.361 "claim_type": "exclusive_write", 00:22:27.361 "zoned": false, 00:22:27.361 "supported_io_types": { 00:22:27.361 "read": true, 00:22:27.361 "write": true, 00:22:27.361 "unmap": true, 00:22:27.361 "flush": true, 00:22:27.361 "reset": true, 00:22:27.361 "nvme_admin": false, 00:22:27.361 "nvme_io": false, 00:22:27.361 "nvme_io_md": false, 00:22:27.361 "write_zeroes": true, 00:22:27.361 "zcopy": true, 00:22:27.361 "get_zone_info": false, 00:22:27.361 "zone_management": false, 00:22:27.361 "zone_append": false, 00:22:27.361 "compare": false, 00:22:27.361 "compare_and_write": false, 00:22:27.361 "abort": true, 00:22:27.361 "seek_hole": false, 00:22:27.361 "seek_data": false, 00:22:27.361 "copy": true, 00:22:27.361 "nvme_iov_md": false 00:22:27.361 }, 
00:22:27.361 "memory_domains": [ 00:22:27.361 { 00:22:27.361 "dma_device_id": "system", 00:22:27.361 "dma_device_type": 1 00:22:27.361 }, 00:22:27.361 { 00:22:27.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.361 "dma_device_type": 2 00:22:27.361 } 00:22:27.361 ], 00:22:27.361 "driver_specific": { 00:22:27.361 "passthru": { 00:22:27.361 "name": "pt1", 00:22:27.361 "base_bdev_name": "malloc1" 00:22:27.361 } 00:22:27.361 } 00:22:27.361 }' 00:22:27.361 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.361 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.361 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:27.361 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.361 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:27.621 21:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.880 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.880 "name": "pt2", 00:22:27.880 "aliases": [ 00:22:27.880 "00000000-0000-0000-0000-000000000002" 00:22:27.880 ], 00:22:27.880 "product_name": "passthru", 00:22:27.880 "block_size": 512, 00:22:27.880 "num_blocks": 65536, 00:22:27.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.880 "assigned_rate_limits": { 00:22:27.880 "rw_ios_per_sec": 0, 00:22:27.880 "rw_mbytes_per_sec": 0, 00:22:27.880 "r_mbytes_per_sec": 0, 00:22:27.880 "w_mbytes_per_sec": 0 00:22:27.880 }, 00:22:27.880 "claimed": true, 00:22:27.880 "claim_type": "exclusive_write", 00:22:27.880 "zoned": false, 00:22:27.880 "supported_io_types": { 00:22:27.880 "read": true, 00:22:27.880 "write": true, 00:22:27.880 "unmap": true, 00:22:27.880 "flush": true, 00:22:27.880 "reset": true, 00:22:27.880 "nvme_admin": false, 00:22:27.880 "nvme_io": false, 00:22:27.880 "nvme_io_md": false, 00:22:27.880 "write_zeroes": true, 00:22:27.880 "zcopy": true, 00:22:27.880 "get_zone_info": false, 00:22:27.880 "zone_management": false, 00:22:27.880 "zone_append": false, 00:22:27.880 "compare": false, 00:22:27.880 "compare_and_write": false, 00:22:27.880 "abort": true, 00:22:27.880 "seek_hole": false, 00:22:27.880 "seek_data": false, 00:22:27.880 "copy": true, 00:22:27.880 "nvme_iov_md": false 00:22:27.880 }, 00:22:27.880 "memory_domains": [ 00:22:27.880 { 00:22:27.880 "dma_device_id": "system", 00:22:27.880 "dma_device_type": 1 00:22:27.880 }, 00:22:27.880 { 
00:22:27.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.880 "dma_device_type": 2 00:22:27.880 } 00:22:27.880 ], 00:22:27.880 "driver_specific": { 00:22:27.880 "passthru": { 00:22:27.880 "name": "pt2", 00:22:27.880 "base_bdev_name": "malloc2" 00:22:27.880 } 00:22:27.880 } 00:22:27.880 }' 00:22:27.880 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.880 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.880 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:27.880 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.138 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.138 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.138 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.138 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.138 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.138 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.138 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.396 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.396 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:28.396 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:28.396 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.396 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.396 "name": "pt3", 00:22:28.396 "aliases": [ 00:22:28.397 "00000000-0000-0000-0000-000000000003" 00:22:28.397 ], 00:22:28.397 "product_name": "passthru", 00:22:28.397 "block_size": 512, 00:22:28.397 "num_blocks": 65536, 00:22:28.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:28.397 "assigned_rate_limits": { 00:22:28.397 "rw_ios_per_sec": 0, 00:22:28.397 "rw_mbytes_per_sec": 0, 00:22:28.397 "r_mbytes_per_sec": 0, 00:22:28.397 "w_mbytes_per_sec": 0 00:22:28.397 }, 00:22:28.397 "claimed": true, 00:22:28.397 "claim_type": "exclusive_write", 00:22:28.397 "zoned": false, 00:22:28.397 "supported_io_types": { 00:22:28.397 "read": true, 00:22:28.397 "write": true, 00:22:28.397 "unmap": true, 00:22:28.397 "flush": true, 00:22:28.397 "reset": true, 00:22:28.397 "nvme_admin": false, 00:22:28.397 "nvme_io": false, 00:22:28.397 "nvme_io_md": false, 00:22:28.397 "write_zeroes": true, 00:22:28.397 "zcopy": true, 00:22:28.397 "get_zone_info": false, 00:22:28.397 "zone_management": false, 00:22:28.397 "zone_append": false, 00:22:28.397 "compare": false, 00:22:28.397 "compare_and_write": false, 00:22:28.397 "abort": true, 00:22:28.397 "seek_hole": false, 00:22:28.397 "seek_data": false, 00:22:28.397 "copy": true, 00:22:28.397 "nvme_iov_md": false 00:22:28.397 }, 00:22:28.397 "memory_domains": [ 00:22:28.397 { 00:22:28.397 "dma_device_id": "system", 00:22:28.397 "dma_device_type": 1 00:22:28.397 }, 00:22:28.397 { 00:22:28.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.397 "dma_device_type": 2 00:22:28.397 } 00:22:28.397 ], 00:22:28.397 "driver_specific": { 
00:22:28.397 "passthru": { 00:22:28.397 "name": "pt3", 00:22:28.397 "base_bdev_name": "malloc3" 00:22:28.397 } 00:22:28.397 } 00:22:28.397 }' 00:22:28.397 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.656 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.656 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:28.656 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.656 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.656 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.656 21:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.656 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.915 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.915 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.915 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.915 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.915 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:28.916 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:29.175 [2024-07-15 21:36:02.337558] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.175 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca 00:22:29.175 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca ']' 00:22:29.175 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:29.175 [2024-07-15 21:36:02.521034] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.175 [2024-07-15 21:36:02.521128] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.175 [2024-07-15 21:36:02.521237] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.175 [2024-07-15 21:36:02.521328] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.175 [2024-07-15 21:36:02.521347] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:22:29.175 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.175 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:29.433 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:29.433 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:29.433 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:29.433 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:29.691 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:29.691 21:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:29.951 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:29.951 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:29.951 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:29.951 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:30.211 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:30.471 [2024-07-15 21:36:03.604496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:30.471 [2024-07-15 21:36:03.606176] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:30.471 [2024-07-15 21:36:03.606263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:30.471 [2024-07-15 21:36:03.606342] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:30.471 [2024-07-15 21:36:03.606466] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:30.471 [2024-07-15 21:36:03.606519] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:30.471 [2024-07-15 21:36:03.606565] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:30.471 [2024-07-15 21:36:03.606603] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:22:30.471 request: 00:22:30.471 { 00:22:30.471 "name": "raid_bdev1", 00:22:30.471 "raid_level": "raid1", 00:22:30.471 "base_bdevs": [ 00:22:30.471 "malloc1", 00:22:30.471 "malloc2", 00:22:30.471 "malloc3" 00:22:30.471 ], 00:22:30.471 "superblock": false, 00:22:30.471 "method": "bdev_raid_create", 00:22:30.471 "req_id": 1 00:22:30.471 } 00:22:30.471 Got JSON-RPC error response 00:22:30.471 response: 00:22:30.471 { 00:22:30.471 "code": -17, 00:22:30.471 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:30.471 } 00:22:30.471 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:30.471 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:30.471 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:30.471 21:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:30.471 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.472 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:30.472 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:30.472 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:30.472 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:30.732 [2024-07-15 21:36:03.963765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:30.732 [2024-07-15 21:36:03.963875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.732 [2024-07-15 21:36:03.963933] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:30.732 [2024-07-15 21:36:03.963969] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.732 [2024-07-15 21:36:03.965675] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.732 [2024-07-15 21:36:03.965746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:30.732 [2024-07-15 21:36:03.965866] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:30.732 [2024-07-15 21:36:03.965960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:30.732 pt1 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.732 21:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.991 21:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.991 "name": "raid_bdev1", 00:22:30.991 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:30.991 "strip_size_kb": 0, 00:22:30.991 "state": "configuring", 00:22:30.991 "raid_level": "raid1", 00:22:30.991 "superblock": true, 00:22:30.991 "num_base_bdevs": 3, 00:22:30.991 "num_base_bdevs_discovered": 1, 00:22:30.991 "num_base_bdevs_operational": 3, 00:22:30.991 "base_bdevs_list": [ 00:22:30.991 { 00:22:30.991 "name": "pt1", 00:22:30.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:30.991 "is_configured": true, 00:22:30.991 "data_offset": 2048, 00:22:30.991 "data_size": 63488 00:22:30.991 }, 00:22:30.991 { 00:22:30.991 "name": null, 00:22:30.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:30.991 "is_configured": false, 00:22:30.991 "data_offset": 2048, 00:22:30.991 "data_size": 63488 00:22:30.991 }, 00:22:30.991 { 00:22:30.991 "name": null, 00:22:30.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:30.991 "is_configured": false, 00:22:30.991 "data_offset": 2048, 00:22:30.991 "data_size": 63488 00:22:30.991 } 00:22:30.991 ] 00:22:30.991 }' 00:22:30.991 21:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.991 21:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.560 21:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:22:31.560 21:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:31.819 [2024-07-15 21:36:04.958041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:31.819 [2024-07-15 21:36:04.958186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.819 [2024-07-15 21:36:04.958231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:31.819 [2024-07-15 21:36:04.958268] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.819 [2024-07-15 21:36:04.958709] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.819 [2024-07-15 21:36:04.958775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:31.819 [2024-07-15 
21:36:04.958911] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:31.819 [2024-07-15 21:36:04.958965] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:31.819 pt2 00:22:31.819 21:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:31.819 [2024-07-15 21:36:05.141759] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.819 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.078 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.078 "name": "raid_bdev1", 00:22:32.078 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:32.078 "strip_size_kb": 0, 00:22:32.078 "state": "configuring", 00:22:32.078 "raid_level": "raid1", 00:22:32.078 "superblock": true, 00:22:32.078 "num_base_bdevs": 3, 00:22:32.078 "num_base_bdevs_discovered": 1, 00:22:32.078 "num_base_bdevs_operational": 3, 00:22:32.078 "base_bdevs_list": [ 00:22:32.078 { 00:22:32.078 "name": "pt1", 00:22:32.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:32.078 "is_configured": true, 00:22:32.078 "data_offset": 2048, 00:22:32.078 "data_size": 63488 00:22:32.078 }, 00:22:32.078 { 00:22:32.078 "name": null, 00:22:32.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:32.078 "is_configured": false, 00:22:32.078 "data_offset": 2048, 00:22:32.078 "data_size": 63488 00:22:32.078 }, 00:22:32.078 { 00:22:32.078 "name": null, 00:22:32.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:32.078 "is_configured": false, 00:22:32.078 "data_offset": 2048, 00:22:32.078 "data_size": 63488 00:22:32.078 } 00:22:32.078 ] 00:22:32.078 }' 00:22:32.078 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.078 21:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.646 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:32.646 21:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:32.646 21:36:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:32.906 [2024-07-15 21:36:06.152028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:32.906 [2024-07-15 21:36:06.152210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.906 [2024-07-15 21:36:06.152255] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:32.906 [2024-07-15 21:36:06.152290] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.906 [2024-07-15 21:36:06.152898] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.906 [2024-07-15 21:36:06.152963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:32.906 [2024-07-15 21:36:06.153114] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:32.906 [2024-07-15 21:36:06.153165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:32.906 pt2 00:22:32.906 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:32.906 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:32.906 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:33.165 [2024-07-15 21:36:06.311718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:33.165 [2024-07-15 21:36:06.311861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.165 [2024-07-15 21:36:06.311899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:33.165 [2024-07-15 21:36:06.311934] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.165 [2024-07-15 21:36:06.312393] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.165 [2024-07-15 21:36:06.312449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:33.165 [2024-07-15 21:36:06.312569] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:33.165 [2024-07-15 21:36:06.312613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:33.166 [2024-07-15 21:36:06.312762] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:22:33.166 [2024-07-15 21:36:06.312794] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:33.166 [2024-07-15 21:36:06.312911] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:33.166 [2024-07-15 21:36:06.313205] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:22:33.166 [2024-07-15 21:36:06.313244] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:22:33.166 [2024-07-15 21:36:06.313432] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.166 pt3 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.166 "name": "raid_bdev1", 00:22:33.166 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:33.166 "strip_size_kb": 0, 00:22:33.166 "state": "online", 00:22:33.166 "raid_level": "raid1", 00:22:33.166 "superblock": true, 00:22:33.166 "num_base_bdevs": 3, 00:22:33.166 "num_base_bdevs_discovered": 3, 00:22:33.166 "num_base_bdevs_operational": 3, 00:22:33.166 "base_bdevs_list": [ 00:22:33.166 { 00:22:33.166 "name": "pt1", 00:22:33.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:33.166 "is_configured": true, 00:22:33.166 "data_offset": 2048, 00:22:33.166 "data_size": 63488 00:22:33.166 }, 00:22:33.166 { 00:22:33.166 "name": "pt2", 00:22:33.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:33.166 "is_configured": true, 00:22:33.166 "data_offset": 2048, 00:22:33.166 "data_size": 63488 00:22:33.166 }, 00:22:33.166 { 00:22:33.166 "name": "pt3", 00:22:33.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:33.166 "is_configured": true, 00:22:33.166 "data_offset": 2048, 00:22:33.166 "data_size": 63488 00:22:33.166 } 00:22:33.166 ] 00:22:33.166 }' 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.166 21:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.734 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:33.734 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:33.734 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:33.734 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:33.734 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:33.734 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:33.734 21:36:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:33.734 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:33.993 [2024-07-15 21:36:07.262340] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:33.993 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:33.993 "name": "raid_bdev1", 00:22:33.993 "aliases": [ 00:22:33.993 "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca" 00:22:33.993 ], 00:22:33.993 "product_name": "Raid Volume", 00:22:33.993 "block_size": 512, 00:22:33.993 "num_blocks": 63488, 00:22:33.993 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:33.993 "assigned_rate_limits": { 00:22:33.993 "rw_ios_per_sec": 0, 00:22:33.993 "rw_mbytes_per_sec": 0, 00:22:33.993 "r_mbytes_per_sec": 0, 00:22:33.993 "w_mbytes_per_sec": 0 00:22:33.993 }, 00:22:33.993 "claimed": false, 00:22:33.993 "zoned": false, 00:22:33.993 "supported_io_types": { 00:22:33.993 "read": true, 00:22:33.993 "write": true, 00:22:33.993 "unmap": false, 00:22:33.993 "flush": false, 00:22:33.993 "reset": true, 00:22:33.993 "nvme_admin": false, 00:22:33.993 "nvme_io": false, 00:22:33.993 "nvme_io_md": false, 00:22:33.993 "write_zeroes": true, 00:22:33.993 "zcopy": false, 00:22:33.993 "get_zone_info": false, 00:22:33.993 "zone_management": false, 00:22:33.993 "zone_append": false, 00:22:33.993 "compare": false, 00:22:33.993 "compare_and_write": false, 00:22:33.993 "abort": false, 00:22:33.993 "seek_hole": false, 00:22:33.993 "seek_data": false, 00:22:33.993 "copy": false, 00:22:33.993 "nvme_iov_md": false 00:22:33.993 }, 00:22:33.993 "memory_domains": [ 00:22:33.993 { 00:22:33.993 "dma_device_id": "system", 00:22:33.993 "dma_device_type": 1 00:22:33.993 }, 00:22:33.993 { 00:22:33.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.993 "dma_device_type": 2 00:22:33.993 }, 00:22:33.993 { 00:22:33.993 "dma_device_id": "system", 00:22:33.993 "dma_device_type": 1 00:22:33.993 }, 00:22:33.993 { 00:22:33.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.993 "dma_device_type": 2 00:22:33.993 }, 00:22:33.993 { 00:22:33.994 "dma_device_id": "system", 00:22:33.994 "dma_device_type": 1 00:22:33.994 }, 00:22:33.994 { 00:22:33.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.994 "dma_device_type": 2 00:22:33.994 } 00:22:33.994 ], 00:22:33.994 "driver_specific": { 00:22:33.994 "raid": { 00:22:33.994 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:33.994 "strip_size_kb": 0, 00:22:33.994 "state": "online", 00:22:33.994 "raid_level": "raid1", 00:22:33.994 "superblock": true, 00:22:33.994 "num_base_bdevs": 3, 00:22:33.994 "num_base_bdevs_discovered": 3, 00:22:33.994 "num_base_bdevs_operational": 3, 00:22:33.994 "base_bdevs_list": [ 00:22:33.994 { 00:22:33.994 "name": "pt1", 00:22:33.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:33.994 "is_configured": true, 00:22:33.994 "data_offset": 2048, 00:22:33.994 "data_size": 63488 00:22:33.994 }, 00:22:33.994 { 00:22:33.994 "name": "pt2", 00:22:33.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:33.994 "is_configured": true, 00:22:33.994 "data_offset": 2048, 00:22:33.994 "data_size": 63488 00:22:33.994 }, 00:22:33.994 { 00:22:33.994 "name": "pt3", 00:22:33.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:33.994 "is_configured": true, 00:22:33.994 "data_offset": 2048, 00:22:33.994 "data_size": 63488 00:22:33.994 } 00:22:33.994 ] 00:22:33.994 } 00:22:33.994 } 00:22:33.994 }' 
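The jq checks that follow compare a few fields of this JSON against each base passthru bdev. A condensed sketch of the same idea, offered as an approximation of what the verify_raid_bdev_properties helper does in the trace (fetch each bdev once, compare block_size, and confirm the metadata fields are null, matching the [[ 512 == 512 ]] and [[ null == null ]] lines):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

raid_bs=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq '.[].block_size')
for pt in pt1 pt2 pt3; do
    base=$("$rpc" -s "$sock" bdev_get_bdevs -b "$pt" | jq '.[]')
    [[ $(jq .block_size <<<"$base") == "$raid_bs" ]]   # both sides report 512 above
    [[ $(jq .md_size <<<"$base") == null ]]            # no separate metadata buffer
    [[ $(jq .md_interleave <<<"$base") == null ]]
    [[ $(jq .dif_type <<<"$base") == null ]]
done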
00:22:33.994 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:33.994 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:33.994 pt2 00:22:33.994 pt3' 00:22:33.994 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:33.994 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:33.994 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.253 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.253 "name": "pt1", 00:22:34.253 "aliases": [ 00:22:34.253 "00000000-0000-0000-0000-000000000001" 00:22:34.253 ], 00:22:34.253 "product_name": "passthru", 00:22:34.253 "block_size": 512, 00:22:34.253 "num_blocks": 65536, 00:22:34.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:34.253 "assigned_rate_limits": { 00:22:34.253 "rw_ios_per_sec": 0, 00:22:34.253 "rw_mbytes_per_sec": 0, 00:22:34.253 "r_mbytes_per_sec": 0, 00:22:34.253 "w_mbytes_per_sec": 0 00:22:34.253 }, 00:22:34.253 "claimed": true, 00:22:34.253 "claim_type": "exclusive_write", 00:22:34.253 "zoned": false, 00:22:34.253 "supported_io_types": { 00:22:34.253 "read": true, 00:22:34.253 "write": true, 00:22:34.253 "unmap": true, 00:22:34.253 "flush": true, 00:22:34.253 "reset": true, 00:22:34.253 "nvme_admin": false, 00:22:34.253 "nvme_io": false, 00:22:34.253 "nvme_io_md": false, 00:22:34.253 "write_zeroes": true, 00:22:34.253 "zcopy": true, 00:22:34.253 "get_zone_info": false, 00:22:34.253 "zone_management": false, 00:22:34.253 "zone_append": false, 00:22:34.253 "compare": false, 00:22:34.253 "compare_and_write": false, 00:22:34.253 "abort": true, 00:22:34.253 "seek_hole": false, 00:22:34.253 "seek_data": false, 00:22:34.253 "copy": true, 00:22:34.253 "nvme_iov_md": false 00:22:34.253 }, 00:22:34.253 "memory_domains": [ 00:22:34.253 { 00:22:34.253 "dma_device_id": "system", 00:22:34.253 "dma_device_type": 1 00:22:34.253 }, 00:22:34.253 { 00:22:34.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.253 "dma_device_type": 2 00:22:34.253 } 00:22:34.253 ], 00:22:34.253 "driver_specific": { 00:22:34.253 "passthru": { 00:22:34.253 "name": "pt1", 00:22:34.253 "base_bdev_name": "malloc1" 00:22:34.253 } 00:22:34.253 } 00:22:34.253 }' 00:22:34.253 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.253 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.253 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.253 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.512 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.512 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.512 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.512 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.512 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.512 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.512 21:36:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.771 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.771 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.771 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:34.771 21:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.771 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.771 "name": "pt2", 00:22:34.771 "aliases": [ 00:22:34.771 "00000000-0000-0000-0000-000000000002" 00:22:34.771 ], 00:22:34.771 "product_name": "passthru", 00:22:34.771 "block_size": 512, 00:22:34.771 "num_blocks": 65536, 00:22:34.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:34.771 "assigned_rate_limits": { 00:22:34.771 "rw_ios_per_sec": 0, 00:22:34.771 "rw_mbytes_per_sec": 0, 00:22:34.771 "r_mbytes_per_sec": 0, 00:22:34.771 "w_mbytes_per_sec": 0 00:22:34.771 }, 00:22:34.771 "claimed": true, 00:22:34.771 "claim_type": "exclusive_write", 00:22:34.771 "zoned": false, 00:22:34.771 "supported_io_types": { 00:22:34.771 "read": true, 00:22:34.771 "write": true, 00:22:34.771 "unmap": true, 00:22:34.771 "flush": true, 00:22:34.771 "reset": true, 00:22:34.771 "nvme_admin": false, 00:22:34.771 "nvme_io": false, 00:22:34.771 "nvme_io_md": false, 00:22:34.771 "write_zeroes": true, 00:22:34.771 "zcopy": true, 00:22:34.771 "get_zone_info": false, 00:22:34.771 "zone_management": false, 00:22:34.771 "zone_append": false, 00:22:34.771 "compare": false, 00:22:34.771 "compare_and_write": false, 00:22:34.771 "abort": true, 00:22:34.771 "seek_hole": false, 00:22:34.771 "seek_data": false, 00:22:34.771 "copy": true, 00:22:34.771 "nvme_iov_md": false 00:22:34.771 }, 00:22:34.771 "memory_domains": [ 00:22:34.771 { 00:22:34.771 "dma_device_id": "system", 00:22:34.771 "dma_device_type": 1 00:22:34.771 }, 00:22:34.771 { 00:22:34.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.771 "dma_device_type": 2 00:22:34.771 } 00:22:34.771 ], 00:22:34.771 "driver_specific": { 00:22:34.771 "passthru": { 00:22:34.771 "name": "pt2", 00:22:34.771 "base_bdev_name": "malloc2" 00:22:34.771 } 00:22:34.771 } 00:22:34.771 }' 00:22:34.771 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.030 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.030 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.030 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.030 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.030 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.030 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.030 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.289 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.289 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.289 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.289 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:22:35.289 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.289 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:35.289 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.548 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.548 "name": "pt3", 00:22:35.548 "aliases": [ 00:22:35.548 "00000000-0000-0000-0000-000000000003" 00:22:35.548 ], 00:22:35.548 "product_name": "passthru", 00:22:35.548 "block_size": 512, 00:22:35.548 "num_blocks": 65536, 00:22:35.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:35.548 "assigned_rate_limits": { 00:22:35.548 "rw_ios_per_sec": 0, 00:22:35.548 "rw_mbytes_per_sec": 0, 00:22:35.548 "r_mbytes_per_sec": 0, 00:22:35.548 "w_mbytes_per_sec": 0 00:22:35.548 }, 00:22:35.548 "claimed": true, 00:22:35.548 "claim_type": "exclusive_write", 00:22:35.548 "zoned": false, 00:22:35.548 "supported_io_types": { 00:22:35.548 "read": true, 00:22:35.548 "write": true, 00:22:35.548 "unmap": true, 00:22:35.548 "flush": true, 00:22:35.548 "reset": true, 00:22:35.548 "nvme_admin": false, 00:22:35.548 "nvme_io": false, 00:22:35.548 "nvme_io_md": false, 00:22:35.548 "write_zeroes": true, 00:22:35.548 "zcopy": true, 00:22:35.548 "get_zone_info": false, 00:22:35.548 "zone_management": false, 00:22:35.548 "zone_append": false, 00:22:35.548 "compare": false, 00:22:35.548 "compare_and_write": false, 00:22:35.548 "abort": true, 00:22:35.548 "seek_hole": false, 00:22:35.548 "seek_data": false, 00:22:35.548 "copy": true, 00:22:35.548 "nvme_iov_md": false 00:22:35.548 }, 00:22:35.548 "memory_domains": [ 00:22:35.548 { 00:22:35.548 "dma_device_id": "system", 00:22:35.548 "dma_device_type": 1 00:22:35.548 }, 00:22:35.548 { 00:22:35.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.549 "dma_device_type": 2 00:22:35.549 } 00:22:35.549 ], 00:22:35.549 "driver_specific": { 00:22:35.549 "passthru": { 00:22:35.549 "name": "pt3", 00:22:35.549 "base_bdev_name": "malloc3" 00:22:35.549 } 00:22:35.549 } 00:22:35.549 }' 00:22:35.549 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.549 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.549 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.549 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.549 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.826 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.826 21:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.826 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.826 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.826 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.826 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.826 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.826 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:35.826 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:36.084 [2024-07-15 21:36:09.342788] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.085 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca '!=' 12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca ']' 00:22:36.085 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:22:36.085 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:36.085 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:36.085 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:36.343 [2024-07-15 21:36:09.522266] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.343 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.601 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.601 "name": "raid_bdev1", 00:22:36.601 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:36.601 "strip_size_kb": 0, 00:22:36.601 "state": "online", 00:22:36.601 "raid_level": "raid1", 00:22:36.601 "superblock": true, 00:22:36.601 "num_base_bdevs": 3, 00:22:36.601 "num_base_bdevs_discovered": 2, 00:22:36.601 "num_base_bdevs_operational": 2, 00:22:36.601 "base_bdevs_list": [ 00:22:36.601 { 00:22:36.601 "name": null, 00:22:36.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.601 "is_configured": false, 00:22:36.601 "data_offset": 2048, 00:22:36.601 "data_size": 63488 00:22:36.601 }, 00:22:36.601 { 00:22:36.601 "name": "pt2", 00:22:36.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:36.601 "is_configured": true, 00:22:36.601 "data_offset": 2048, 00:22:36.601 "data_size": 63488 00:22:36.601 }, 00:22:36.601 { 00:22:36.601 "name": "pt3", 00:22:36.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:36.601 "is_configured": true, 00:22:36.601 "data_offset": 2048, 00:22:36.602 
"data_size": 63488 00:22:36.602 } 00:22:36.602 ] 00:22:36.602 }' 00:22:36.602 21:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.602 21:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.170 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:37.170 [2024-07-15 21:36:10.501015] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.170 [2024-07-15 21:36:10.501098] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.170 [2024-07-15 21:36:10.501212] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.170 [2024-07-15 21:36:10.501299] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.170 [2024-07-15 21:36:10.501318] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:22:37.170 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.170 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:22:37.430 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:22:37.430 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:22:37.430 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:22:37.430 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:37.430 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:37.689 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:37.689 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:37.689 21:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:37.689 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:37.689 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:37.689 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:22:37.689 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:37.689 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:37.949 [2024-07-15 21:36:11.215696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:37.949 [2024-07-15 21:36:11.215837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.949 [2024-07-15 21:36:11.215881] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:37.949 [2024-07-15 21:36:11.215912] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.949 [2024-07-15 21:36:11.217669] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:22:37.949 [2024-07-15 21:36:11.217743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:37.949 [2024-07-15 21:36:11.217858] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:37.949 [2024-07-15 21:36:11.217935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:37.949 pt2 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.949 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.209 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.209 "name": "raid_bdev1", 00:22:38.209 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:38.209 "strip_size_kb": 0, 00:22:38.209 "state": "configuring", 00:22:38.209 "raid_level": "raid1", 00:22:38.209 "superblock": true, 00:22:38.209 "num_base_bdevs": 3, 00:22:38.209 "num_base_bdevs_discovered": 1, 00:22:38.209 "num_base_bdevs_operational": 2, 00:22:38.209 "base_bdevs_list": [ 00:22:38.209 { 00:22:38.209 "name": null, 00:22:38.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.209 "is_configured": false, 00:22:38.209 "data_offset": 2048, 00:22:38.209 "data_size": 63488 00:22:38.209 }, 00:22:38.209 { 00:22:38.209 "name": "pt2", 00:22:38.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:38.209 "is_configured": true, 00:22:38.209 "data_offset": 2048, 00:22:38.209 "data_size": 63488 00:22:38.209 }, 00:22:38.209 { 00:22:38.209 "name": null, 00:22:38.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:38.209 "is_configured": false, 00:22:38.209 "data_offset": 2048, 00:22:38.209 "data_size": 63488 00:22:38.209 } 00:22:38.209 ] 00:22:38.209 }' 00:22:38.209 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.209 21:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.779 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:22:38.779 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:38.779 21:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:22:38.779 21:36:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:38.779 [2024-07-15 21:36:12.142128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:38.779 [2024-07-15 21:36:12.142270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.779 [2024-07-15 21:36:12.142316] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:38.779 [2024-07-15 21:36:12.142361] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.779 [2024-07-15 21:36:12.142825] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.779 [2024-07-15 21:36:12.142885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:38.779 [2024-07-15 21:36:12.143004] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:38.779 [2024-07-15 21:36:12.143049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:38.779 [2024-07-15 21:36:12.143178] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:22:38.779 [2024-07-15 21:36:12.143206] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:38.779 [2024-07-15 21:36:12.143330] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:38.779 [2024-07-15 21:36:12.143613] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:22:38.779 [2024-07-15 21:36:12.143652] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:22:38.779 [2024-07-15 21:36:12.143802] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.779 pt3 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.039 "name": "raid_bdev1", 00:22:39.039 "uuid": 
"12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:39.039 "strip_size_kb": 0, 00:22:39.039 "state": "online", 00:22:39.039 "raid_level": "raid1", 00:22:39.039 "superblock": true, 00:22:39.039 "num_base_bdevs": 3, 00:22:39.039 "num_base_bdevs_discovered": 2, 00:22:39.039 "num_base_bdevs_operational": 2, 00:22:39.039 "base_bdevs_list": [ 00:22:39.039 { 00:22:39.039 "name": null, 00:22:39.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.039 "is_configured": false, 00:22:39.039 "data_offset": 2048, 00:22:39.039 "data_size": 63488 00:22:39.039 }, 00:22:39.039 { 00:22:39.039 "name": "pt2", 00:22:39.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:39.039 "is_configured": true, 00:22:39.039 "data_offset": 2048, 00:22:39.039 "data_size": 63488 00:22:39.039 }, 00:22:39.039 { 00:22:39.039 "name": "pt3", 00:22:39.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:39.039 "is_configured": true, 00:22:39.039 "data_offset": 2048, 00:22:39.039 "data_size": 63488 00:22:39.039 } 00:22:39.039 ] 00:22:39.039 }' 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.039 21:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.608 21:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:39.868 [2024-07-15 21:36:13.108398] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:39.868 [2024-07-15 21:36:13.108485] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:39.868 [2024-07-15 21:36:13.108561] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.868 [2024-07-15 21:36:13.108623] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.868 [2024-07-15 21:36:13.108639] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:22:39.868 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.868 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:22:40.127 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:22:40.127 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:22:40.127 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:22:40.127 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:22:40.127 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:40.127 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:40.386 [2024-07-15 21:36:13.619535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:40.386 [2024-07-15 21:36:13.619670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.386 [2024-07-15 21:36:13.619736] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:40.386 [2024-07-15 21:36:13.619770] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.386 [2024-07-15 21:36:13.621697] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.386 [2024-07-15 21:36:13.621776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:40.386 [2024-07-15 21:36:13.621891] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:40.386 [2024-07-15 21:36:13.621958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:40.386 [2024-07-15 21:36:13.622136] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:40.386 [2024-07-15 21:36:13.622174] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:40.386 [2024-07-15 21:36:13.622212] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:22:40.386 [2024-07-15 21:36:13.622298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:40.386 pt1 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.386 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.646 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.646 "name": "raid_bdev1", 00:22:40.646 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:40.646 "strip_size_kb": 0, 00:22:40.646 "state": "configuring", 00:22:40.646 "raid_level": "raid1", 00:22:40.646 "superblock": true, 00:22:40.646 "num_base_bdevs": 3, 00:22:40.646 "num_base_bdevs_discovered": 1, 00:22:40.646 "num_base_bdevs_operational": 2, 00:22:40.646 "base_bdevs_list": [ 00:22:40.646 { 00:22:40.646 "name": null, 00:22:40.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.646 "is_configured": false, 00:22:40.646 "data_offset": 2048, 00:22:40.646 "data_size": 63488 00:22:40.646 }, 00:22:40.646 { 00:22:40.646 "name": "pt2", 00:22:40.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:40.646 "is_configured": true, 00:22:40.646 "data_offset": 2048, 
00:22:40.646 "data_size": 63488 00:22:40.646 }, 00:22:40.646 { 00:22:40.646 "name": null, 00:22:40.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:40.646 "is_configured": false, 00:22:40.646 "data_offset": 2048, 00:22:40.646 "data_size": 63488 00:22:40.646 } 00:22:40.646 ] 00:22:40.646 }' 00:22:40.646 21:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.646 21:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.224 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:22:41.224 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:41.484 [2024-07-15 21:36:14.773756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:41.484 [2024-07-15 21:36:14.773904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.484 [2024-07-15 21:36:14.773947] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:41.484 [2024-07-15 21:36:14.773982] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.484 [2024-07-15 21:36:14.774435] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.484 [2024-07-15 21:36:14.774498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:41.484 [2024-07-15 21:36:14.774610] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:41.484 [2024-07-15 21:36:14.774655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:41.484 [2024-07-15 21:36:14.774795] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:22:41.484 [2024-07-15 21:36:14.774826] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:41.484 [2024-07-15 21:36:14.774936] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:41.484 [2024-07-15 21:36:14.775216] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:22:41.484 [2024-07-15 21:36:14.775255] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:22:41.484 [2024-07-15 21:36:14.775397] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.484 pt3 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.484 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.744 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.744 "name": "raid_bdev1", 00:22:41.744 "uuid": "12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca", 00:22:41.744 "strip_size_kb": 0, 00:22:41.744 "state": "online", 00:22:41.744 "raid_level": "raid1", 00:22:41.744 "superblock": true, 00:22:41.744 "num_base_bdevs": 3, 00:22:41.744 "num_base_bdevs_discovered": 2, 00:22:41.744 "num_base_bdevs_operational": 2, 00:22:41.744 "base_bdevs_list": [ 00:22:41.744 { 00:22:41.744 "name": null, 00:22:41.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.744 "is_configured": false, 00:22:41.744 "data_offset": 2048, 00:22:41.744 "data_size": 63488 00:22:41.744 }, 00:22:41.744 { 00:22:41.744 "name": "pt2", 00:22:41.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.744 "is_configured": true, 00:22:41.744 "data_offset": 2048, 00:22:41.744 "data_size": 63488 00:22:41.744 }, 00:22:41.744 { 00:22:41.744 "name": "pt3", 00:22:41.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:41.744 "is_configured": true, 00:22:41.744 "data_offset": 2048, 00:22:41.744 "data_size": 63488 00:22:41.744 } 00:22:41.744 ] 00:22:41.744 }' 00:22:41.744 21:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.744 21:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.310 21:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:42.310 21:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:42.569 21:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:22:42.569 21:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:42.569 21:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:22:42.569 [2024-07-15 21:36:15.919971] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:42.569 21:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca '!=' 12834c46-3a98-4fc8-97fe-ecc3a9c5c7ca ']' 00:22:42.569 21:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 133990 00:22:42.569 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 133990 ']' 00:22:42.569 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 133990 00:22:42.569 21:36:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@953 -- # uname 00:22:42.827 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.827 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133990 00:22:42.827 killing process with pid 133990 00:22:42.827 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:42.827 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:42.827 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133990' 00:22:42.827 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 133990 00:22:42.827 21:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 133990 00:22:42.827 [2024-07-15 21:36:15.953521] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:42.827 [2024-07-15 21:36:15.953607] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.827 [2024-07-15 21:36:15.953697] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.827 [2024-07-15 21:36:15.953766] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:22:43.085 [2024-07-15 21:36:16.228658] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:44.464 ************************************ 00:22:44.464 END TEST raid_superblock_test 00:22:44.464 ************************************ 00:22:44.464 21:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:44.464 00:22:44.464 real 0m20.331s 00:22:44.464 user 0m37.515s 00:22:44.464 sys 0m2.520s 00:22:44.464 21:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.464 21:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.464 21:36:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:44.464 21:36:17 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:22:44.464 21:36:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:44.464 21:36:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.464 21:36:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:44.464 ************************************ 00:22:44.464 START TEST raid_read_error_test 00:22:44.464 ************************************ 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo 
BaseBdev1 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qNeiIK4mQK 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=134735 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 134735 /var/tmp/spdk-raid.sock 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 134735 ']' 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:44.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.464 21:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.464 [2024-07-15 21:36:17.576896] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:22:44.464 [2024-07-15 21:36:17.577094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134735 ] 00:22:44.464 [2024-07-15 21:36:17.732077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.725 [2024-07-15 21:36:17.899939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.725 [2024-07-15 21:36:18.076071] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.291 21:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.291 21:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:45.291 21:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:45.291 21:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:45.291 BaseBdev1_malloc 00:22:45.291 21:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:45.550 true 00:22:45.550 21:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:45.814 [2024-07-15 21:36:18.936060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:45.814 [2024-07-15 21:36:18.936241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.814 [2024-07-15 21:36:18.936302] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:45.814 [2024-07-15 21:36:18.936335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.814 [2024-07-15 21:36:18.938262] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.814 [2024-07-15 21:36:18.938345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:45.814 BaseBdev1 00:22:45.814 21:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:45.814 21:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:45.814 BaseBdev2_malloc 00:22:46.079 21:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:46.079 true 00:22:46.079 21:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:46.338 [2024-07-15 21:36:19.538911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:46.338 [2024-07-15 21:36:19.539073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.338 [2024-07-15 21:36:19.539121] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:46.338 [2024-07-15 21:36:19.539157] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.338 [2024-07-15 21:36:19.541049] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.338 [2024-07-15 21:36:19.541134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:46.338 BaseBdev2 00:22:46.338 21:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:46.338 21:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:46.596 BaseBdev3_malloc 00:22:46.596 21:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:46.596 true 00:22:46.596 21:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:46.854 [2024-07-15 21:36:20.115379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:46.854 [2024-07-15 21:36:20.115502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.854 [2024-07-15 21:36:20.115564] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:46.854 [2024-07-15 21:36:20.115600] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.854 [2024-07-15 21:36:20.117338] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.854 [2024-07-15 21:36:20.117409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:46.854 BaseBdev3 00:22:46.854 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:47.112 [2024-07-15 21:36:20.295115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.112 [2024-07-15 21:36:20.296730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:47.112 [2024-07-15 21:36:20.296835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:47.112 [2024-07-15 21:36:20.297086] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:47.112 [2024-07-15 21:36:20.297131] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:47.112 [2024-07-15 21:36:20.297312] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:47.112 [2024-07-15 21:36:20.297639] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:47.112 [2024-07-15 21:36:20.297677] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:47.112 [2024-07-15 21:36:20.297847] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.112 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:47.112 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:47.112 21:36:20 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:47.112 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:47.112 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.113 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:47.113 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.113 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.113 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.113 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.113 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.113 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.371 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.371 "name": "raid_bdev1", 00:22:47.371 "uuid": "52a99247-b242-43fb-a65b-3b346a3b940a", 00:22:47.371 "strip_size_kb": 0, 00:22:47.371 "state": "online", 00:22:47.371 "raid_level": "raid1", 00:22:47.371 "superblock": true, 00:22:47.371 "num_base_bdevs": 3, 00:22:47.371 "num_base_bdevs_discovered": 3, 00:22:47.371 "num_base_bdevs_operational": 3, 00:22:47.371 "base_bdevs_list": [ 00:22:47.371 { 00:22:47.371 "name": "BaseBdev1", 00:22:47.371 "uuid": "c6ff1ce6-2a96-5063-8d71-00775ac4ce14", 00:22:47.371 "is_configured": true, 00:22:47.371 "data_offset": 2048, 00:22:47.371 "data_size": 63488 00:22:47.371 }, 00:22:47.371 { 00:22:47.371 "name": "BaseBdev2", 00:22:47.371 "uuid": "3c40f8e5-f470-5530-b174-d37255fff030", 00:22:47.371 "is_configured": true, 00:22:47.371 "data_offset": 2048, 00:22:47.371 "data_size": 63488 00:22:47.371 }, 00:22:47.371 { 00:22:47.371 "name": "BaseBdev3", 00:22:47.371 "uuid": "79e5b41f-8e8a-5f1a-9cc1-22be81237bb4", 00:22:47.371 "is_configured": true, 00:22:47.371 "data_offset": 2048, 00:22:47.371 "data_size": 63488 00:22:47.371 } 00:22:47.371 ] 00:22:47.371 }' 00:22:47.371 21:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.371 21:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.938 21:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:47.938 21:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:47.938 [2024-07-15 21:36:21.166662] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:48.875 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # 
expected_num_base_bdevs=3 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.133 "name": "raid_bdev1", 00:22:49.133 "uuid": "52a99247-b242-43fb-a65b-3b346a3b940a", 00:22:49.133 "strip_size_kb": 0, 00:22:49.133 "state": "online", 00:22:49.133 "raid_level": "raid1", 00:22:49.133 "superblock": true, 00:22:49.133 "num_base_bdevs": 3, 00:22:49.133 "num_base_bdevs_discovered": 3, 00:22:49.133 "num_base_bdevs_operational": 3, 00:22:49.133 "base_bdevs_list": [ 00:22:49.133 { 00:22:49.133 "name": "BaseBdev1", 00:22:49.133 "uuid": "c6ff1ce6-2a96-5063-8d71-00775ac4ce14", 00:22:49.133 "is_configured": true, 00:22:49.133 "data_offset": 2048, 00:22:49.133 "data_size": 63488 00:22:49.133 }, 00:22:49.133 { 00:22:49.133 "name": "BaseBdev2", 00:22:49.133 "uuid": "3c40f8e5-f470-5530-b174-d37255fff030", 00:22:49.133 "is_configured": true, 00:22:49.133 "data_offset": 2048, 00:22:49.133 "data_size": 63488 00:22:49.133 }, 00:22:49.133 { 00:22:49.133 "name": "BaseBdev3", 00:22:49.133 "uuid": "79e5b41f-8e8a-5f1a-9cc1-22be81237bb4", 00:22:49.133 "is_configured": true, 00:22:49.133 "data_offset": 2048, 00:22:49.133 "data_size": 63488 00:22:49.133 } 00:22:49.133 ] 00:22:49.133 }' 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.133 21:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.700 21:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:49.959 [2024-07-15 21:36:23.225814] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:49.959 [2024-07-15 21:36:23.225927] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:49.959 [2024-07-15 21:36:23.228409] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.959 [2024-07-15 21:36:23.228495] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.959 [2024-07-15 21:36:23.228588] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.959 [2024-07-15 21:36:23.228617] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:49.959 0 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 134735 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 134735 ']' 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 134735 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134735 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134735' 00:22:49.959 killing process with pid 134735 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 134735 00:22:49.959 21:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 134735 00:22:49.959 [2024-07-15 21:36:23.267108] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:50.222 [2024-07-15 21:36:23.481063] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qNeiIK4mQK 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:51.598 ************************************ 00:22:51.598 END TEST raid_read_error_test 00:22:51.598 ************************************ 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:51.598 00:22:51.598 real 0m7.195s 00:22:51.598 user 0m10.611s 00:22:51.598 sys 0m0.851s 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.598 21:36:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.598 21:36:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:51.598 21:36:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:22:51.598 21:36:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:51.598 21:36:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.598 21:36:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:51.598 ************************************ 00:22:51.598 START TEST raid_write_error_test 
00:22:51.598 ************************************ 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Jl8YMPICVV 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=134945 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 134945 /var/tmp/spdk-raid.sock 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 134945 ']' 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:51.598 
21:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:51.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.598 21:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.598 [2024-07-15 21:36:24.845885] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:22:51.598 [2024-07-15 21:36:24.846100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134945 ] 00:22:51.857 [2024-07-15 21:36:25.008946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.857 [2024-07-15 21:36:25.198996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.120 [2024-07-15 21:36:25.387309] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:52.382 21:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.382 21:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:52.382 21:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:52.382 21:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:52.647 BaseBdev1_malloc 00:22:52.647 21:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:52.904 true 00:22:52.904 21:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:52.904 [2024-07-15 21:36:26.202454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:52.904 [2024-07-15 21:36:26.202583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.904 [2024-07-15 21:36:26.202631] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:52.904 [2024-07-15 21:36:26.202672] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.904 [2024-07-15 21:36:26.204728] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.904 [2024-07-15 21:36:26.204801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:52.904 BaseBdev1 00:22:52.904 21:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:52.904 21:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:53.160 BaseBdev2_malloc 00:22:53.160 21:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:53.418 true 00:22:53.418 21:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:53.677 [2024-07-15 21:36:26.796893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:53.677 [2024-07-15 21:36:26.797076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.677 [2024-07-15 21:36:26.797128] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:53.677 [2024-07-15 21:36:26.797165] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.677 [2024-07-15 21:36:26.799073] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.677 [2024-07-15 21:36:26.799168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:53.677 BaseBdev2 00:22:53.677 21:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:53.677 21:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:53.677 BaseBdev3_malloc 00:22:53.677 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:53.935 true 00:22:53.936 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:54.195 [2024-07-15 21:36:27.383559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:54.195 [2024-07-15 21:36:27.383751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.195 [2024-07-15 21:36:27.383799] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:54.195 [2024-07-15 21:36:27.383837] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.195 [2024-07-15 21:36:27.385761] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.195 [2024-07-15 21:36:27.385847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:54.195 BaseBdev3 00:22:54.195 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:54.454 [2024-07-15 21:36:27.567309] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:54.454 [2024-07-15 21:36:27.568995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:54.454 [2024-07-15 21:36:27.569103] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:54.454 [2024-07-15 21:36:27.569334] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:54.454 [2024-07-15 21:36:27.569369] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:54.454 [2024-07-15 21:36:27.569517] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:54.454 [2024-07-15 21:36:27.569858] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:54.454 [2024-07-15 21:36:27.569899] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:54.454 [2024-07-15 21:36:27.570067] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.454 "name": "raid_bdev1", 00:22:54.454 "uuid": "f23e740a-2320-4085-85e0-e01e41d4ad98", 00:22:54.454 "strip_size_kb": 0, 00:22:54.454 "state": "online", 00:22:54.454 "raid_level": "raid1", 00:22:54.454 "superblock": true, 00:22:54.454 "num_base_bdevs": 3, 00:22:54.454 "num_base_bdevs_discovered": 3, 00:22:54.454 "num_base_bdevs_operational": 3, 00:22:54.454 "base_bdevs_list": [ 00:22:54.454 { 00:22:54.454 "name": "BaseBdev1", 00:22:54.454 "uuid": "2afc12a1-5c42-52d8-a9cf-bdf382caefd0", 00:22:54.454 "is_configured": true, 00:22:54.454 "data_offset": 2048, 00:22:54.454 "data_size": 63488 00:22:54.454 }, 00:22:54.454 { 00:22:54.454 "name": "BaseBdev2", 00:22:54.454 "uuid": "b514aea9-e9b4-5593-a9e5-88ed0d859b15", 00:22:54.454 "is_configured": true, 00:22:54.454 "data_offset": 2048, 00:22:54.454 "data_size": 63488 00:22:54.454 }, 00:22:54.454 { 00:22:54.454 "name": "BaseBdev3", 00:22:54.454 "uuid": "2a3d05bb-b450-55fb-a55f-f2e57d0863c4", 00:22:54.454 "is_configured": true, 00:22:54.454 "data_offset": 2048, 00:22:54.454 "data_size": 63488 00:22:54.454 } 00:22:54.454 ] 00:22:54.454 }' 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.454 21:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.023 21:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:55.023 21:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:22:55.282 [2024-07-15 21:36:28.442786] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:56.217 [2024-07-15 21:36:29.538440] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:56.217 [2024-07-15 21:36:29.538604] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:56.217 [2024-07-15 21:36:29.538872] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:56.217 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:56.218 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:56.218 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:56.218 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.218 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.218 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.218 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.218 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.218 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.477 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.477 "name": "raid_bdev1", 00:22:56.477 "uuid": "f23e740a-2320-4085-85e0-e01e41d4ad98", 00:22:56.477 "strip_size_kb": 0, 00:22:56.477 "state": "online", 00:22:56.477 "raid_level": "raid1", 00:22:56.477 "superblock": true, 00:22:56.477 "num_base_bdevs": 3, 00:22:56.477 "num_base_bdevs_discovered": 2, 00:22:56.477 "num_base_bdevs_operational": 2, 00:22:56.477 "base_bdevs_list": [ 00:22:56.477 { 00:22:56.477 "name": null, 00:22:56.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.477 "is_configured": false, 00:22:56.477 "data_offset": 2048, 00:22:56.477 "data_size": 63488 00:22:56.477 }, 00:22:56.477 { 00:22:56.477 "name": "BaseBdev2", 00:22:56.477 "uuid": "b514aea9-e9b4-5593-a9e5-88ed0d859b15", 00:22:56.477 "is_configured": true, 00:22:56.477 "data_offset": 2048, 00:22:56.477 
"data_size": 63488 00:22:56.477 }, 00:22:56.477 { 00:22:56.477 "name": "BaseBdev3", 00:22:56.477 "uuid": "2a3d05bb-b450-55fb-a55f-f2e57d0863c4", 00:22:56.477 "is_configured": true, 00:22:56.477 "data_offset": 2048, 00:22:56.477 "data_size": 63488 00:22:56.477 } 00:22:56.477 ] 00:22:56.477 }' 00:22:56.477 21:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.477 21:36:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.047 21:36:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:57.306 [2024-07-15 21:36:30.563859] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:57.306 [2024-07-15 21:36:30.563947] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:57.306 [2024-07-15 21:36:30.566384] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.306 [2024-07-15 21:36:30.566468] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.306 [2024-07-15 21:36:30.566559] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.306 [2024-07-15 21:36:30.566589] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:57.306 0 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 134945 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 134945 ']' 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 134945 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134945 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134945' 00:22:57.306 killing process with pid 134945 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 134945 00:22:57.306 21:36:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 134945 00:22:57.306 [2024-07-15 21:36:30.606226] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:57.564 [2024-07-15 21:36:30.816798] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Jl8YMPICVV 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:58.941 ************************************ 00:22:58.941 END TEST raid_write_error_test 00:22:58.941 ************************************ 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:22:58.941 21:36:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:58.941 00:22:58.941 real 0m7.285s 00:22:58.941 user 0m10.737s 00:22:58.941 sys 0m0.847s 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:58.941 21:36:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.941 21:36:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:58.941 21:36:32 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:22:58.941 21:36:32 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:58.941 21:36:32 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:22:58.941 21:36:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:58.941 21:36:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.941 21:36:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:58.941 ************************************ 00:22:58.941 START TEST raid_state_function_test 00:22:58.941 ************************************ 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=135136 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 135136' 00:22:58.941 Process raid pid: 135136 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 135136 /var/tmp/spdk-raid.sock 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 135136 ']' 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.941 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.941 [2024-07-15 21:36:32.191382] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:22:58.941 [2024-07-15 21:36:32.191523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.199 [2024-07-15 21:36:32.332449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.199 [2024-07-15 21:36:32.519603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.458 [2024-07-15 21:36:32.703520] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.716 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.716 21:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:22:59.716 21:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:59.975 [2024-07-15 21:36:33.165876] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:59.975 [2024-07-15 21:36:33.165948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:59.975 [2024-07-15 21:36:33.165958] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:59.975 [2024-07-15 21:36:33.165975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:59.975 [2024-07-15 21:36:33.165982] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:59.975 [2024-07-15 21:36:33.165993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:59.975 [2024-07-15 21:36:33.165998] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:59.975 [2024-07-15 21:36:33.166014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.975 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.234 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.234 "name": "Existed_Raid", 00:23:00.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.234 "strip_size_kb": 64, 00:23:00.234 "state": "configuring", 00:23:00.234 "raid_level": "raid0", 00:23:00.234 "superblock": false, 00:23:00.234 "num_base_bdevs": 4, 00:23:00.234 "num_base_bdevs_discovered": 0, 00:23:00.234 "num_base_bdevs_operational": 4, 00:23:00.234 "base_bdevs_list": [ 00:23:00.234 { 00:23:00.234 "name": "BaseBdev1", 00:23:00.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.234 "is_configured": false, 00:23:00.234 "data_offset": 0, 00:23:00.234 "data_size": 0 00:23:00.234 }, 00:23:00.234 { 00:23:00.234 "name": "BaseBdev2", 00:23:00.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.234 "is_configured": false, 00:23:00.234 "data_offset": 0, 00:23:00.234 "data_size": 0 00:23:00.234 }, 00:23:00.234 { 00:23:00.234 "name": "BaseBdev3", 00:23:00.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.234 "is_configured": false, 00:23:00.234 "data_offset": 0, 00:23:00.234 "data_size": 0 00:23:00.234 }, 00:23:00.234 { 00:23:00.234 "name": "BaseBdev4", 00:23:00.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.234 "is_configured": false, 00:23:00.234 "data_offset": 0, 00:23:00.234 "data_size": 0 00:23:00.234 } 00:23:00.234 ] 00:23:00.234 }' 00:23:00.234 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.234 21:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.802 21:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:00.802 [2024-07-15 21:36:34.132109] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:00.802 [2024-07-15 21:36:34.132145] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:00.802 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:01.070 [2024-07-15 21:36:34.303843] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.070 [2024-07-15 21:36:34.303910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.070 [2024-07-15 21:36:34.303919] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:01.070 [2024-07-15 21:36:34.303956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:01.070 [2024-07-15 21:36:34.303963] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:01.070 [2024-07-15 21:36:34.303989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:01.070 [2024-07-15 21:36:34.303995] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:01.070 [2024-07-15 21:36:34.304014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:01.070 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:01.346 [2024-07-15 21:36:34.493428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:01.346 BaseBdev1 00:23:01.346 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:01.346 21:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:01.346 21:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:01.346 21:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:01.346 21:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:01.346 21:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:01.346 21:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:01.346 21:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:01.606 [ 00:23:01.606 { 00:23:01.606 "name": "BaseBdev1", 00:23:01.606 "aliases": [ 00:23:01.606 "12b42aa3-3c9f-457d-b854-f0d5332fb0cf" 00:23:01.606 ], 00:23:01.606 "product_name": "Malloc disk", 00:23:01.606 "block_size": 512, 00:23:01.606 "num_blocks": 65536, 00:23:01.606 "uuid": "12b42aa3-3c9f-457d-b854-f0d5332fb0cf", 00:23:01.606 "assigned_rate_limits": { 00:23:01.606 "rw_ios_per_sec": 0, 00:23:01.606 "rw_mbytes_per_sec": 0, 00:23:01.606 "r_mbytes_per_sec": 0, 00:23:01.606 "w_mbytes_per_sec": 0 00:23:01.606 }, 00:23:01.606 "claimed": true, 00:23:01.606 "claim_type": "exclusive_write", 00:23:01.606 "zoned": false, 00:23:01.606 "supported_io_types": { 00:23:01.606 "read": true, 00:23:01.606 "write": true, 00:23:01.606 "unmap": true, 00:23:01.606 "flush": true, 00:23:01.606 "reset": true, 00:23:01.606 "nvme_admin": false, 00:23:01.606 "nvme_io": false, 00:23:01.606 "nvme_io_md": false, 00:23:01.606 "write_zeroes": true, 00:23:01.606 "zcopy": true, 00:23:01.606 "get_zone_info": false, 00:23:01.606 "zone_management": false, 00:23:01.606 "zone_append": false, 00:23:01.606 "compare": false, 00:23:01.606 "compare_and_write": false, 00:23:01.606 "abort": true, 00:23:01.606 "seek_hole": false, 00:23:01.606 "seek_data": false, 00:23:01.606 "copy": true, 00:23:01.606 "nvme_iov_md": false 00:23:01.606 }, 00:23:01.606 "memory_domains": [ 00:23:01.606 { 00:23:01.606 "dma_device_id": "system", 00:23:01.606 "dma_device_type": 1 00:23:01.606 }, 00:23:01.606 { 00:23:01.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.606 "dma_device_type": 2 00:23:01.606 } 00:23:01.606 ], 00:23:01.606 "driver_specific": {} 00:23:01.606 } 00:23:01.606 ] 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.606 21:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.865 21:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.865 "name": "Existed_Raid", 00:23:01.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.865 "strip_size_kb": 64, 00:23:01.865 "state": "configuring", 00:23:01.865 "raid_level": "raid0", 00:23:01.865 "superblock": false, 00:23:01.865 "num_base_bdevs": 4, 00:23:01.865 "num_base_bdevs_discovered": 1, 00:23:01.865 "num_base_bdevs_operational": 4, 00:23:01.865 "base_bdevs_list": [ 00:23:01.865 { 00:23:01.865 "name": "BaseBdev1", 00:23:01.865 "uuid": "12b42aa3-3c9f-457d-b854-f0d5332fb0cf", 00:23:01.865 "is_configured": true, 00:23:01.865 "data_offset": 0, 00:23:01.865 "data_size": 65536 00:23:01.865 }, 00:23:01.865 { 00:23:01.865 "name": "BaseBdev2", 00:23:01.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.865 "is_configured": false, 00:23:01.865 "data_offset": 0, 00:23:01.865 "data_size": 0 00:23:01.865 }, 00:23:01.865 { 00:23:01.865 "name": "BaseBdev3", 00:23:01.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.865 "is_configured": false, 00:23:01.865 "data_offset": 0, 00:23:01.865 "data_size": 0 00:23:01.865 }, 00:23:01.865 { 00:23:01.865 "name": "BaseBdev4", 00:23:01.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.865 "is_configured": false, 00:23:01.865 "data_offset": 0, 00:23:01.865 "data_size": 0 00:23:01.865 } 00:23:01.865 ] 00:23:01.865 }' 00:23:01.865 21:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.865 21:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.433 21:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:02.693 [2024-07-15 21:36:35.815144] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:02.693 [2024-07-15 21:36:35.815223] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:23:02.693 21:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:02.693 [2024-07-15 21:36:35.994854] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.693 [2024-07-15 21:36:35.996421] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:23:02.693 [2024-07-15 21:36:35.996469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:02.693 [2024-07-15 21:36:35.996477] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:02.693 [2024-07-15 21:36:35.996494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:02.693 [2024-07-15 21:36:35.996500] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:02.693 [2024-07-15 21:36:35.996522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.693 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.953 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.953 "name": "Existed_Raid", 00:23:02.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.953 "strip_size_kb": 64, 00:23:02.953 "state": "configuring", 00:23:02.953 "raid_level": "raid0", 00:23:02.953 "superblock": false, 00:23:02.953 "num_base_bdevs": 4, 00:23:02.953 "num_base_bdevs_discovered": 1, 00:23:02.953 "num_base_bdevs_operational": 4, 00:23:02.953 "base_bdevs_list": [ 00:23:02.953 { 00:23:02.953 "name": "BaseBdev1", 00:23:02.953 "uuid": "12b42aa3-3c9f-457d-b854-f0d5332fb0cf", 00:23:02.953 "is_configured": true, 00:23:02.953 "data_offset": 0, 00:23:02.953 "data_size": 65536 00:23:02.953 }, 00:23:02.953 { 00:23:02.953 "name": "BaseBdev2", 00:23:02.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.953 "is_configured": false, 00:23:02.953 "data_offset": 0, 00:23:02.953 "data_size": 0 00:23:02.953 }, 00:23:02.953 { 00:23:02.953 "name": "BaseBdev3", 00:23:02.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.953 "is_configured": false, 00:23:02.953 "data_offset": 0, 00:23:02.953 "data_size": 0 00:23:02.953 }, 
00:23:02.953 { 00:23:02.953 "name": "BaseBdev4", 00:23:02.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.953 "is_configured": false, 00:23:02.953 "data_offset": 0, 00:23:02.953 "data_size": 0 00:23:02.953 } 00:23:02.953 ] 00:23:02.953 }' 00:23:02.953 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.953 21:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.522 21:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:03.781 [2024-07-15 21:36:37.047510] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:03.781 BaseBdev2 00:23:03.781 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:03.781 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:03.781 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:03.781 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:03.781 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:03.781 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:03.781 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.041 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:04.300 [ 00:23:04.300 { 00:23:04.300 "name": "BaseBdev2", 00:23:04.300 "aliases": [ 00:23:04.300 "53f1d81d-11cc-4dff-a424-3fb79c01f10f" 00:23:04.300 ], 00:23:04.300 "product_name": "Malloc disk", 00:23:04.300 "block_size": 512, 00:23:04.300 "num_blocks": 65536, 00:23:04.300 "uuid": "53f1d81d-11cc-4dff-a424-3fb79c01f10f", 00:23:04.300 "assigned_rate_limits": { 00:23:04.300 "rw_ios_per_sec": 0, 00:23:04.300 "rw_mbytes_per_sec": 0, 00:23:04.300 "r_mbytes_per_sec": 0, 00:23:04.300 "w_mbytes_per_sec": 0 00:23:04.300 }, 00:23:04.300 "claimed": true, 00:23:04.300 "claim_type": "exclusive_write", 00:23:04.300 "zoned": false, 00:23:04.300 "supported_io_types": { 00:23:04.300 "read": true, 00:23:04.300 "write": true, 00:23:04.300 "unmap": true, 00:23:04.300 "flush": true, 00:23:04.300 "reset": true, 00:23:04.300 "nvme_admin": false, 00:23:04.300 "nvme_io": false, 00:23:04.300 "nvme_io_md": false, 00:23:04.300 "write_zeroes": true, 00:23:04.300 "zcopy": true, 00:23:04.300 "get_zone_info": false, 00:23:04.300 "zone_management": false, 00:23:04.300 "zone_append": false, 00:23:04.300 "compare": false, 00:23:04.300 "compare_and_write": false, 00:23:04.300 "abort": true, 00:23:04.300 "seek_hole": false, 00:23:04.300 "seek_data": false, 00:23:04.300 "copy": true, 00:23:04.300 "nvme_iov_md": false 00:23:04.300 }, 00:23:04.300 "memory_domains": [ 00:23:04.300 { 00:23:04.300 "dma_device_id": "system", 00:23:04.300 "dma_device_type": 1 00:23:04.300 }, 00:23:04.300 { 00:23:04.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.300 "dma_device_type": 2 00:23:04.300 } 00:23:04.300 ], 00:23:04.300 "driver_specific": {} 00:23:04.300 } 00:23:04.300 ] 00:23:04.300 21:36:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:04.300 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:04.300 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:04.300 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:04.300 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.300 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:04.300 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:04.300 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.301 "name": "Existed_Raid", 00:23:04.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.301 "strip_size_kb": 64, 00:23:04.301 "state": "configuring", 00:23:04.301 "raid_level": "raid0", 00:23:04.301 "superblock": false, 00:23:04.301 "num_base_bdevs": 4, 00:23:04.301 "num_base_bdevs_discovered": 2, 00:23:04.301 "num_base_bdevs_operational": 4, 00:23:04.301 "base_bdevs_list": [ 00:23:04.301 { 00:23:04.301 "name": "BaseBdev1", 00:23:04.301 "uuid": "12b42aa3-3c9f-457d-b854-f0d5332fb0cf", 00:23:04.301 "is_configured": true, 00:23:04.301 "data_offset": 0, 00:23:04.301 "data_size": 65536 00:23:04.301 }, 00:23:04.301 { 00:23:04.301 "name": "BaseBdev2", 00:23:04.301 "uuid": "53f1d81d-11cc-4dff-a424-3fb79c01f10f", 00:23:04.301 "is_configured": true, 00:23:04.301 "data_offset": 0, 00:23:04.301 "data_size": 65536 00:23:04.301 }, 00:23:04.301 { 00:23:04.301 "name": "BaseBdev3", 00:23:04.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.301 "is_configured": false, 00:23:04.301 "data_offset": 0, 00:23:04.301 "data_size": 0 00:23:04.301 }, 00:23:04.301 { 00:23:04.301 "name": "BaseBdev4", 00:23:04.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.301 "is_configured": false, 00:23:04.301 "data_offset": 0, 00:23:04.301 "data_size": 0 00:23:04.301 } 00:23:04.301 ] 00:23:04.301 }' 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.301 21:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.870 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:23:05.130 [2024-07-15 21:36:38.395300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:05.130 BaseBdev3 00:23:05.130 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:05.130 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:05.130 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:05.130 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:05.130 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:05.130 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:05.130 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:05.390 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:05.390 [ 00:23:05.390 { 00:23:05.390 "name": "BaseBdev3", 00:23:05.390 "aliases": [ 00:23:05.390 "1d292149-9f14-4bd6-923c-deee1e58160b" 00:23:05.390 ], 00:23:05.390 "product_name": "Malloc disk", 00:23:05.390 "block_size": 512, 00:23:05.390 "num_blocks": 65536, 00:23:05.390 "uuid": "1d292149-9f14-4bd6-923c-deee1e58160b", 00:23:05.390 "assigned_rate_limits": { 00:23:05.390 "rw_ios_per_sec": 0, 00:23:05.390 "rw_mbytes_per_sec": 0, 00:23:05.390 "r_mbytes_per_sec": 0, 00:23:05.390 "w_mbytes_per_sec": 0 00:23:05.390 }, 00:23:05.390 "claimed": true, 00:23:05.390 "claim_type": "exclusive_write", 00:23:05.390 "zoned": false, 00:23:05.390 "supported_io_types": { 00:23:05.390 "read": true, 00:23:05.390 "write": true, 00:23:05.390 "unmap": true, 00:23:05.390 "flush": true, 00:23:05.390 "reset": true, 00:23:05.390 "nvme_admin": false, 00:23:05.390 "nvme_io": false, 00:23:05.390 "nvme_io_md": false, 00:23:05.390 "write_zeroes": true, 00:23:05.390 "zcopy": true, 00:23:05.390 "get_zone_info": false, 00:23:05.390 "zone_management": false, 00:23:05.390 "zone_append": false, 00:23:05.390 "compare": false, 00:23:05.390 "compare_and_write": false, 00:23:05.390 "abort": true, 00:23:05.390 "seek_hole": false, 00:23:05.390 "seek_data": false, 00:23:05.390 "copy": true, 00:23:05.390 "nvme_iov_md": false 00:23:05.390 }, 00:23:05.390 "memory_domains": [ 00:23:05.390 { 00:23:05.390 "dma_device_id": "system", 00:23:05.390 "dma_device_type": 1 00:23:05.390 }, 00:23:05.390 { 00:23:05.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.390 "dma_device_type": 2 00:23:05.390 } 00:23:05.390 ], 00:23:05.390 "driver_specific": {} 00:23:05.390 } 00:23:05.390 ] 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.650 "name": "Existed_Raid", 00:23:05.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.650 "strip_size_kb": 64, 00:23:05.650 "state": "configuring", 00:23:05.650 "raid_level": "raid0", 00:23:05.650 "superblock": false, 00:23:05.650 "num_base_bdevs": 4, 00:23:05.650 "num_base_bdevs_discovered": 3, 00:23:05.650 "num_base_bdevs_operational": 4, 00:23:05.650 "base_bdevs_list": [ 00:23:05.650 { 00:23:05.650 "name": "BaseBdev1", 00:23:05.650 "uuid": "12b42aa3-3c9f-457d-b854-f0d5332fb0cf", 00:23:05.650 "is_configured": true, 00:23:05.650 "data_offset": 0, 00:23:05.650 "data_size": 65536 00:23:05.650 }, 00:23:05.650 { 00:23:05.650 "name": "BaseBdev2", 00:23:05.650 "uuid": "53f1d81d-11cc-4dff-a424-3fb79c01f10f", 00:23:05.650 "is_configured": true, 00:23:05.650 "data_offset": 0, 00:23:05.650 "data_size": 65536 00:23:05.650 }, 00:23:05.650 { 00:23:05.650 "name": "BaseBdev3", 00:23:05.650 "uuid": "1d292149-9f14-4bd6-923c-deee1e58160b", 00:23:05.650 "is_configured": true, 00:23:05.650 "data_offset": 0, 00:23:05.650 "data_size": 65536 00:23:05.650 }, 00:23:05.650 { 00:23:05.650 "name": "BaseBdev4", 00:23:05.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.650 "is_configured": false, 00:23:05.650 "data_offset": 0, 00:23:05.650 "data_size": 0 00:23:05.650 } 00:23:05.650 ] 00:23:05.650 }' 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.650 21:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.219 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:06.478 [2024-07-15 21:36:39.743415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:06.478 [2024-07-15 21:36:39.743460] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:23:06.478 [2024-07-15 21:36:39.743467] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:06.478 [2024-07-15 21:36:39.743575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:06.478 [2024-07-15 21:36:39.743858] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:23:06.478 [2024-07-15 21:36:39.743877] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:23:06.478 [2024-07-15 21:36:39.744084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.478 BaseBdev4 00:23:06.478 21:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:06.478 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:06.478 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:06.478 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:06.478 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:06.478 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:06.478 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:06.738 21:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:07.006 [ 00:23:07.006 { 00:23:07.006 "name": "BaseBdev4", 00:23:07.006 "aliases": [ 00:23:07.006 "53099cbb-f72c-4dee-8a46-e63179a7d682" 00:23:07.006 ], 00:23:07.006 "product_name": "Malloc disk", 00:23:07.006 "block_size": 512, 00:23:07.006 "num_blocks": 65536, 00:23:07.006 "uuid": "53099cbb-f72c-4dee-8a46-e63179a7d682", 00:23:07.006 "assigned_rate_limits": { 00:23:07.006 "rw_ios_per_sec": 0, 00:23:07.006 "rw_mbytes_per_sec": 0, 00:23:07.006 "r_mbytes_per_sec": 0, 00:23:07.006 "w_mbytes_per_sec": 0 00:23:07.006 }, 00:23:07.006 "claimed": true, 00:23:07.006 "claim_type": "exclusive_write", 00:23:07.006 "zoned": false, 00:23:07.006 "supported_io_types": { 00:23:07.006 "read": true, 00:23:07.006 "write": true, 00:23:07.006 "unmap": true, 00:23:07.006 "flush": true, 00:23:07.006 "reset": true, 00:23:07.006 "nvme_admin": false, 00:23:07.006 "nvme_io": false, 00:23:07.006 "nvme_io_md": false, 00:23:07.006 "write_zeroes": true, 00:23:07.006 "zcopy": true, 00:23:07.006 "get_zone_info": false, 00:23:07.006 "zone_management": false, 00:23:07.006 "zone_append": false, 00:23:07.006 "compare": false, 00:23:07.006 "compare_and_write": false, 00:23:07.006 "abort": true, 00:23:07.006 "seek_hole": false, 00:23:07.006 "seek_data": false, 00:23:07.006 "copy": true, 00:23:07.006 "nvme_iov_md": false 00:23:07.006 }, 00:23:07.006 "memory_domains": [ 00:23:07.006 { 00:23:07.006 "dma_device_id": "system", 00:23:07.006 "dma_device_type": 1 00:23:07.006 }, 00:23:07.006 { 00:23:07.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.006 "dma_device_type": 2 00:23:07.006 } 00:23:07.006 ], 00:23:07.006 "driver_specific": {} 00:23:07.006 } 00:23:07.006 ] 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.006 "name": "Existed_Raid", 00:23:07.006 "uuid": "ac589d6b-7bff-44fb-9209-a90d0549295d", 00:23:07.006 "strip_size_kb": 64, 00:23:07.006 "state": "online", 00:23:07.006 "raid_level": "raid0", 00:23:07.006 "superblock": false, 00:23:07.006 "num_base_bdevs": 4, 00:23:07.006 "num_base_bdevs_discovered": 4, 00:23:07.006 "num_base_bdevs_operational": 4, 00:23:07.006 "base_bdevs_list": [ 00:23:07.006 { 00:23:07.006 "name": "BaseBdev1", 00:23:07.006 "uuid": "12b42aa3-3c9f-457d-b854-f0d5332fb0cf", 00:23:07.006 "is_configured": true, 00:23:07.006 "data_offset": 0, 00:23:07.006 "data_size": 65536 00:23:07.006 }, 00:23:07.006 { 00:23:07.006 "name": "BaseBdev2", 00:23:07.006 "uuid": "53f1d81d-11cc-4dff-a424-3fb79c01f10f", 00:23:07.006 "is_configured": true, 00:23:07.006 "data_offset": 0, 00:23:07.006 "data_size": 65536 00:23:07.006 }, 00:23:07.006 { 00:23:07.006 "name": "BaseBdev3", 00:23:07.006 "uuid": "1d292149-9f14-4bd6-923c-deee1e58160b", 00:23:07.006 "is_configured": true, 00:23:07.006 "data_offset": 0, 00:23:07.006 "data_size": 65536 00:23:07.006 }, 00:23:07.006 { 00:23:07.006 "name": "BaseBdev4", 00:23:07.006 "uuid": "53099cbb-f72c-4dee-8a46-e63179a7d682", 00:23:07.006 "is_configured": true, 00:23:07.006 "data_offset": 0, 00:23:07.006 "data_size": 65536 00:23:07.006 } 00:23:07.006 ] 00:23:07.006 }' 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.006 21:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.582 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:07.582 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:07.582 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:07.582 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:07.582 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:07.582 21:36:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:07.582 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:07.582 21:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:07.840 [2024-07-15 21:36:41.097375] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.840 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:07.840 "name": "Existed_Raid", 00:23:07.840 "aliases": [ 00:23:07.840 "ac589d6b-7bff-44fb-9209-a90d0549295d" 00:23:07.840 ], 00:23:07.840 "product_name": "Raid Volume", 00:23:07.840 "block_size": 512, 00:23:07.840 "num_blocks": 262144, 00:23:07.840 "uuid": "ac589d6b-7bff-44fb-9209-a90d0549295d", 00:23:07.840 "assigned_rate_limits": { 00:23:07.840 "rw_ios_per_sec": 0, 00:23:07.840 "rw_mbytes_per_sec": 0, 00:23:07.840 "r_mbytes_per_sec": 0, 00:23:07.840 "w_mbytes_per_sec": 0 00:23:07.840 }, 00:23:07.840 "claimed": false, 00:23:07.840 "zoned": false, 00:23:07.840 "supported_io_types": { 00:23:07.840 "read": true, 00:23:07.840 "write": true, 00:23:07.840 "unmap": true, 00:23:07.840 "flush": true, 00:23:07.840 "reset": true, 00:23:07.840 "nvme_admin": false, 00:23:07.840 "nvme_io": false, 00:23:07.840 "nvme_io_md": false, 00:23:07.840 "write_zeroes": true, 00:23:07.840 "zcopy": false, 00:23:07.840 "get_zone_info": false, 00:23:07.840 "zone_management": false, 00:23:07.840 "zone_append": false, 00:23:07.840 "compare": false, 00:23:07.840 "compare_and_write": false, 00:23:07.840 "abort": false, 00:23:07.840 "seek_hole": false, 00:23:07.840 "seek_data": false, 00:23:07.840 "copy": false, 00:23:07.840 "nvme_iov_md": false 00:23:07.840 }, 00:23:07.840 "memory_domains": [ 00:23:07.840 { 00:23:07.840 "dma_device_id": "system", 00:23:07.840 "dma_device_type": 1 00:23:07.840 }, 00:23:07.840 { 00:23:07.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.840 "dma_device_type": 2 00:23:07.840 }, 00:23:07.840 { 00:23:07.840 "dma_device_id": "system", 00:23:07.840 "dma_device_type": 1 00:23:07.840 }, 00:23:07.840 { 00:23:07.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.841 "dma_device_type": 2 00:23:07.841 }, 00:23:07.841 { 00:23:07.841 "dma_device_id": "system", 00:23:07.841 "dma_device_type": 1 00:23:07.841 }, 00:23:07.841 { 00:23:07.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.841 "dma_device_type": 2 00:23:07.841 }, 00:23:07.841 { 00:23:07.841 "dma_device_id": "system", 00:23:07.841 "dma_device_type": 1 00:23:07.841 }, 00:23:07.841 { 00:23:07.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.841 "dma_device_type": 2 00:23:07.841 } 00:23:07.841 ], 00:23:07.841 "driver_specific": { 00:23:07.841 "raid": { 00:23:07.841 "uuid": "ac589d6b-7bff-44fb-9209-a90d0549295d", 00:23:07.841 "strip_size_kb": 64, 00:23:07.841 "state": "online", 00:23:07.841 "raid_level": "raid0", 00:23:07.841 "superblock": false, 00:23:07.841 "num_base_bdevs": 4, 00:23:07.841 "num_base_bdevs_discovered": 4, 00:23:07.841 "num_base_bdevs_operational": 4, 00:23:07.841 "base_bdevs_list": [ 00:23:07.841 { 00:23:07.841 "name": "BaseBdev1", 00:23:07.841 "uuid": "12b42aa3-3c9f-457d-b854-f0d5332fb0cf", 00:23:07.841 "is_configured": true, 00:23:07.841 "data_offset": 0, 00:23:07.841 "data_size": 65536 00:23:07.841 }, 00:23:07.841 { 00:23:07.841 "name": "BaseBdev2", 00:23:07.841 "uuid": "53f1d81d-11cc-4dff-a424-3fb79c01f10f", 00:23:07.841 
"is_configured": true, 00:23:07.841 "data_offset": 0, 00:23:07.841 "data_size": 65536 00:23:07.841 }, 00:23:07.841 { 00:23:07.841 "name": "BaseBdev3", 00:23:07.841 "uuid": "1d292149-9f14-4bd6-923c-deee1e58160b", 00:23:07.841 "is_configured": true, 00:23:07.841 "data_offset": 0, 00:23:07.841 "data_size": 65536 00:23:07.841 }, 00:23:07.841 { 00:23:07.841 "name": "BaseBdev4", 00:23:07.841 "uuid": "53099cbb-f72c-4dee-8a46-e63179a7d682", 00:23:07.841 "is_configured": true, 00:23:07.841 "data_offset": 0, 00:23:07.841 "data_size": 65536 00:23:07.841 } 00:23:07.841 ] 00:23:07.841 } 00:23:07.841 } 00:23:07.841 }' 00:23:07.841 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:07.841 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:07.841 BaseBdev2 00:23:07.841 BaseBdev3 00:23:07.841 BaseBdev4' 00:23:07.841 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:07.841 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:07.841 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:08.099 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:08.099 "name": "BaseBdev1", 00:23:08.099 "aliases": [ 00:23:08.099 "12b42aa3-3c9f-457d-b854-f0d5332fb0cf" 00:23:08.099 ], 00:23:08.099 "product_name": "Malloc disk", 00:23:08.099 "block_size": 512, 00:23:08.099 "num_blocks": 65536, 00:23:08.100 "uuid": "12b42aa3-3c9f-457d-b854-f0d5332fb0cf", 00:23:08.100 "assigned_rate_limits": { 00:23:08.100 "rw_ios_per_sec": 0, 00:23:08.100 "rw_mbytes_per_sec": 0, 00:23:08.100 "r_mbytes_per_sec": 0, 00:23:08.100 "w_mbytes_per_sec": 0 00:23:08.100 }, 00:23:08.100 "claimed": true, 00:23:08.100 "claim_type": "exclusive_write", 00:23:08.100 "zoned": false, 00:23:08.100 "supported_io_types": { 00:23:08.100 "read": true, 00:23:08.100 "write": true, 00:23:08.100 "unmap": true, 00:23:08.100 "flush": true, 00:23:08.100 "reset": true, 00:23:08.100 "nvme_admin": false, 00:23:08.100 "nvme_io": false, 00:23:08.100 "nvme_io_md": false, 00:23:08.100 "write_zeroes": true, 00:23:08.100 "zcopy": true, 00:23:08.100 "get_zone_info": false, 00:23:08.100 "zone_management": false, 00:23:08.100 "zone_append": false, 00:23:08.100 "compare": false, 00:23:08.100 "compare_and_write": false, 00:23:08.100 "abort": true, 00:23:08.100 "seek_hole": false, 00:23:08.100 "seek_data": false, 00:23:08.100 "copy": true, 00:23:08.100 "nvme_iov_md": false 00:23:08.100 }, 00:23:08.100 "memory_domains": [ 00:23:08.100 { 00:23:08.100 "dma_device_id": "system", 00:23:08.100 "dma_device_type": 1 00:23:08.100 }, 00:23:08.100 { 00:23:08.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.100 "dma_device_type": 2 00:23:08.100 } 00:23:08.100 ], 00:23:08.100 "driver_specific": {} 00:23:08.100 }' 00:23:08.100 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.100 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.100 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:08.100 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.358 21:36:41 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.358 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:08.358 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.358 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.358 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:08.358 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.358 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.618 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:08.618 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:08.618 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:08.618 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:08.618 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:08.618 "name": "BaseBdev2", 00:23:08.618 "aliases": [ 00:23:08.618 "53f1d81d-11cc-4dff-a424-3fb79c01f10f" 00:23:08.618 ], 00:23:08.618 "product_name": "Malloc disk", 00:23:08.618 "block_size": 512, 00:23:08.618 "num_blocks": 65536, 00:23:08.618 "uuid": "53f1d81d-11cc-4dff-a424-3fb79c01f10f", 00:23:08.618 "assigned_rate_limits": { 00:23:08.618 "rw_ios_per_sec": 0, 00:23:08.618 "rw_mbytes_per_sec": 0, 00:23:08.618 "r_mbytes_per_sec": 0, 00:23:08.618 "w_mbytes_per_sec": 0 00:23:08.618 }, 00:23:08.618 "claimed": true, 00:23:08.618 "claim_type": "exclusive_write", 00:23:08.618 "zoned": false, 00:23:08.618 "supported_io_types": { 00:23:08.618 "read": true, 00:23:08.618 "write": true, 00:23:08.618 "unmap": true, 00:23:08.618 "flush": true, 00:23:08.618 "reset": true, 00:23:08.618 "nvme_admin": false, 00:23:08.618 "nvme_io": false, 00:23:08.618 "nvme_io_md": false, 00:23:08.618 "write_zeroes": true, 00:23:08.618 "zcopy": true, 00:23:08.618 "get_zone_info": false, 00:23:08.618 "zone_management": false, 00:23:08.618 "zone_append": false, 00:23:08.618 "compare": false, 00:23:08.618 "compare_and_write": false, 00:23:08.618 "abort": true, 00:23:08.618 "seek_hole": false, 00:23:08.618 "seek_data": false, 00:23:08.618 "copy": true, 00:23:08.618 "nvme_iov_md": false 00:23:08.618 }, 00:23:08.618 "memory_domains": [ 00:23:08.618 { 00:23:08.618 "dma_device_id": "system", 00:23:08.618 "dma_device_type": 1 00:23:08.618 }, 00:23:08.618 { 00:23:08.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.618 "dma_device_type": 2 00:23:08.618 } 00:23:08.618 ], 00:23:08.618 "driver_specific": {} 00:23:08.618 }' 00:23:08.618 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.877 21:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.877 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:08.877 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.877 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.877 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:08.877 21:36:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.877 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.135 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:09.135 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.135 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.135 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:09.136 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:09.136 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:09.136 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:09.394 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:09.394 "name": "BaseBdev3", 00:23:09.394 "aliases": [ 00:23:09.394 "1d292149-9f14-4bd6-923c-deee1e58160b" 00:23:09.394 ], 00:23:09.394 "product_name": "Malloc disk", 00:23:09.394 "block_size": 512, 00:23:09.394 "num_blocks": 65536, 00:23:09.394 "uuid": "1d292149-9f14-4bd6-923c-deee1e58160b", 00:23:09.394 "assigned_rate_limits": { 00:23:09.394 "rw_ios_per_sec": 0, 00:23:09.394 "rw_mbytes_per_sec": 0, 00:23:09.394 "r_mbytes_per_sec": 0, 00:23:09.394 "w_mbytes_per_sec": 0 00:23:09.394 }, 00:23:09.394 "claimed": true, 00:23:09.394 "claim_type": "exclusive_write", 00:23:09.394 "zoned": false, 00:23:09.394 "supported_io_types": { 00:23:09.394 "read": true, 00:23:09.394 "write": true, 00:23:09.394 "unmap": true, 00:23:09.394 "flush": true, 00:23:09.394 "reset": true, 00:23:09.394 "nvme_admin": false, 00:23:09.394 "nvme_io": false, 00:23:09.394 "nvme_io_md": false, 00:23:09.394 "write_zeroes": true, 00:23:09.394 "zcopy": true, 00:23:09.394 "get_zone_info": false, 00:23:09.394 "zone_management": false, 00:23:09.394 "zone_append": false, 00:23:09.394 "compare": false, 00:23:09.394 "compare_and_write": false, 00:23:09.394 "abort": true, 00:23:09.394 "seek_hole": false, 00:23:09.394 "seek_data": false, 00:23:09.394 "copy": true, 00:23:09.394 "nvme_iov_md": false 00:23:09.394 }, 00:23:09.394 "memory_domains": [ 00:23:09.394 { 00:23:09.394 "dma_device_id": "system", 00:23:09.394 "dma_device_type": 1 00:23:09.394 }, 00:23:09.394 { 00:23:09.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.394 "dma_device_type": 2 00:23:09.394 } 00:23:09.394 ], 00:23:09.394 "driver_specific": {} 00:23:09.394 }' 00:23:09.394 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.394 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.394 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:09.394 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.394 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.654 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:09.654 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.654 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.654 
21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:09.654 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.654 21:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.654 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:09.654 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:09.654 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:09.654 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:09.984 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:09.984 "name": "BaseBdev4", 00:23:09.984 "aliases": [ 00:23:09.984 "53099cbb-f72c-4dee-8a46-e63179a7d682" 00:23:09.984 ], 00:23:09.984 "product_name": "Malloc disk", 00:23:09.984 "block_size": 512, 00:23:09.984 "num_blocks": 65536, 00:23:09.984 "uuid": "53099cbb-f72c-4dee-8a46-e63179a7d682", 00:23:09.984 "assigned_rate_limits": { 00:23:09.984 "rw_ios_per_sec": 0, 00:23:09.984 "rw_mbytes_per_sec": 0, 00:23:09.984 "r_mbytes_per_sec": 0, 00:23:09.984 "w_mbytes_per_sec": 0 00:23:09.984 }, 00:23:09.984 "claimed": true, 00:23:09.984 "claim_type": "exclusive_write", 00:23:09.984 "zoned": false, 00:23:09.984 "supported_io_types": { 00:23:09.984 "read": true, 00:23:09.984 "write": true, 00:23:09.984 "unmap": true, 00:23:09.984 "flush": true, 00:23:09.984 "reset": true, 00:23:09.984 "nvme_admin": false, 00:23:09.984 "nvme_io": false, 00:23:09.984 "nvme_io_md": false, 00:23:09.984 "write_zeroes": true, 00:23:09.984 "zcopy": true, 00:23:09.984 "get_zone_info": false, 00:23:09.984 "zone_management": false, 00:23:09.984 "zone_append": false, 00:23:09.984 "compare": false, 00:23:09.984 "compare_and_write": false, 00:23:09.984 "abort": true, 00:23:09.984 "seek_hole": false, 00:23:09.984 "seek_data": false, 00:23:09.984 "copy": true, 00:23:09.984 "nvme_iov_md": false 00:23:09.984 }, 00:23:09.984 "memory_domains": [ 00:23:09.984 { 00:23:09.984 "dma_device_id": "system", 00:23:09.984 "dma_device_type": 1 00:23:09.984 }, 00:23:09.984 { 00:23:09.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.984 "dma_device_type": 2 00:23:09.984 } 00:23:09.984 ], 00:23:09.984 "driver_specific": {} 00:23:09.984 }' 00:23:09.984 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.984 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.984 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:09.984 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.984 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.249 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:10.249 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.249 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.249 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:10.249 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:23:10.249 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.249 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:10.249 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:10.507 [2024-07-15 21:36:43.784602] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:10.507 [2024-07-15 21:36:43.784636] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:10.507 [2024-07-15 21:36:43.784685] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.765 21:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.765 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.765 "name": "Existed_Raid", 00:23:10.765 "uuid": "ac589d6b-7bff-44fb-9209-a90d0549295d", 00:23:10.765 "strip_size_kb": 64, 00:23:10.765 "state": "offline", 00:23:10.765 "raid_level": "raid0", 00:23:10.765 "superblock": false, 00:23:10.765 "num_base_bdevs": 4, 00:23:10.765 "num_base_bdevs_discovered": 3, 00:23:10.765 "num_base_bdevs_operational": 3, 00:23:10.765 "base_bdevs_list": [ 00:23:10.765 { 00:23:10.765 "name": null, 00:23:10.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.765 "is_configured": false, 00:23:10.765 "data_offset": 0, 00:23:10.765 "data_size": 65536 00:23:10.765 }, 00:23:10.765 { 00:23:10.765 "name": "BaseBdev2", 00:23:10.765 "uuid": 
"53f1d81d-11cc-4dff-a424-3fb79c01f10f", 00:23:10.765 "is_configured": true, 00:23:10.765 "data_offset": 0, 00:23:10.765 "data_size": 65536 00:23:10.765 }, 00:23:10.765 { 00:23:10.765 "name": "BaseBdev3", 00:23:10.765 "uuid": "1d292149-9f14-4bd6-923c-deee1e58160b", 00:23:10.765 "is_configured": true, 00:23:10.765 "data_offset": 0, 00:23:10.765 "data_size": 65536 00:23:10.765 }, 00:23:10.766 { 00:23:10.766 "name": "BaseBdev4", 00:23:10.766 "uuid": "53099cbb-f72c-4dee-8a46-e63179a7d682", 00:23:10.766 "is_configured": true, 00:23:10.766 "data_offset": 0, 00:23:10.766 "data_size": 65536 00:23:10.766 } 00:23:10.766 ] 00:23:10.766 }' 00:23:10.766 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.766 21:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.333 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:11.333 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:11.333 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.333 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:11.592 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:11.592 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:11.592 21:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:11.851 [2024-07-15 21:36:45.057230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:11.851 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:11.851 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:11.851 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.851 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:12.111 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:12.111 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.111 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:12.370 [2024-07-15 21:36:45.524453] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:12.370 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:12.370 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:12.370 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.370 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:12.630 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:12.630 21:36:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.630 21:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:12.630 [2024-07-15 21:36:45.999668] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:12.630 [2024-07-15 21:36:45.999722] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:23:12.889 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:12.889 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:12.889 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.889 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:13.148 BaseBdev2 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:13.148 21:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:13.406 21:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:13.669 [ 00:23:13.669 { 00:23:13.669 "name": "BaseBdev2", 00:23:13.669 "aliases": [ 00:23:13.669 "ed72f2c1-b384-408a-9556-1e03e7aac6b9" 00:23:13.669 ], 00:23:13.669 "product_name": "Malloc disk", 00:23:13.669 "block_size": 512, 00:23:13.669 "num_blocks": 65536, 00:23:13.669 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:13.669 "assigned_rate_limits": { 00:23:13.669 "rw_ios_per_sec": 0, 00:23:13.669 "rw_mbytes_per_sec": 0, 00:23:13.669 "r_mbytes_per_sec": 0, 00:23:13.669 "w_mbytes_per_sec": 0 00:23:13.669 }, 00:23:13.669 "claimed": false, 00:23:13.669 "zoned": false, 00:23:13.669 "supported_io_types": { 00:23:13.669 "read": true, 00:23:13.669 "write": true, 00:23:13.669 "unmap": 
true, 00:23:13.669 "flush": true, 00:23:13.669 "reset": true, 00:23:13.669 "nvme_admin": false, 00:23:13.669 "nvme_io": false, 00:23:13.669 "nvme_io_md": false, 00:23:13.669 "write_zeroes": true, 00:23:13.669 "zcopy": true, 00:23:13.669 "get_zone_info": false, 00:23:13.669 "zone_management": false, 00:23:13.669 "zone_append": false, 00:23:13.669 "compare": false, 00:23:13.669 "compare_and_write": false, 00:23:13.669 "abort": true, 00:23:13.669 "seek_hole": false, 00:23:13.669 "seek_data": false, 00:23:13.669 "copy": true, 00:23:13.669 "nvme_iov_md": false 00:23:13.669 }, 00:23:13.669 "memory_domains": [ 00:23:13.669 { 00:23:13.669 "dma_device_id": "system", 00:23:13.669 "dma_device_type": 1 00:23:13.669 }, 00:23:13.669 { 00:23:13.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.669 "dma_device_type": 2 00:23:13.669 } 00:23:13.669 ], 00:23:13.669 "driver_specific": {} 00:23:13.669 } 00:23:13.669 ] 00:23:13.669 21:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:13.669 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:13.669 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:13.669 21:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:13.933 BaseBdev3 00:23:13.933 21:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:13.933 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:13.933 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:13.933 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:13.933 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:13.933 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:13.933 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:13.933 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:14.207 [ 00:23:14.207 { 00:23:14.207 "name": "BaseBdev3", 00:23:14.207 "aliases": [ 00:23:14.207 "17685da8-2e22-4752-9886-01ec735b4979" 00:23:14.207 ], 00:23:14.207 "product_name": "Malloc disk", 00:23:14.207 "block_size": 512, 00:23:14.207 "num_blocks": 65536, 00:23:14.207 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:14.207 "assigned_rate_limits": { 00:23:14.207 "rw_ios_per_sec": 0, 00:23:14.207 "rw_mbytes_per_sec": 0, 00:23:14.207 "r_mbytes_per_sec": 0, 00:23:14.207 "w_mbytes_per_sec": 0 00:23:14.207 }, 00:23:14.207 "claimed": false, 00:23:14.207 "zoned": false, 00:23:14.207 "supported_io_types": { 00:23:14.207 "read": true, 00:23:14.207 "write": true, 00:23:14.207 "unmap": true, 00:23:14.207 "flush": true, 00:23:14.207 "reset": true, 00:23:14.207 "nvme_admin": false, 00:23:14.207 "nvme_io": false, 00:23:14.207 "nvme_io_md": false, 00:23:14.207 "write_zeroes": true, 00:23:14.207 "zcopy": true, 00:23:14.207 "get_zone_info": false, 00:23:14.207 "zone_management": false, 00:23:14.207 "zone_append": false, 00:23:14.207 
"compare": false, 00:23:14.207 "compare_and_write": false, 00:23:14.207 "abort": true, 00:23:14.207 "seek_hole": false, 00:23:14.207 "seek_data": false, 00:23:14.207 "copy": true, 00:23:14.207 "nvme_iov_md": false 00:23:14.207 }, 00:23:14.207 "memory_domains": [ 00:23:14.207 { 00:23:14.207 "dma_device_id": "system", 00:23:14.207 "dma_device_type": 1 00:23:14.207 }, 00:23:14.207 { 00:23:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.207 "dma_device_type": 2 00:23:14.207 } 00:23:14.207 ], 00:23:14.207 "driver_specific": {} 00:23:14.207 } 00:23:14.207 ] 00:23:14.207 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:14.207 21:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:14.207 21:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:14.207 21:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:14.466 BaseBdev4 00:23:14.466 21:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:14.466 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:14.466 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:14.466 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:14.466 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:14.466 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:14.466 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:14.725 21:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:14.725 [ 00:23:14.725 { 00:23:14.725 "name": "BaseBdev4", 00:23:14.725 "aliases": [ 00:23:14.725 "9f959c40-fade-432d-91c2-ad1684ecf760" 00:23:14.725 ], 00:23:14.725 "product_name": "Malloc disk", 00:23:14.725 "block_size": 512, 00:23:14.725 "num_blocks": 65536, 00:23:14.725 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:14.725 "assigned_rate_limits": { 00:23:14.725 "rw_ios_per_sec": 0, 00:23:14.725 "rw_mbytes_per_sec": 0, 00:23:14.725 "r_mbytes_per_sec": 0, 00:23:14.725 "w_mbytes_per_sec": 0 00:23:14.725 }, 00:23:14.725 "claimed": false, 00:23:14.725 "zoned": false, 00:23:14.725 "supported_io_types": { 00:23:14.725 "read": true, 00:23:14.725 "write": true, 00:23:14.725 "unmap": true, 00:23:14.725 "flush": true, 00:23:14.725 "reset": true, 00:23:14.725 "nvme_admin": false, 00:23:14.725 "nvme_io": false, 00:23:14.725 "nvme_io_md": false, 00:23:14.725 "write_zeroes": true, 00:23:14.725 "zcopy": true, 00:23:14.725 "get_zone_info": false, 00:23:14.725 "zone_management": false, 00:23:14.725 "zone_append": false, 00:23:14.725 "compare": false, 00:23:14.725 "compare_and_write": false, 00:23:14.725 "abort": true, 00:23:14.725 "seek_hole": false, 00:23:14.725 "seek_data": false, 00:23:14.725 "copy": true, 00:23:14.725 "nvme_iov_md": false 00:23:14.725 }, 00:23:14.725 "memory_domains": [ 00:23:14.725 { 00:23:14.725 "dma_device_id": "system", 00:23:14.725 
"dma_device_type": 1 00:23:14.725 }, 00:23:14.725 { 00:23:14.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.725 "dma_device_type": 2 00:23:14.725 } 00:23:14.725 ], 00:23:14.725 "driver_specific": {} 00:23:14.725 } 00:23:14.725 ] 00:23:14.725 21:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:14.725 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:14.725 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:14.725 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:14.992 [2024-07-15 21:36:48.225578] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:14.992 [2024-07-15 21:36:48.225647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:14.992 [2024-07-15 21:36:48.225683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:14.992 [2024-07-15 21:36:48.227310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:14.992 [2024-07-15 21:36:48.227368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.992 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.256 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.256 "name": "Existed_Raid", 00:23:15.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.256 "strip_size_kb": 64, 00:23:15.256 "state": "configuring", 00:23:15.256 "raid_level": "raid0", 00:23:15.256 "superblock": false, 00:23:15.256 "num_base_bdevs": 4, 00:23:15.256 "num_base_bdevs_discovered": 3, 00:23:15.256 "num_base_bdevs_operational": 4, 00:23:15.256 "base_bdevs_list": [ 00:23:15.256 { 00:23:15.256 "name": "BaseBdev1", 00:23:15.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.256 "is_configured": 
false, 00:23:15.256 "data_offset": 0, 00:23:15.256 "data_size": 0 00:23:15.256 }, 00:23:15.256 { 00:23:15.256 "name": "BaseBdev2", 00:23:15.256 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:15.256 "is_configured": true, 00:23:15.256 "data_offset": 0, 00:23:15.256 "data_size": 65536 00:23:15.256 }, 00:23:15.256 { 00:23:15.256 "name": "BaseBdev3", 00:23:15.256 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:15.256 "is_configured": true, 00:23:15.256 "data_offset": 0, 00:23:15.256 "data_size": 65536 00:23:15.256 }, 00:23:15.256 { 00:23:15.256 "name": "BaseBdev4", 00:23:15.256 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:15.256 "is_configured": true, 00:23:15.256 "data_offset": 0, 00:23:15.256 "data_size": 65536 00:23:15.256 } 00:23:15.256 ] 00:23:15.256 }' 00:23:15.256 21:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.256 21:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.822 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:16.081 [2024-07-15 21:36:49.239824] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:16.081 "name": "Existed_Raid", 00:23:16.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.081 "strip_size_kb": 64, 00:23:16.081 "state": "configuring", 00:23:16.081 "raid_level": "raid0", 00:23:16.081 "superblock": false, 00:23:16.081 "num_base_bdevs": 4, 00:23:16.081 "num_base_bdevs_discovered": 2, 00:23:16.081 "num_base_bdevs_operational": 4, 00:23:16.081 "base_bdevs_list": [ 00:23:16.081 { 00:23:16.081 "name": "BaseBdev1", 00:23:16.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.081 "is_configured": false, 00:23:16.081 "data_offset": 0, 00:23:16.081 "data_size": 0 00:23:16.081 }, 00:23:16.081 { 00:23:16.081 "name": null, 00:23:16.081 "uuid": 
"ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:16.081 "is_configured": false, 00:23:16.081 "data_offset": 0, 00:23:16.081 "data_size": 65536 00:23:16.081 }, 00:23:16.081 { 00:23:16.081 "name": "BaseBdev3", 00:23:16.081 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:16.081 "is_configured": true, 00:23:16.081 "data_offset": 0, 00:23:16.081 "data_size": 65536 00:23:16.081 }, 00:23:16.081 { 00:23:16.081 "name": "BaseBdev4", 00:23:16.081 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:16.081 "is_configured": true, 00:23:16.081 "data_offset": 0, 00:23:16.081 "data_size": 65536 00:23:16.081 } 00:23:16.081 ] 00:23:16.081 }' 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:16.081 21:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.018 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.018 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:17.018 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:17.018 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:17.277 [2024-07-15 21:36:50.456705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.277 BaseBdev1 00:23:17.277 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:17.277 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:17.277 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:17.277 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:17.277 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:17.277 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:17.277 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.277 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:17.536 [ 00:23:17.536 { 00:23:17.536 "name": "BaseBdev1", 00:23:17.536 "aliases": [ 00:23:17.536 "23e65fea-74c7-4591-be3c-55506e0f1e1b" 00:23:17.536 ], 00:23:17.536 "product_name": "Malloc disk", 00:23:17.536 "block_size": 512, 00:23:17.536 "num_blocks": 65536, 00:23:17.536 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:17.536 "assigned_rate_limits": { 00:23:17.536 "rw_ios_per_sec": 0, 00:23:17.536 "rw_mbytes_per_sec": 0, 00:23:17.536 "r_mbytes_per_sec": 0, 00:23:17.536 "w_mbytes_per_sec": 0 00:23:17.536 }, 00:23:17.536 "claimed": true, 00:23:17.536 "claim_type": "exclusive_write", 00:23:17.536 "zoned": false, 00:23:17.536 "supported_io_types": { 00:23:17.536 "read": true, 00:23:17.536 "write": true, 00:23:17.536 "unmap": true, 00:23:17.536 "flush": true, 00:23:17.536 "reset": true, 00:23:17.536 "nvme_admin": false, 00:23:17.536 "nvme_io": false, 00:23:17.536 
"nvme_io_md": false, 00:23:17.536 "write_zeroes": true, 00:23:17.536 "zcopy": true, 00:23:17.536 "get_zone_info": false, 00:23:17.536 "zone_management": false, 00:23:17.536 "zone_append": false, 00:23:17.536 "compare": false, 00:23:17.536 "compare_and_write": false, 00:23:17.536 "abort": true, 00:23:17.536 "seek_hole": false, 00:23:17.536 "seek_data": false, 00:23:17.536 "copy": true, 00:23:17.536 "nvme_iov_md": false 00:23:17.536 }, 00:23:17.536 "memory_domains": [ 00:23:17.536 { 00:23:17.536 "dma_device_id": "system", 00:23:17.536 "dma_device_type": 1 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.536 "dma_device_type": 2 00:23:17.536 } 00:23:17.536 ], 00:23:17.536 "driver_specific": {} 00:23:17.536 } 00:23:17.536 ] 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.536 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.808 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.808 "name": "Existed_Raid", 00:23:17.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.808 "strip_size_kb": 64, 00:23:17.808 "state": "configuring", 00:23:17.808 "raid_level": "raid0", 00:23:17.808 "superblock": false, 00:23:17.808 "num_base_bdevs": 4, 00:23:17.808 "num_base_bdevs_discovered": 3, 00:23:17.808 "num_base_bdevs_operational": 4, 00:23:17.808 "base_bdevs_list": [ 00:23:17.808 { 00:23:17.808 "name": "BaseBdev1", 00:23:17.808 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:17.808 "is_configured": true, 00:23:17.808 "data_offset": 0, 00:23:17.808 "data_size": 65536 00:23:17.808 }, 00:23:17.808 { 00:23:17.808 "name": null, 00:23:17.808 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:17.808 "is_configured": false, 00:23:17.808 "data_offset": 0, 00:23:17.808 "data_size": 65536 00:23:17.808 }, 00:23:17.808 { 00:23:17.808 "name": "BaseBdev3", 00:23:17.808 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:17.808 "is_configured": true, 00:23:17.808 "data_offset": 0, 00:23:17.808 "data_size": 65536 00:23:17.808 }, 00:23:17.808 { 00:23:17.808 
"name": "BaseBdev4", 00:23:17.808 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:17.808 "is_configured": true, 00:23:17.808 "data_offset": 0, 00:23:17.808 "data_size": 65536 00:23:17.808 } 00:23:17.808 ] 00:23:17.808 }' 00:23:17.808 21:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.808 21:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.402 21:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.403 21:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:18.661 21:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:18.661 21:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:18.921 [2024-07-15 21:36:52.046047] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.921 "name": "Existed_Raid", 00:23:18.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.921 "strip_size_kb": 64, 00:23:18.921 "state": "configuring", 00:23:18.921 "raid_level": "raid0", 00:23:18.921 "superblock": false, 00:23:18.921 "num_base_bdevs": 4, 00:23:18.921 "num_base_bdevs_discovered": 2, 00:23:18.921 "num_base_bdevs_operational": 4, 00:23:18.921 "base_bdevs_list": [ 00:23:18.921 { 00:23:18.921 "name": "BaseBdev1", 00:23:18.921 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:18.921 "is_configured": true, 00:23:18.921 "data_offset": 0, 00:23:18.921 "data_size": 65536 00:23:18.921 }, 00:23:18.921 { 00:23:18.921 "name": null, 00:23:18.921 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:18.921 "is_configured": false, 00:23:18.921 "data_offset": 0, 00:23:18.921 "data_size": 
65536 00:23:18.921 }, 00:23:18.921 { 00:23:18.921 "name": null, 00:23:18.921 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:18.921 "is_configured": false, 00:23:18.921 "data_offset": 0, 00:23:18.921 "data_size": 65536 00:23:18.921 }, 00:23:18.921 { 00:23:18.921 "name": "BaseBdev4", 00:23:18.921 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:18.921 "is_configured": true, 00:23:18.921 "data_offset": 0, 00:23:18.921 "data_size": 65536 00:23:18.921 } 00:23:18.921 ] 00:23:18.921 }' 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.921 21:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.856 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.856 21:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:19.856 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:19.856 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:19.857 [2024-07-15 21:36:53.224002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.115 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.115 "name": "Existed_Raid", 00:23:20.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.115 "strip_size_kb": 64, 00:23:20.115 "state": "configuring", 00:23:20.115 "raid_level": "raid0", 00:23:20.115 "superblock": false, 00:23:20.115 "num_base_bdevs": 4, 00:23:20.115 "num_base_bdevs_discovered": 3, 00:23:20.115 "num_base_bdevs_operational": 4, 00:23:20.115 "base_bdevs_list": [ 00:23:20.115 { 00:23:20.115 "name": "BaseBdev1", 00:23:20.115 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:20.115 
"is_configured": true, 00:23:20.115 "data_offset": 0, 00:23:20.115 "data_size": 65536 00:23:20.115 }, 00:23:20.115 { 00:23:20.115 "name": null, 00:23:20.115 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:20.115 "is_configured": false, 00:23:20.115 "data_offset": 0, 00:23:20.115 "data_size": 65536 00:23:20.115 }, 00:23:20.115 { 00:23:20.115 "name": "BaseBdev3", 00:23:20.115 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:20.116 "is_configured": true, 00:23:20.116 "data_offset": 0, 00:23:20.116 "data_size": 65536 00:23:20.116 }, 00:23:20.116 { 00:23:20.116 "name": "BaseBdev4", 00:23:20.116 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:20.116 "is_configured": true, 00:23:20.116 "data_offset": 0, 00:23:20.116 "data_size": 65536 00:23:20.116 } 00:23:20.116 ] 00:23:20.116 }' 00:23:20.116 21:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.116 21:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.053 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.053 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:21.053 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:21.053 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:21.325 [2024-07-15 21:36:54.501786] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.325 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.585 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.585 "name": "Existed_Raid", 00:23:21.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.585 "strip_size_kb": 64, 00:23:21.585 "state": "configuring", 00:23:21.585 "raid_level": "raid0", 00:23:21.585 "superblock": false, 00:23:21.585 
"num_base_bdevs": 4, 00:23:21.585 "num_base_bdevs_discovered": 2, 00:23:21.585 "num_base_bdevs_operational": 4, 00:23:21.585 "base_bdevs_list": [ 00:23:21.585 { 00:23:21.585 "name": null, 00:23:21.585 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:21.585 "is_configured": false, 00:23:21.585 "data_offset": 0, 00:23:21.585 "data_size": 65536 00:23:21.585 }, 00:23:21.585 { 00:23:21.585 "name": null, 00:23:21.585 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:21.585 "is_configured": false, 00:23:21.585 "data_offset": 0, 00:23:21.585 "data_size": 65536 00:23:21.585 }, 00:23:21.585 { 00:23:21.585 "name": "BaseBdev3", 00:23:21.585 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:21.585 "is_configured": true, 00:23:21.585 "data_offset": 0, 00:23:21.585 "data_size": 65536 00:23:21.585 }, 00:23:21.585 { 00:23:21.585 "name": "BaseBdev4", 00:23:21.585 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:21.585 "is_configured": true, 00:23:21.585 "data_offset": 0, 00:23:21.585 "data_size": 65536 00:23:21.585 } 00:23:21.585 ] 00:23:21.585 }' 00:23:21.585 21:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.585 21:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.154 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:22.413 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:22.413 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:22.673 [2024-07-15 21:36:55.803992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.673 21:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.673 21:36:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.673 "name": "Existed_Raid", 00:23:22.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.673 "strip_size_kb": 64, 00:23:22.673 "state": "configuring", 00:23:22.673 "raid_level": "raid0", 00:23:22.673 "superblock": false, 00:23:22.673 "num_base_bdevs": 4, 00:23:22.673 "num_base_bdevs_discovered": 3, 00:23:22.673 "num_base_bdevs_operational": 4, 00:23:22.673 "base_bdevs_list": [ 00:23:22.673 { 00:23:22.673 "name": null, 00:23:22.673 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:22.673 "is_configured": false, 00:23:22.673 "data_offset": 0, 00:23:22.673 "data_size": 65536 00:23:22.673 }, 00:23:22.673 { 00:23:22.673 "name": "BaseBdev2", 00:23:22.673 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:22.673 "is_configured": true, 00:23:22.673 "data_offset": 0, 00:23:22.673 "data_size": 65536 00:23:22.673 }, 00:23:22.673 { 00:23:22.673 "name": "BaseBdev3", 00:23:22.673 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:22.673 "is_configured": true, 00:23:22.673 "data_offset": 0, 00:23:22.673 "data_size": 65536 00:23:22.673 }, 00:23:22.673 { 00:23:22.673 "name": "BaseBdev4", 00:23:22.673 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:22.673 "is_configured": true, 00:23:22.673 "data_offset": 0, 00:23:22.673 "data_size": 65536 00:23:22.673 } 00:23:22.673 ] 00:23:22.673 }' 00:23:22.673 21:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.673 21:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.612 21:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.612 21:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:23.612 21:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:23.612 21:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.612 21:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:23.871 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 23e65fea-74c7-4591-be3c-55506e0f1e1b 00:23:24.130 [2024-07-15 21:36:57.242710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:24.130 [2024-07-15 21:36:57.242754] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:23:24.130 [2024-07-15 21:36:57.242761] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:24.130 [2024-07-15 21:36:57.242877] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:24.130 [2024-07-15 21:36:57.243185] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:23:24.130 [2024-07-15 21:36:57.243206] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:23:24.130 [2024-07-15 21:36:57.243422] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.130 NewBaseBdev 00:23:24.130 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:23:24.130 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:24.130 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:24.130 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:24.130 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:24.130 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:24.130 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:24.130 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:24.389 [ 00:23:24.389 { 00:23:24.389 "name": "NewBaseBdev", 00:23:24.389 "aliases": [ 00:23:24.389 "23e65fea-74c7-4591-be3c-55506e0f1e1b" 00:23:24.389 ], 00:23:24.389 "product_name": "Malloc disk", 00:23:24.389 "block_size": 512, 00:23:24.389 "num_blocks": 65536, 00:23:24.389 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:24.389 "assigned_rate_limits": { 00:23:24.389 "rw_ios_per_sec": 0, 00:23:24.389 "rw_mbytes_per_sec": 0, 00:23:24.389 "r_mbytes_per_sec": 0, 00:23:24.389 "w_mbytes_per_sec": 0 00:23:24.389 }, 00:23:24.389 "claimed": true, 00:23:24.389 "claim_type": "exclusive_write", 00:23:24.389 "zoned": false, 00:23:24.389 "supported_io_types": { 00:23:24.389 "read": true, 00:23:24.389 "write": true, 00:23:24.389 "unmap": true, 00:23:24.389 "flush": true, 00:23:24.389 "reset": true, 00:23:24.389 "nvme_admin": false, 00:23:24.389 "nvme_io": false, 00:23:24.389 "nvme_io_md": false, 00:23:24.389 "write_zeroes": true, 00:23:24.389 "zcopy": true, 00:23:24.389 "get_zone_info": false, 00:23:24.389 "zone_management": false, 00:23:24.389 "zone_append": false, 00:23:24.389 "compare": false, 00:23:24.389 "compare_and_write": false, 00:23:24.389 "abort": true, 00:23:24.389 "seek_hole": false, 00:23:24.390 "seek_data": false, 00:23:24.390 "copy": true, 00:23:24.390 "nvme_iov_md": false 00:23:24.390 }, 00:23:24.390 "memory_domains": [ 00:23:24.390 { 00:23:24.390 "dma_device_id": "system", 00:23:24.390 "dma_device_type": 1 00:23:24.390 }, 00:23:24.390 { 00:23:24.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.390 "dma_device_type": 2 00:23:24.390 } 00:23:24.390 ], 00:23:24.390 "driver_specific": {} 00:23:24.390 } 00:23:24.390 ] 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.390 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.649 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.649 "name": "Existed_Raid", 00:23:24.649 "uuid": "6c592e0a-1fc0-45f1-91c4-7790fa57cec0", 00:23:24.649 "strip_size_kb": 64, 00:23:24.649 "state": "online", 00:23:24.649 "raid_level": "raid0", 00:23:24.649 "superblock": false, 00:23:24.649 "num_base_bdevs": 4, 00:23:24.649 "num_base_bdevs_discovered": 4, 00:23:24.649 "num_base_bdevs_operational": 4, 00:23:24.649 "base_bdevs_list": [ 00:23:24.649 { 00:23:24.649 "name": "NewBaseBdev", 00:23:24.649 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:24.649 "is_configured": true, 00:23:24.649 "data_offset": 0, 00:23:24.649 "data_size": 65536 00:23:24.649 }, 00:23:24.649 { 00:23:24.649 "name": "BaseBdev2", 00:23:24.649 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:24.649 "is_configured": true, 00:23:24.649 "data_offset": 0, 00:23:24.649 "data_size": 65536 00:23:24.649 }, 00:23:24.649 { 00:23:24.649 "name": "BaseBdev3", 00:23:24.649 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:24.649 "is_configured": true, 00:23:24.649 "data_offset": 0, 00:23:24.649 "data_size": 65536 00:23:24.649 }, 00:23:24.649 { 00:23:24.649 "name": "BaseBdev4", 00:23:24.649 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:24.649 "is_configured": true, 00:23:24.649 "data_offset": 0, 00:23:24.649 "data_size": 65536 00:23:24.649 } 00:23:24.649 ] 00:23:24.649 }' 00:23:24.649 21:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.649 21:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.217 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:25.217 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:25.217 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:25.217 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:25.217 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:25.217 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:25.217 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:25.217 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:25.476 [2024-07-15 21:36:58.641405] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:25.476 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:25.476 "name": "Existed_Raid", 00:23:25.476 "aliases": [ 00:23:25.476 
"6c592e0a-1fc0-45f1-91c4-7790fa57cec0" 00:23:25.476 ], 00:23:25.476 "product_name": "Raid Volume", 00:23:25.476 "block_size": 512, 00:23:25.476 "num_blocks": 262144, 00:23:25.476 "uuid": "6c592e0a-1fc0-45f1-91c4-7790fa57cec0", 00:23:25.476 "assigned_rate_limits": { 00:23:25.476 "rw_ios_per_sec": 0, 00:23:25.476 "rw_mbytes_per_sec": 0, 00:23:25.476 "r_mbytes_per_sec": 0, 00:23:25.476 "w_mbytes_per_sec": 0 00:23:25.476 }, 00:23:25.476 "claimed": false, 00:23:25.476 "zoned": false, 00:23:25.476 "supported_io_types": { 00:23:25.476 "read": true, 00:23:25.476 "write": true, 00:23:25.476 "unmap": true, 00:23:25.476 "flush": true, 00:23:25.476 "reset": true, 00:23:25.476 "nvme_admin": false, 00:23:25.476 "nvme_io": false, 00:23:25.476 "nvme_io_md": false, 00:23:25.476 "write_zeroes": true, 00:23:25.476 "zcopy": false, 00:23:25.476 "get_zone_info": false, 00:23:25.476 "zone_management": false, 00:23:25.476 "zone_append": false, 00:23:25.476 "compare": false, 00:23:25.476 "compare_and_write": false, 00:23:25.476 "abort": false, 00:23:25.476 "seek_hole": false, 00:23:25.476 "seek_data": false, 00:23:25.476 "copy": false, 00:23:25.476 "nvme_iov_md": false 00:23:25.476 }, 00:23:25.476 "memory_domains": [ 00:23:25.476 { 00:23:25.476 "dma_device_id": "system", 00:23:25.476 "dma_device_type": 1 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.476 "dma_device_type": 2 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "dma_device_id": "system", 00:23:25.476 "dma_device_type": 1 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.476 "dma_device_type": 2 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "dma_device_id": "system", 00:23:25.476 "dma_device_type": 1 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.476 "dma_device_type": 2 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "dma_device_id": "system", 00:23:25.476 "dma_device_type": 1 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.476 "dma_device_type": 2 00:23:25.476 } 00:23:25.476 ], 00:23:25.476 "driver_specific": { 00:23:25.476 "raid": { 00:23:25.476 "uuid": "6c592e0a-1fc0-45f1-91c4-7790fa57cec0", 00:23:25.476 "strip_size_kb": 64, 00:23:25.476 "state": "online", 00:23:25.476 "raid_level": "raid0", 00:23:25.476 "superblock": false, 00:23:25.476 "num_base_bdevs": 4, 00:23:25.476 "num_base_bdevs_discovered": 4, 00:23:25.476 "num_base_bdevs_operational": 4, 00:23:25.476 "base_bdevs_list": [ 00:23:25.476 { 00:23:25.476 "name": "NewBaseBdev", 00:23:25.476 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:25.476 "is_configured": true, 00:23:25.476 "data_offset": 0, 00:23:25.476 "data_size": 65536 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "name": "BaseBdev2", 00:23:25.476 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:25.476 "is_configured": true, 00:23:25.476 "data_offset": 0, 00:23:25.476 "data_size": 65536 00:23:25.476 }, 00:23:25.476 { 00:23:25.476 "name": "BaseBdev3", 00:23:25.476 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:25.476 "is_configured": true, 00:23:25.476 "data_offset": 0, 00:23:25.477 "data_size": 65536 00:23:25.477 }, 00:23:25.477 { 00:23:25.477 "name": "BaseBdev4", 00:23:25.477 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:25.477 "is_configured": true, 00:23:25.477 "data_offset": 0, 00:23:25.477 "data_size": 65536 00:23:25.477 } 00:23:25.477 ] 00:23:25.477 } 00:23:25.477 } 00:23:25.477 }' 00:23:25.477 21:36:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:25.477 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:25.477 BaseBdev2 00:23:25.477 BaseBdev3 00:23:25.477 BaseBdev4' 00:23:25.477 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:25.477 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:25.477 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:25.738 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:25.738 "name": "NewBaseBdev", 00:23:25.738 "aliases": [ 00:23:25.738 "23e65fea-74c7-4591-be3c-55506e0f1e1b" 00:23:25.738 ], 00:23:25.738 "product_name": "Malloc disk", 00:23:25.738 "block_size": 512, 00:23:25.738 "num_blocks": 65536, 00:23:25.738 "uuid": "23e65fea-74c7-4591-be3c-55506e0f1e1b", 00:23:25.738 "assigned_rate_limits": { 00:23:25.738 "rw_ios_per_sec": 0, 00:23:25.738 "rw_mbytes_per_sec": 0, 00:23:25.738 "r_mbytes_per_sec": 0, 00:23:25.738 "w_mbytes_per_sec": 0 00:23:25.738 }, 00:23:25.738 "claimed": true, 00:23:25.738 "claim_type": "exclusive_write", 00:23:25.738 "zoned": false, 00:23:25.738 "supported_io_types": { 00:23:25.738 "read": true, 00:23:25.738 "write": true, 00:23:25.738 "unmap": true, 00:23:25.738 "flush": true, 00:23:25.738 "reset": true, 00:23:25.738 "nvme_admin": false, 00:23:25.738 "nvme_io": false, 00:23:25.738 "nvme_io_md": false, 00:23:25.738 "write_zeroes": true, 00:23:25.738 "zcopy": true, 00:23:25.738 "get_zone_info": false, 00:23:25.738 "zone_management": false, 00:23:25.738 "zone_append": false, 00:23:25.738 "compare": false, 00:23:25.738 "compare_and_write": false, 00:23:25.738 "abort": true, 00:23:25.738 "seek_hole": false, 00:23:25.738 "seek_data": false, 00:23:25.738 "copy": true, 00:23:25.738 "nvme_iov_md": false 00:23:25.738 }, 00:23:25.738 "memory_domains": [ 00:23:25.738 { 00:23:25.738 "dma_device_id": "system", 00:23:25.738 "dma_device_type": 1 00:23:25.738 }, 00:23:25.738 { 00:23:25.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.738 "dma_device_type": 2 00:23:25.738 } 00:23:25.738 ], 00:23:25.738 "driver_specific": {} 00:23:25.738 }' 00:23:25.738 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:25.738 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:25.738 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:25.738 21:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.738 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:25.997 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:26.257 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:26.257 "name": "BaseBdev2", 00:23:26.257 "aliases": [ 00:23:26.257 "ed72f2c1-b384-408a-9556-1e03e7aac6b9" 00:23:26.257 ], 00:23:26.257 "product_name": "Malloc disk", 00:23:26.257 "block_size": 512, 00:23:26.257 "num_blocks": 65536, 00:23:26.257 "uuid": "ed72f2c1-b384-408a-9556-1e03e7aac6b9", 00:23:26.257 "assigned_rate_limits": { 00:23:26.257 "rw_ios_per_sec": 0, 00:23:26.257 "rw_mbytes_per_sec": 0, 00:23:26.257 "r_mbytes_per_sec": 0, 00:23:26.257 "w_mbytes_per_sec": 0 00:23:26.257 }, 00:23:26.257 "claimed": true, 00:23:26.257 "claim_type": "exclusive_write", 00:23:26.257 "zoned": false, 00:23:26.257 "supported_io_types": { 00:23:26.257 "read": true, 00:23:26.257 "write": true, 00:23:26.257 "unmap": true, 00:23:26.257 "flush": true, 00:23:26.257 "reset": true, 00:23:26.257 "nvme_admin": false, 00:23:26.257 "nvme_io": false, 00:23:26.257 "nvme_io_md": false, 00:23:26.257 "write_zeroes": true, 00:23:26.257 "zcopy": true, 00:23:26.257 "get_zone_info": false, 00:23:26.257 "zone_management": false, 00:23:26.257 "zone_append": false, 00:23:26.257 "compare": false, 00:23:26.257 "compare_and_write": false, 00:23:26.257 "abort": true, 00:23:26.257 "seek_hole": false, 00:23:26.257 "seek_data": false, 00:23:26.257 "copy": true, 00:23:26.257 "nvme_iov_md": false 00:23:26.257 }, 00:23:26.257 "memory_domains": [ 00:23:26.257 { 00:23:26.257 "dma_device_id": "system", 00:23:26.257 "dma_device_type": 1 00:23:26.257 }, 00:23:26.257 { 00:23:26.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.257 "dma_device_type": 2 00:23:26.257 } 00:23:26.257 ], 00:23:26.257 "driver_specific": {} 00:23:26.257 }' 00:23:26.257 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:26.257 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:26.516 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:26.516 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:26.516 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:26.516 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:26.516 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.516 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.516 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:26.516 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:26.775 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:26.775 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:26.775 21:36:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:26.775 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:26.775 21:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:27.032 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:27.032 "name": "BaseBdev3", 00:23:27.032 "aliases": [ 00:23:27.032 "17685da8-2e22-4752-9886-01ec735b4979" 00:23:27.032 ], 00:23:27.032 "product_name": "Malloc disk", 00:23:27.032 "block_size": 512, 00:23:27.032 "num_blocks": 65536, 00:23:27.032 "uuid": "17685da8-2e22-4752-9886-01ec735b4979", 00:23:27.032 "assigned_rate_limits": { 00:23:27.032 "rw_ios_per_sec": 0, 00:23:27.032 "rw_mbytes_per_sec": 0, 00:23:27.032 "r_mbytes_per_sec": 0, 00:23:27.032 "w_mbytes_per_sec": 0 00:23:27.032 }, 00:23:27.032 "claimed": true, 00:23:27.032 "claim_type": "exclusive_write", 00:23:27.032 "zoned": false, 00:23:27.032 "supported_io_types": { 00:23:27.032 "read": true, 00:23:27.032 "write": true, 00:23:27.032 "unmap": true, 00:23:27.032 "flush": true, 00:23:27.032 "reset": true, 00:23:27.032 "nvme_admin": false, 00:23:27.032 "nvme_io": false, 00:23:27.032 "nvme_io_md": false, 00:23:27.032 "write_zeroes": true, 00:23:27.032 "zcopy": true, 00:23:27.032 "get_zone_info": false, 00:23:27.032 "zone_management": false, 00:23:27.032 "zone_append": false, 00:23:27.032 "compare": false, 00:23:27.032 "compare_and_write": false, 00:23:27.032 "abort": true, 00:23:27.032 "seek_hole": false, 00:23:27.032 "seek_data": false, 00:23:27.032 "copy": true, 00:23:27.032 "nvme_iov_md": false 00:23:27.032 }, 00:23:27.032 "memory_domains": [ 00:23:27.032 { 00:23:27.032 "dma_device_id": "system", 00:23:27.032 "dma_device_type": 1 00:23:27.032 }, 00:23:27.032 { 00:23:27.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.032 "dma_device_type": 2 00:23:27.032 } 00:23:27.032 ], 00:23:27.032 "driver_specific": {} 00:23:27.032 }' 00:23:27.032 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:27.032 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:27.032 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:27.032 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:27.032 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:27.032 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:27.032 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:27.291 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:27.291 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:27.291 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:27.291 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:27.291 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:27.291 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:27.291 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:27.291 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:27.550 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:27.550 "name": "BaseBdev4", 00:23:27.550 "aliases": [ 00:23:27.550 "9f959c40-fade-432d-91c2-ad1684ecf760" 00:23:27.550 ], 00:23:27.550 "product_name": "Malloc disk", 00:23:27.550 "block_size": 512, 00:23:27.550 "num_blocks": 65536, 00:23:27.550 "uuid": "9f959c40-fade-432d-91c2-ad1684ecf760", 00:23:27.550 "assigned_rate_limits": { 00:23:27.550 "rw_ios_per_sec": 0, 00:23:27.550 "rw_mbytes_per_sec": 0, 00:23:27.550 "r_mbytes_per_sec": 0, 00:23:27.550 "w_mbytes_per_sec": 0 00:23:27.550 }, 00:23:27.550 "claimed": true, 00:23:27.550 "claim_type": "exclusive_write", 00:23:27.550 "zoned": false, 00:23:27.550 "supported_io_types": { 00:23:27.550 "read": true, 00:23:27.550 "write": true, 00:23:27.550 "unmap": true, 00:23:27.550 "flush": true, 00:23:27.550 "reset": true, 00:23:27.550 "nvme_admin": false, 00:23:27.550 "nvme_io": false, 00:23:27.550 "nvme_io_md": false, 00:23:27.550 "write_zeroes": true, 00:23:27.550 "zcopy": true, 00:23:27.550 "get_zone_info": false, 00:23:27.550 "zone_management": false, 00:23:27.550 "zone_append": false, 00:23:27.550 "compare": false, 00:23:27.550 "compare_and_write": false, 00:23:27.550 "abort": true, 00:23:27.550 "seek_hole": false, 00:23:27.550 "seek_data": false, 00:23:27.550 "copy": true, 00:23:27.550 "nvme_iov_md": false 00:23:27.550 }, 00:23:27.550 "memory_domains": [ 00:23:27.550 { 00:23:27.550 "dma_device_id": "system", 00:23:27.550 "dma_device_type": 1 00:23:27.550 }, 00:23:27.550 { 00:23:27.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.550 "dma_device_type": 2 00:23:27.550 } 00:23:27.550 ], 00:23:27.550 "driver_specific": {} 00:23:27.550 }' 00:23:27.550 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:27.550 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:27.550 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:27.550 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:27.809 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:27.809 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:27.809 21:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:27.809 21:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:27.809 21:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:27.809 21:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:27.809 21:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:28.068 [2024-07-15 21:37:01.392668] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:28.068 [2024-07-15 21:37:01.392709] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:23:28.068 [2024-07-15 21:37:01.392783] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:28.068 [2024-07-15 21:37:01.392842] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:28.068 [2024-07-15 21:37:01.392849] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 135136 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 135136 ']' 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 135136 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135136 00:23:28.068 killing process with pid 135136 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135136' 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 135136 00:23:28.068 21:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 135136 00:23:28.068 [2024-07-15 21:37:01.430143] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:28.636 [2024-07-15 21:37:01.793955] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:30.048 ************************************ 00:23:30.048 END TEST raid_state_function_test 00:23:30.048 ************************************ 00:23:30.048 21:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:30.048 00:23:30.048 real 0m30.874s 00:23:30.048 user 0m57.300s 00:23:30.048 sys 0m3.813s 00:23:30.048 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:30.048 21:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.048 21:37:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:30.048 21:37:03 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:23:30.048 21:37:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:30.048 21:37:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.048 21:37:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:30.048 ************************************ 00:23:30.048 START TEST raid_state_function_test_sb 00:23:30.048 ************************************ 00:23:30.048 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:23:30.048 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:23:30.048 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:30.048 21:37:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:30.048 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:30.048 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:30.048 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:30.048 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:30.048 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=136264 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 136264' 00:23:30.049 Process raid pid: 136264 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # 
waitforlisten 136264 /var/tmp/spdk-raid.sock 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 136264 ']' 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.049 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.049 [2024-07-15 21:37:03.139370] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:23:30.049 [2024-07-15 21:37:03.139514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.049 [2024-07-15 21:37:03.299566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.306 [2024-07-15 21:37:03.486140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.306 [2024-07-15 21:37:03.664883] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.873 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.873 21:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:23:30.873 21:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:30.874 [2024-07-15 21:37:04.129998] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:30.874 [2024-07-15 21:37:04.130083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:30.874 [2024-07-15 21:37:04.130093] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:30.874 [2024-07-15 21:37:04.130112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:30.874 [2024-07-15 21:37:04.130136] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:30.874 [2024-07-15 21:37:04.130149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:30.874 [2024-07-15 21:37:04.130156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:30.874 [2024-07-15 21:37:04.130175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.874 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.132 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.132 "name": "Existed_Raid", 00:23:31.132 "uuid": "affdf08f-dd4b-46b3-91de-94eeba248a9d", 00:23:31.132 "strip_size_kb": 64, 00:23:31.132 "state": "configuring", 00:23:31.132 "raid_level": "raid0", 00:23:31.132 "superblock": true, 00:23:31.132 "num_base_bdevs": 4, 00:23:31.132 "num_base_bdevs_discovered": 0, 00:23:31.132 "num_base_bdevs_operational": 4, 00:23:31.132 "base_bdevs_list": [ 00:23:31.132 { 00:23:31.132 "name": "BaseBdev1", 00:23:31.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.132 "is_configured": false, 00:23:31.132 "data_offset": 0, 00:23:31.132 "data_size": 0 00:23:31.132 }, 00:23:31.132 { 00:23:31.132 "name": "BaseBdev2", 00:23:31.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.132 "is_configured": false, 00:23:31.132 "data_offset": 0, 00:23:31.132 "data_size": 0 00:23:31.132 }, 00:23:31.132 { 00:23:31.132 "name": "BaseBdev3", 00:23:31.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.132 "is_configured": false, 00:23:31.132 "data_offset": 0, 00:23:31.132 "data_size": 0 00:23:31.132 }, 00:23:31.132 { 00:23:31.132 "name": "BaseBdev4", 00:23:31.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.132 "is_configured": false, 00:23:31.132 "data_offset": 0, 00:23:31.132 "data_size": 0 00:23:31.132 } 00:23:31.132 ] 00:23:31.132 }' 00:23:31.132 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.132 21:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.699 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:31.699 [2024-07-15 21:37:04.952414] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:31.699 [2024-07-15 21:37:04.952460] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:31.699 21:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:31.957 [2024-07-15 
21:37:05.108181] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:31.957 [2024-07-15 21:37:05.108244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:31.957 [2024-07-15 21:37:05.108253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:31.957 [2024-07-15 21:37:05.108288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:31.957 [2024-07-15 21:37:05.108296] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:31.957 [2024-07-15 21:37:05.108318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:31.957 [2024-07-15 21:37:05.108327] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:31.957 [2024-07-15 21:37:05.108344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:31.957 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:31.957 [2024-07-15 21:37:05.326547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.957 BaseBdev1 00:23:32.214 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:32.214 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:32.214 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:32.214 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:32.214 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:32.214 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:32.214 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:32.214 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:32.472 [ 00:23:32.472 { 00:23:32.472 "name": "BaseBdev1", 00:23:32.472 "aliases": [ 00:23:32.472 "20367526-dcd5-48f3-a649-0016a55f767d" 00:23:32.472 ], 00:23:32.472 "product_name": "Malloc disk", 00:23:32.472 "block_size": 512, 00:23:32.472 "num_blocks": 65536, 00:23:32.472 "uuid": "20367526-dcd5-48f3-a649-0016a55f767d", 00:23:32.472 "assigned_rate_limits": { 00:23:32.472 "rw_ios_per_sec": 0, 00:23:32.472 "rw_mbytes_per_sec": 0, 00:23:32.472 "r_mbytes_per_sec": 0, 00:23:32.472 "w_mbytes_per_sec": 0 00:23:32.472 }, 00:23:32.472 "claimed": true, 00:23:32.472 "claim_type": "exclusive_write", 00:23:32.472 "zoned": false, 00:23:32.472 "supported_io_types": { 00:23:32.472 "read": true, 00:23:32.472 "write": true, 00:23:32.472 "unmap": true, 00:23:32.472 "flush": true, 00:23:32.472 "reset": true, 00:23:32.472 "nvme_admin": false, 00:23:32.472 "nvme_io": false, 00:23:32.472 "nvme_io_md": false, 00:23:32.472 "write_zeroes": true, 00:23:32.472 "zcopy": true, 00:23:32.472 "get_zone_info": false, 00:23:32.472 "zone_management": false, 00:23:32.472 "zone_append": false, 00:23:32.472 
"compare": false, 00:23:32.472 "compare_and_write": false, 00:23:32.472 "abort": true, 00:23:32.472 "seek_hole": false, 00:23:32.472 "seek_data": false, 00:23:32.472 "copy": true, 00:23:32.472 "nvme_iov_md": false 00:23:32.472 }, 00:23:32.472 "memory_domains": [ 00:23:32.472 { 00:23:32.472 "dma_device_id": "system", 00:23:32.472 "dma_device_type": 1 00:23:32.472 }, 00:23:32.472 { 00:23:32.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.472 "dma_device_type": 2 00:23:32.472 } 00:23:32.472 ], 00:23:32.472 "driver_specific": {} 00:23:32.472 } 00:23:32.472 ] 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.472 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:32.752 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:32.752 "name": "Existed_Raid", 00:23:32.752 "uuid": "405d7c9b-bc63-422d-8e38-72f6c307f68f", 00:23:32.752 "strip_size_kb": 64, 00:23:32.752 "state": "configuring", 00:23:32.752 "raid_level": "raid0", 00:23:32.752 "superblock": true, 00:23:32.752 "num_base_bdevs": 4, 00:23:32.752 "num_base_bdevs_discovered": 1, 00:23:32.752 "num_base_bdevs_operational": 4, 00:23:32.752 "base_bdevs_list": [ 00:23:32.752 { 00:23:32.752 "name": "BaseBdev1", 00:23:32.752 "uuid": "20367526-dcd5-48f3-a649-0016a55f767d", 00:23:32.752 "is_configured": true, 00:23:32.752 "data_offset": 2048, 00:23:32.752 "data_size": 63488 00:23:32.752 }, 00:23:32.752 { 00:23:32.752 "name": "BaseBdev2", 00:23:32.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.752 "is_configured": false, 00:23:32.752 "data_offset": 0, 00:23:32.752 "data_size": 0 00:23:32.752 }, 00:23:32.752 { 00:23:32.752 "name": "BaseBdev3", 00:23:32.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.752 "is_configured": false, 00:23:32.752 "data_offset": 0, 00:23:32.752 "data_size": 0 00:23:32.752 }, 00:23:32.752 { 00:23:32.752 "name": "BaseBdev4", 00:23:32.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.752 "is_configured": false, 00:23:32.752 "data_offset": 0, 00:23:32.752 
"data_size": 0 00:23:32.752 } 00:23:32.752 ] 00:23:32.752 }' 00:23:32.752 21:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:32.752 21:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.319 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:33.319 [2024-07-15 21:37:06.644439] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:33.319 [2024-07-15 21:37:06.644499] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:23:33.319 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:33.578 [2024-07-15 21:37:06.836185] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:33.578 [2024-07-15 21:37:06.837868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:33.578 [2024-07-15 21:37:06.837919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:33.578 [2024-07-15 21:37:06.837927] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:33.578 [2024-07-15 21:37:06.837949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:33.578 [2024-07-15 21:37:06.837956] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:33.578 [2024-07-15 21:37:06.837996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.578 21:37:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.837 21:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:33.837 "name": "Existed_Raid", 00:23:33.837 "uuid": "8d89cd81-aedf-4b8e-943f-eeac41741a14", 00:23:33.837 "strip_size_kb": 64, 00:23:33.837 "state": "configuring", 00:23:33.837 "raid_level": "raid0", 00:23:33.837 "superblock": true, 00:23:33.837 "num_base_bdevs": 4, 00:23:33.837 "num_base_bdevs_discovered": 1, 00:23:33.837 "num_base_bdevs_operational": 4, 00:23:33.837 "base_bdevs_list": [ 00:23:33.837 { 00:23:33.837 "name": "BaseBdev1", 00:23:33.837 "uuid": "20367526-dcd5-48f3-a649-0016a55f767d", 00:23:33.837 "is_configured": true, 00:23:33.837 "data_offset": 2048, 00:23:33.837 "data_size": 63488 00:23:33.838 }, 00:23:33.838 { 00:23:33.838 "name": "BaseBdev2", 00:23:33.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.838 "is_configured": false, 00:23:33.838 "data_offset": 0, 00:23:33.838 "data_size": 0 00:23:33.838 }, 00:23:33.838 { 00:23:33.838 "name": "BaseBdev3", 00:23:33.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.838 "is_configured": false, 00:23:33.838 "data_offset": 0, 00:23:33.838 "data_size": 0 00:23:33.838 }, 00:23:33.838 { 00:23:33.838 "name": "BaseBdev4", 00:23:33.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.838 "is_configured": false, 00:23:33.838 "data_offset": 0, 00:23:33.838 "data_size": 0 00:23:33.838 } 00:23:33.838 ] 00:23:33.838 }' 00:23:33.838 21:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:33.838 21:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.404 21:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.664 [2024-07-15 21:37:07.851850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.664 BaseBdev2 00:23:34.664 21:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:34.664 21:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:34.664 21:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:34.664 21:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:34.664 21:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:34.664 21:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:34.664 21:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.923 21:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:34.923 [ 00:23:34.923 { 00:23:34.923 "name": "BaseBdev2", 00:23:34.923 "aliases": [ 00:23:34.923 "2510fb95-587a-42df-8efb-ac547d7c8d2f" 00:23:34.923 ], 00:23:34.923 "product_name": "Malloc disk", 00:23:34.923 "block_size": 512, 00:23:34.923 "num_blocks": 65536, 00:23:34.923 "uuid": "2510fb95-587a-42df-8efb-ac547d7c8d2f", 00:23:34.923 "assigned_rate_limits": { 00:23:34.923 "rw_ios_per_sec": 0, 
00:23:34.923 "rw_mbytes_per_sec": 0, 00:23:34.923 "r_mbytes_per_sec": 0, 00:23:34.923 "w_mbytes_per_sec": 0 00:23:34.923 }, 00:23:34.923 "claimed": true, 00:23:34.923 "claim_type": "exclusive_write", 00:23:34.923 "zoned": false, 00:23:34.923 "supported_io_types": { 00:23:34.923 "read": true, 00:23:34.923 "write": true, 00:23:34.923 "unmap": true, 00:23:34.923 "flush": true, 00:23:34.923 "reset": true, 00:23:34.923 "nvme_admin": false, 00:23:34.923 "nvme_io": false, 00:23:34.923 "nvme_io_md": false, 00:23:34.923 "write_zeroes": true, 00:23:34.923 "zcopy": true, 00:23:34.923 "get_zone_info": false, 00:23:34.923 "zone_management": false, 00:23:34.923 "zone_append": false, 00:23:34.923 "compare": false, 00:23:34.923 "compare_and_write": false, 00:23:34.923 "abort": true, 00:23:34.923 "seek_hole": false, 00:23:34.923 "seek_data": false, 00:23:34.923 "copy": true, 00:23:34.923 "nvme_iov_md": false 00:23:34.923 }, 00:23:34.923 "memory_domains": [ 00:23:34.923 { 00:23:34.923 "dma_device_id": "system", 00:23:34.923 "dma_device_type": 1 00:23:34.923 }, 00:23:34.923 { 00:23:34.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.923 "dma_device_type": 2 00:23:34.923 } 00:23:34.923 ], 00:23:34.923 "driver_specific": {} 00:23:34.923 } 00:23:34.923 ] 00:23:34.923 21:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:34.923 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:34.923 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:34.923 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:34.923 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:34.923 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.924 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.183 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.183 "name": "Existed_Raid", 00:23:35.183 "uuid": "8d89cd81-aedf-4b8e-943f-eeac41741a14", 00:23:35.183 "strip_size_kb": 64, 00:23:35.183 "state": "configuring", 00:23:35.183 "raid_level": "raid0", 00:23:35.183 "superblock": true, 00:23:35.183 "num_base_bdevs": 4, 00:23:35.183 "num_base_bdevs_discovered": 2, 00:23:35.183 
"num_base_bdevs_operational": 4, 00:23:35.183 "base_bdevs_list": [ 00:23:35.183 { 00:23:35.183 "name": "BaseBdev1", 00:23:35.183 "uuid": "20367526-dcd5-48f3-a649-0016a55f767d", 00:23:35.183 "is_configured": true, 00:23:35.183 "data_offset": 2048, 00:23:35.183 "data_size": 63488 00:23:35.183 }, 00:23:35.183 { 00:23:35.183 "name": "BaseBdev2", 00:23:35.183 "uuid": "2510fb95-587a-42df-8efb-ac547d7c8d2f", 00:23:35.183 "is_configured": true, 00:23:35.183 "data_offset": 2048, 00:23:35.183 "data_size": 63488 00:23:35.183 }, 00:23:35.183 { 00:23:35.183 "name": "BaseBdev3", 00:23:35.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.183 "is_configured": false, 00:23:35.183 "data_offset": 0, 00:23:35.183 "data_size": 0 00:23:35.183 }, 00:23:35.183 { 00:23:35.183 "name": "BaseBdev4", 00:23:35.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.183 "is_configured": false, 00:23:35.183 "data_offset": 0, 00:23:35.183 "data_size": 0 00:23:35.183 } 00:23:35.183 ] 00:23:35.183 }' 00:23:35.183 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.183 21:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.751 21:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:36.010 [2024-07-15 21:37:09.203507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:36.010 BaseBdev3 00:23:36.010 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:36.010 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:36.010 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:36.011 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:36.011 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:36.011 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:36.011 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:36.269 [ 00:23:36.269 { 00:23:36.269 "name": "BaseBdev3", 00:23:36.269 "aliases": [ 00:23:36.269 "5f17072a-a1f0-44b5-9791-1dd8f606bb2c" 00:23:36.269 ], 00:23:36.269 "product_name": "Malloc disk", 00:23:36.269 "block_size": 512, 00:23:36.269 "num_blocks": 65536, 00:23:36.269 "uuid": "5f17072a-a1f0-44b5-9791-1dd8f606bb2c", 00:23:36.269 "assigned_rate_limits": { 00:23:36.269 "rw_ios_per_sec": 0, 00:23:36.269 "rw_mbytes_per_sec": 0, 00:23:36.269 "r_mbytes_per_sec": 0, 00:23:36.269 "w_mbytes_per_sec": 0 00:23:36.269 }, 00:23:36.269 "claimed": true, 00:23:36.269 "claim_type": "exclusive_write", 00:23:36.269 "zoned": false, 00:23:36.269 "supported_io_types": { 00:23:36.269 "read": true, 00:23:36.269 "write": true, 00:23:36.269 "unmap": true, 00:23:36.269 "flush": true, 00:23:36.269 "reset": true, 00:23:36.269 "nvme_admin": false, 00:23:36.269 "nvme_io": false, 00:23:36.269 "nvme_io_md": false, 00:23:36.269 
"write_zeroes": true, 00:23:36.269 "zcopy": true, 00:23:36.269 "get_zone_info": false, 00:23:36.269 "zone_management": false, 00:23:36.269 "zone_append": false, 00:23:36.269 "compare": false, 00:23:36.269 "compare_and_write": false, 00:23:36.269 "abort": true, 00:23:36.269 "seek_hole": false, 00:23:36.269 "seek_data": false, 00:23:36.269 "copy": true, 00:23:36.269 "nvme_iov_md": false 00:23:36.269 }, 00:23:36.269 "memory_domains": [ 00:23:36.269 { 00:23:36.269 "dma_device_id": "system", 00:23:36.269 "dma_device_type": 1 00:23:36.269 }, 00:23:36.269 { 00:23:36.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.269 "dma_device_type": 2 00:23:36.269 } 00:23:36.269 ], 00:23:36.269 "driver_specific": {} 00:23:36.269 } 00:23:36.269 ] 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.269 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.528 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.528 "name": "Existed_Raid", 00:23:36.528 "uuid": "8d89cd81-aedf-4b8e-943f-eeac41741a14", 00:23:36.528 "strip_size_kb": 64, 00:23:36.528 "state": "configuring", 00:23:36.528 "raid_level": "raid0", 00:23:36.528 "superblock": true, 00:23:36.528 "num_base_bdevs": 4, 00:23:36.528 "num_base_bdevs_discovered": 3, 00:23:36.528 "num_base_bdevs_operational": 4, 00:23:36.528 "base_bdevs_list": [ 00:23:36.528 { 00:23:36.528 "name": "BaseBdev1", 00:23:36.528 "uuid": "20367526-dcd5-48f3-a649-0016a55f767d", 00:23:36.528 "is_configured": true, 00:23:36.528 "data_offset": 2048, 00:23:36.528 "data_size": 63488 00:23:36.528 }, 00:23:36.528 { 00:23:36.528 "name": "BaseBdev2", 00:23:36.528 "uuid": "2510fb95-587a-42df-8efb-ac547d7c8d2f", 00:23:36.528 "is_configured": true, 00:23:36.528 "data_offset": 2048, 00:23:36.528 "data_size": 63488 00:23:36.528 }, 00:23:36.528 { 
00:23:36.528 "name": "BaseBdev3", 00:23:36.528 "uuid": "5f17072a-a1f0-44b5-9791-1dd8f606bb2c", 00:23:36.528 "is_configured": true, 00:23:36.528 "data_offset": 2048, 00:23:36.528 "data_size": 63488 00:23:36.528 }, 00:23:36.528 { 00:23:36.528 "name": "BaseBdev4", 00:23:36.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.528 "is_configured": false, 00:23:36.528 "data_offset": 0, 00:23:36.528 "data_size": 0 00:23:36.528 } 00:23:36.528 ] 00:23:36.528 }' 00:23:36.528 21:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.528 21:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.097 21:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:37.356 [2024-07-15 21:37:10.672996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:37.356 [2024-07-15 21:37:10.673213] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:23:37.356 [2024-07-15 21:37:10.673224] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:37.356 [2024-07-15 21:37:10.673381] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:37.356 BaseBdev4 00:23:37.356 [2024-07-15 21:37:10.673664] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:23:37.356 [2024-07-15 21:37:10.673675] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:23:37.356 [2024-07-15 21:37:10.673830] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.356 21:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:37.356 21:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:37.356 21:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:37.356 21:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:37.356 21:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:37.356 21:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:37.356 21:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:37.615 21:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:37.873 [ 00:23:37.873 { 00:23:37.873 "name": "BaseBdev4", 00:23:37.873 "aliases": [ 00:23:37.873 "6ef8bf38-cba7-4327-94a7-cbb7f0e28862" 00:23:37.873 ], 00:23:37.873 "product_name": "Malloc disk", 00:23:37.873 "block_size": 512, 00:23:37.873 "num_blocks": 65536, 00:23:37.873 "uuid": "6ef8bf38-cba7-4327-94a7-cbb7f0e28862", 00:23:37.873 "assigned_rate_limits": { 00:23:37.873 "rw_ios_per_sec": 0, 00:23:37.873 "rw_mbytes_per_sec": 0, 00:23:37.873 "r_mbytes_per_sec": 0, 00:23:37.873 "w_mbytes_per_sec": 0 00:23:37.873 }, 00:23:37.873 "claimed": true, 00:23:37.873 "claim_type": "exclusive_write", 00:23:37.873 "zoned": false, 00:23:37.873 "supported_io_types": { 
00:23:37.873 "read": true, 00:23:37.873 "write": true, 00:23:37.873 "unmap": true, 00:23:37.873 "flush": true, 00:23:37.873 "reset": true, 00:23:37.873 "nvme_admin": false, 00:23:37.873 "nvme_io": false, 00:23:37.873 "nvme_io_md": false, 00:23:37.873 "write_zeroes": true, 00:23:37.873 "zcopy": true, 00:23:37.873 "get_zone_info": false, 00:23:37.873 "zone_management": false, 00:23:37.873 "zone_append": false, 00:23:37.873 "compare": false, 00:23:37.873 "compare_and_write": false, 00:23:37.873 "abort": true, 00:23:37.873 "seek_hole": false, 00:23:37.873 "seek_data": false, 00:23:37.873 "copy": true, 00:23:37.873 "nvme_iov_md": false 00:23:37.873 }, 00:23:37.873 "memory_domains": [ 00:23:37.873 { 00:23:37.873 "dma_device_id": "system", 00:23:37.873 "dma_device_type": 1 00:23:37.873 }, 00:23:37.873 { 00:23:37.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.873 "dma_device_type": 2 00:23:37.873 } 00:23:37.873 ], 00:23:37.873 "driver_specific": {} 00:23:37.873 } 00:23:37.873 ] 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.873 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.874 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.874 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.156 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.156 "name": "Existed_Raid", 00:23:38.156 "uuid": "8d89cd81-aedf-4b8e-943f-eeac41741a14", 00:23:38.156 "strip_size_kb": 64, 00:23:38.156 "state": "online", 00:23:38.156 "raid_level": "raid0", 00:23:38.156 "superblock": true, 00:23:38.156 "num_base_bdevs": 4, 00:23:38.156 "num_base_bdevs_discovered": 4, 00:23:38.156 "num_base_bdevs_operational": 4, 00:23:38.156 "base_bdevs_list": [ 00:23:38.156 { 00:23:38.156 "name": "BaseBdev1", 00:23:38.156 "uuid": "20367526-dcd5-48f3-a649-0016a55f767d", 00:23:38.156 "is_configured": true, 00:23:38.156 "data_offset": 2048, 00:23:38.156 "data_size": 63488 00:23:38.156 }, 00:23:38.156 
{ 00:23:38.156 "name": "BaseBdev2", 00:23:38.156 "uuid": "2510fb95-587a-42df-8efb-ac547d7c8d2f", 00:23:38.156 "is_configured": true, 00:23:38.156 "data_offset": 2048, 00:23:38.156 "data_size": 63488 00:23:38.156 }, 00:23:38.156 { 00:23:38.156 "name": "BaseBdev3", 00:23:38.156 "uuid": "5f17072a-a1f0-44b5-9791-1dd8f606bb2c", 00:23:38.156 "is_configured": true, 00:23:38.156 "data_offset": 2048, 00:23:38.156 "data_size": 63488 00:23:38.156 }, 00:23:38.156 { 00:23:38.156 "name": "BaseBdev4", 00:23:38.156 "uuid": "6ef8bf38-cba7-4327-94a7-cbb7f0e28862", 00:23:38.157 "is_configured": true, 00:23:38.157 "data_offset": 2048, 00:23:38.157 "data_size": 63488 00:23:38.157 } 00:23:38.157 ] 00:23:38.157 }' 00:23:38.157 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.157 21:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.724 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:38.724 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:38.724 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:38.724 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:38.724 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:38.724 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:38.724 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:38.724 21:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:38.724 [2024-07-15 21:37:12.023020] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:38.724 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:38.724 "name": "Existed_Raid", 00:23:38.724 "aliases": [ 00:23:38.724 "8d89cd81-aedf-4b8e-943f-eeac41741a14" 00:23:38.724 ], 00:23:38.724 "product_name": "Raid Volume", 00:23:38.724 "block_size": 512, 00:23:38.724 "num_blocks": 253952, 00:23:38.724 "uuid": "8d89cd81-aedf-4b8e-943f-eeac41741a14", 00:23:38.724 "assigned_rate_limits": { 00:23:38.724 "rw_ios_per_sec": 0, 00:23:38.724 "rw_mbytes_per_sec": 0, 00:23:38.724 "r_mbytes_per_sec": 0, 00:23:38.724 "w_mbytes_per_sec": 0 00:23:38.724 }, 00:23:38.724 "claimed": false, 00:23:38.724 "zoned": false, 00:23:38.724 "supported_io_types": { 00:23:38.724 "read": true, 00:23:38.724 "write": true, 00:23:38.724 "unmap": true, 00:23:38.724 "flush": true, 00:23:38.724 "reset": true, 00:23:38.724 "nvme_admin": false, 00:23:38.724 "nvme_io": false, 00:23:38.724 "nvme_io_md": false, 00:23:38.724 "write_zeroes": true, 00:23:38.724 "zcopy": false, 00:23:38.724 "get_zone_info": false, 00:23:38.724 "zone_management": false, 00:23:38.724 "zone_append": false, 00:23:38.724 "compare": false, 00:23:38.724 "compare_and_write": false, 00:23:38.724 "abort": false, 00:23:38.724 "seek_hole": false, 00:23:38.724 "seek_data": false, 00:23:38.724 "copy": false, 00:23:38.724 "nvme_iov_md": false 00:23:38.724 }, 00:23:38.724 "memory_domains": [ 00:23:38.724 { 00:23:38.724 "dma_device_id": "system", 00:23:38.724 "dma_device_type": 1 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.724 "dma_device_type": 2 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "dma_device_id": "system", 00:23:38.724 "dma_device_type": 1 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.724 "dma_device_type": 2 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "dma_device_id": "system", 00:23:38.724 "dma_device_type": 1 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.724 "dma_device_type": 2 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "dma_device_id": "system", 00:23:38.724 "dma_device_type": 1 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.724 "dma_device_type": 2 00:23:38.724 } 00:23:38.724 ], 00:23:38.724 "driver_specific": { 00:23:38.724 "raid": { 00:23:38.724 "uuid": "8d89cd81-aedf-4b8e-943f-eeac41741a14", 00:23:38.724 "strip_size_kb": 64, 00:23:38.724 "state": "online", 00:23:38.724 "raid_level": "raid0", 00:23:38.724 "superblock": true, 00:23:38.724 "num_base_bdevs": 4, 00:23:38.724 "num_base_bdevs_discovered": 4, 00:23:38.724 "num_base_bdevs_operational": 4, 00:23:38.724 "base_bdevs_list": [ 00:23:38.724 { 00:23:38.724 "name": "BaseBdev1", 00:23:38.724 "uuid": "20367526-dcd5-48f3-a649-0016a55f767d", 00:23:38.724 "is_configured": true, 00:23:38.724 "data_offset": 2048, 00:23:38.724 "data_size": 63488 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "name": "BaseBdev2", 00:23:38.724 "uuid": "2510fb95-587a-42df-8efb-ac547d7c8d2f", 00:23:38.724 "is_configured": true, 00:23:38.724 "data_offset": 2048, 00:23:38.724 "data_size": 63488 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "name": "BaseBdev3", 00:23:38.724 "uuid": "5f17072a-a1f0-44b5-9791-1dd8f606bb2c", 00:23:38.724 "is_configured": true, 00:23:38.724 "data_offset": 2048, 00:23:38.724 "data_size": 63488 00:23:38.724 }, 00:23:38.724 { 00:23:38.724 "name": "BaseBdev4", 00:23:38.724 "uuid": "6ef8bf38-cba7-4327-94a7-cbb7f0e28862", 00:23:38.724 "is_configured": true, 00:23:38.724 "data_offset": 2048, 00:23:38.724 "data_size": 63488 00:23:38.724 } 00:23:38.724 ] 00:23:38.724 } 00:23:38.724 } 00:23:38.724 }' 00:23:38.724 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:38.983 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:38.983 BaseBdev2 00:23:38.983 BaseBdev3 00:23:38.983 BaseBdev4' 00:23:38.983 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:38.983 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:38.983 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:38.983 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:38.983 "name": "BaseBdev1", 00:23:38.983 "aliases": [ 00:23:38.983 "20367526-dcd5-48f3-a649-0016a55f767d" 00:23:38.983 ], 00:23:38.983 "product_name": "Malloc disk", 00:23:38.983 "block_size": 512, 00:23:38.983 "num_blocks": 65536, 00:23:38.983 "uuid": "20367526-dcd5-48f3-a649-0016a55f767d", 00:23:38.983 "assigned_rate_limits": { 00:23:38.983 "rw_ios_per_sec": 0, 00:23:38.983 "rw_mbytes_per_sec": 0, 00:23:38.983 "r_mbytes_per_sec": 0, 00:23:38.984 "w_mbytes_per_sec": 0 00:23:38.984 }, 00:23:38.984 
"claimed": true, 00:23:38.984 "claim_type": "exclusive_write", 00:23:38.984 "zoned": false, 00:23:38.984 "supported_io_types": { 00:23:38.984 "read": true, 00:23:38.984 "write": true, 00:23:38.984 "unmap": true, 00:23:38.984 "flush": true, 00:23:38.984 "reset": true, 00:23:38.984 "nvme_admin": false, 00:23:38.984 "nvme_io": false, 00:23:38.984 "nvme_io_md": false, 00:23:38.984 "write_zeroes": true, 00:23:38.984 "zcopy": true, 00:23:38.984 "get_zone_info": false, 00:23:38.984 "zone_management": false, 00:23:38.984 "zone_append": false, 00:23:38.984 "compare": false, 00:23:38.984 "compare_and_write": false, 00:23:38.984 "abort": true, 00:23:38.984 "seek_hole": false, 00:23:38.984 "seek_data": false, 00:23:38.984 "copy": true, 00:23:38.984 "nvme_iov_md": false 00:23:38.984 }, 00:23:38.984 "memory_domains": [ 00:23:38.984 { 00:23:38.984 "dma_device_id": "system", 00:23:38.984 "dma_device_type": 1 00:23:38.984 }, 00:23:38.984 { 00:23:38.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.984 "dma_device_type": 2 00:23:38.984 } 00:23:38.984 ], 00:23:38.984 "driver_specific": {} 00:23:38.984 }' 00:23:38.984 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:38.984 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:39.243 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:39.243 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.243 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.243 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:39.243 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:39.243 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:39.243 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:39.243 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.502 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.502 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:39.502 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:39.502 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:39.502 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:39.761 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:39.761 "name": "BaseBdev2", 00:23:39.761 "aliases": [ 00:23:39.761 "2510fb95-587a-42df-8efb-ac547d7c8d2f" 00:23:39.761 ], 00:23:39.761 "product_name": "Malloc disk", 00:23:39.761 "block_size": 512, 00:23:39.761 "num_blocks": 65536, 00:23:39.761 "uuid": "2510fb95-587a-42df-8efb-ac547d7c8d2f", 00:23:39.761 "assigned_rate_limits": { 00:23:39.761 "rw_ios_per_sec": 0, 00:23:39.761 "rw_mbytes_per_sec": 0, 00:23:39.761 "r_mbytes_per_sec": 0, 00:23:39.761 "w_mbytes_per_sec": 0 00:23:39.761 }, 00:23:39.761 "claimed": true, 00:23:39.761 "claim_type": "exclusive_write", 00:23:39.761 "zoned": false, 00:23:39.761 "supported_io_types": { 00:23:39.761 "read": 
true, 00:23:39.761 "write": true, 00:23:39.761 "unmap": true, 00:23:39.761 "flush": true, 00:23:39.761 "reset": true, 00:23:39.761 "nvme_admin": false, 00:23:39.761 "nvme_io": false, 00:23:39.761 "nvme_io_md": false, 00:23:39.761 "write_zeroes": true, 00:23:39.761 "zcopy": true, 00:23:39.761 "get_zone_info": false, 00:23:39.761 "zone_management": false, 00:23:39.761 "zone_append": false, 00:23:39.761 "compare": false, 00:23:39.761 "compare_and_write": false, 00:23:39.761 "abort": true, 00:23:39.761 "seek_hole": false, 00:23:39.761 "seek_data": false, 00:23:39.761 "copy": true, 00:23:39.761 "nvme_iov_md": false 00:23:39.761 }, 00:23:39.761 "memory_domains": [ 00:23:39.761 { 00:23:39.761 "dma_device_id": "system", 00:23:39.761 "dma_device_type": 1 00:23:39.761 }, 00:23:39.761 { 00:23:39.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.761 "dma_device_type": 2 00:23:39.761 } 00:23:39.761 ], 00:23:39.761 "driver_specific": {} 00:23:39.761 }' 00:23:39.761 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:39.761 21:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:39.761 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:39.761 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.761 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.761 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:39.761 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:40.020 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:40.020 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:40.020 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:40.020 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:40.020 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:40.020 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:40.020 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:40.020 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:40.280 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:40.280 "name": "BaseBdev3", 00:23:40.280 "aliases": [ 00:23:40.280 "5f17072a-a1f0-44b5-9791-1dd8f606bb2c" 00:23:40.280 ], 00:23:40.280 "product_name": "Malloc disk", 00:23:40.280 "block_size": 512, 00:23:40.280 "num_blocks": 65536, 00:23:40.280 "uuid": "5f17072a-a1f0-44b5-9791-1dd8f606bb2c", 00:23:40.280 "assigned_rate_limits": { 00:23:40.280 "rw_ios_per_sec": 0, 00:23:40.280 "rw_mbytes_per_sec": 0, 00:23:40.280 "r_mbytes_per_sec": 0, 00:23:40.280 "w_mbytes_per_sec": 0 00:23:40.280 }, 00:23:40.280 "claimed": true, 00:23:40.280 "claim_type": "exclusive_write", 00:23:40.280 "zoned": false, 00:23:40.280 "supported_io_types": { 00:23:40.280 "read": true, 00:23:40.280 "write": true, 00:23:40.280 "unmap": true, 00:23:40.280 "flush": true, 00:23:40.280 "reset": true, 00:23:40.280 "nvme_admin": false, 
00:23:40.280 "nvme_io": false, 00:23:40.280 "nvme_io_md": false, 00:23:40.280 "write_zeroes": true, 00:23:40.280 "zcopy": true, 00:23:40.280 "get_zone_info": false, 00:23:40.280 "zone_management": false, 00:23:40.280 "zone_append": false, 00:23:40.280 "compare": false, 00:23:40.280 "compare_and_write": false, 00:23:40.280 "abort": true, 00:23:40.280 "seek_hole": false, 00:23:40.280 "seek_data": false, 00:23:40.280 "copy": true, 00:23:40.280 "nvme_iov_md": false 00:23:40.280 }, 00:23:40.280 "memory_domains": [ 00:23:40.280 { 00:23:40.280 "dma_device_id": "system", 00:23:40.280 "dma_device_type": 1 00:23:40.280 }, 00:23:40.280 { 00:23:40.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.280 "dma_device_type": 2 00:23:40.280 } 00:23:40.280 ], 00:23:40.280 "driver_specific": {} 00:23:40.280 }' 00:23:40.280 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:40.280 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:40.280 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:40.280 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:40.280 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:40.538 21:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:40.803 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:40.803 "name": "BaseBdev4", 00:23:40.803 "aliases": [ 00:23:40.803 "6ef8bf38-cba7-4327-94a7-cbb7f0e28862" 00:23:40.803 ], 00:23:40.803 "product_name": "Malloc disk", 00:23:40.804 "block_size": 512, 00:23:40.804 "num_blocks": 65536, 00:23:40.804 "uuid": "6ef8bf38-cba7-4327-94a7-cbb7f0e28862", 00:23:40.804 "assigned_rate_limits": { 00:23:40.804 "rw_ios_per_sec": 0, 00:23:40.804 "rw_mbytes_per_sec": 0, 00:23:40.804 "r_mbytes_per_sec": 0, 00:23:40.804 "w_mbytes_per_sec": 0 00:23:40.804 }, 00:23:40.804 "claimed": true, 00:23:40.804 "claim_type": "exclusive_write", 00:23:40.804 "zoned": false, 00:23:40.804 "supported_io_types": { 00:23:40.804 "read": true, 00:23:40.804 "write": true, 00:23:40.804 "unmap": true, 00:23:40.804 "flush": true, 00:23:40.804 "reset": true, 00:23:40.804 "nvme_admin": false, 00:23:40.804 "nvme_io": false, 00:23:40.804 "nvme_io_md": false, 00:23:40.804 "write_zeroes": true, 00:23:40.804 "zcopy": true, 00:23:40.804 
"get_zone_info": false, 00:23:40.804 "zone_management": false, 00:23:40.804 "zone_append": false, 00:23:40.804 "compare": false, 00:23:40.804 "compare_and_write": false, 00:23:40.804 "abort": true, 00:23:40.804 "seek_hole": false, 00:23:40.804 "seek_data": false, 00:23:40.804 "copy": true, 00:23:40.804 "nvme_iov_md": false 00:23:40.804 }, 00:23:40.804 "memory_domains": [ 00:23:40.804 { 00:23:40.804 "dma_device_id": "system", 00:23:40.804 "dma_device_type": 1 00:23:40.804 }, 00:23:40.804 { 00:23:40.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.804 "dma_device_type": 2 00:23:40.804 } 00:23:40.804 ], 00:23:40.804 "driver_specific": {} 00:23:40.804 }' 00:23:40.804 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:41.072 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:41.072 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:41.072 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:41.072 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:41.072 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:41.072 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:41.072 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:41.330 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:41.330 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:41.330 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:41.330 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:41.330 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:41.588 [2024-07-15 21:37:14.726137] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:41.588 [2024-07-15 21:37:14.726174] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:41.588 [2024-07-15 21:37:14.726238] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:41.588 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:41.588 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:23:41.588 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:41.588 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:23:41.588 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:41.589 21:37:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.589 21:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.847 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.847 "name": "Existed_Raid", 00:23:41.847 "uuid": "8d89cd81-aedf-4b8e-943f-eeac41741a14", 00:23:41.847 "strip_size_kb": 64, 00:23:41.847 "state": "offline", 00:23:41.847 "raid_level": "raid0", 00:23:41.847 "superblock": true, 00:23:41.847 "num_base_bdevs": 4, 00:23:41.847 "num_base_bdevs_discovered": 3, 00:23:41.847 "num_base_bdevs_operational": 3, 00:23:41.847 "base_bdevs_list": [ 00:23:41.847 { 00:23:41.847 "name": null, 00:23:41.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.847 "is_configured": false, 00:23:41.847 "data_offset": 2048, 00:23:41.847 "data_size": 63488 00:23:41.847 }, 00:23:41.847 { 00:23:41.847 "name": "BaseBdev2", 00:23:41.847 "uuid": "2510fb95-587a-42df-8efb-ac547d7c8d2f", 00:23:41.847 "is_configured": true, 00:23:41.847 "data_offset": 2048, 00:23:41.847 "data_size": 63488 00:23:41.847 }, 00:23:41.847 { 00:23:41.847 "name": "BaseBdev3", 00:23:41.847 "uuid": "5f17072a-a1f0-44b5-9791-1dd8f606bb2c", 00:23:41.847 "is_configured": true, 00:23:41.847 "data_offset": 2048, 00:23:41.847 "data_size": 63488 00:23:41.847 }, 00:23:41.847 { 00:23:41.847 "name": "BaseBdev4", 00:23:41.847 "uuid": "6ef8bf38-cba7-4327-94a7-cbb7f0e28862", 00:23:41.847 "is_configured": true, 00:23:41.847 "data_offset": 2048, 00:23:41.847 "data_size": 63488 00:23:41.847 } 00:23:41.847 ] 00:23:41.847 }' 00:23:41.847 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.847 21:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.415 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:42.415 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:42.415 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.415 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:42.674 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:42.674 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:42.674 21:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:23:42.931 [2024-07-15 21:37:16.079221] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:42.931 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:42.931 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:42.931 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.931 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:43.214 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:43.214 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:43.214 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:43.473 [2024-07-15 21:37:16.603694] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:43.473 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:43.473 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:43.473 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:43.473 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.733 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:43.733 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:43.733 21:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:43.991 [2024-07-15 21:37:17.145108] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:43.991 [2024-07-15 21:37:17.145195] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:23:43.991 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:43.991 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:43.991 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.991 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:44.250 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:44.250 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:44.250 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:44.250 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:44.250 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:44.250 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:44.508 BaseBdev2 00:23:44.508 21:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:44.508 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:44.508 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:44.508 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:44.508 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:44.508 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:44.508 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.508 21:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:44.767 [ 00:23:44.767 { 00:23:44.767 "name": "BaseBdev2", 00:23:44.767 "aliases": [ 00:23:44.767 "1e3dbd47-49c5-432b-8dd2-2e094d3a4152" 00:23:44.767 ], 00:23:44.767 "product_name": "Malloc disk", 00:23:44.767 "block_size": 512, 00:23:44.767 "num_blocks": 65536, 00:23:44.767 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:44.767 "assigned_rate_limits": { 00:23:44.767 "rw_ios_per_sec": 0, 00:23:44.767 "rw_mbytes_per_sec": 0, 00:23:44.767 "r_mbytes_per_sec": 0, 00:23:44.767 "w_mbytes_per_sec": 0 00:23:44.767 }, 00:23:44.767 "claimed": false, 00:23:44.767 "zoned": false, 00:23:44.767 "supported_io_types": { 00:23:44.767 "read": true, 00:23:44.767 "write": true, 00:23:44.767 "unmap": true, 00:23:44.767 "flush": true, 00:23:44.767 "reset": true, 00:23:44.768 "nvme_admin": false, 00:23:44.768 "nvme_io": false, 00:23:44.768 "nvme_io_md": false, 00:23:44.768 "write_zeroes": true, 00:23:44.768 "zcopy": true, 00:23:44.768 "get_zone_info": false, 00:23:44.768 "zone_management": false, 00:23:44.768 "zone_append": false, 00:23:44.768 "compare": false, 00:23:44.768 "compare_and_write": false, 00:23:44.768 "abort": true, 00:23:44.768 "seek_hole": false, 00:23:44.768 "seek_data": false, 00:23:44.768 "copy": true, 00:23:44.768 "nvme_iov_md": false 00:23:44.768 }, 00:23:44.768 "memory_domains": [ 00:23:44.768 { 00:23:44.768 "dma_device_id": "system", 00:23:44.768 "dma_device_type": 1 00:23:44.768 }, 00:23:44.768 { 00:23:44.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.768 "dma_device_type": 2 00:23:44.768 } 00:23:44.768 ], 00:23:44.768 "driver_specific": {} 00:23:44.768 } 00:23:44.768 ] 00:23:44.768 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:44.768 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:44.768 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:44.768 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:45.028 BaseBdev3 00:23:45.028 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:45.028 21:37:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:45.028 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:45.028 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:45.028 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:45.028 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:45.028 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:45.288 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:45.288 [ 00:23:45.288 { 00:23:45.288 "name": "BaseBdev3", 00:23:45.288 "aliases": [ 00:23:45.288 "72ba3086-0f15-4bc1-a568-a30304915262" 00:23:45.288 ], 00:23:45.288 "product_name": "Malloc disk", 00:23:45.288 "block_size": 512, 00:23:45.288 "num_blocks": 65536, 00:23:45.288 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:45.288 "assigned_rate_limits": { 00:23:45.288 "rw_ios_per_sec": 0, 00:23:45.288 "rw_mbytes_per_sec": 0, 00:23:45.288 "r_mbytes_per_sec": 0, 00:23:45.288 "w_mbytes_per_sec": 0 00:23:45.288 }, 00:23:45.288 "claimed": false, 00:23:45.288 "zoned": false, 00:23:45.288 "supported_io_types": { 00:23:45.288 "read": true, 00:23:45.288 "write": true, 00:23:45.288 "unmap": true, 00:23:45.288 "flush": true, 00:23:45.288 "reset": true, 00:23:45.288 "nvme_admin": false, 00:23:45.288 "nvme_io": false, 00:23:45.288 "nvme_io_md": false, 00:23:45.288 "write_zeroes": true, 00:23:45.288 "zcopy": true, 00:23:45.288 "get_zone_info": false, 00:23:45.288 "zone_management": false, 00:23:45.288 "zone_append": false, 00:23:45.288 "compare": false, 00:23:45.288 "compare_and_write": false, 00:23:45.288 "abort": true, 00:23:45.288 "seek_hole": false, 00:23:45.288 "seek_data": false, 00:23:45.288 "copy": true, 00:23:45.288 "nvme_iov_md": false 00:23:45.288 }, 00:23:45.288 "memory_domains": [ 00:23:45.288 { 00:23:45.288 "dma_device_id": "system", 00:23:45.288 "dma_device_type": 1 00:23:45.288 }, 00:23:45.288 { 00:23:45.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.288 "dma_device_type": 2 00:23:45.288 } 00:23:45.288 ], 00:23:45.288 "driver_specific": {} 00:23:45.288 } 00:23:45.288 ] 00:23:45.288 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:45.288 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:45.288 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:45.288 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:45.548 BaseBdev4 00:23:45.548 21:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:45.548 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:45.548 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:45.548 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:45.548 21:37:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:45.548 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:45.548 21:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:45.807 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:46.067 [ 00:23:46.067 { 00:23:46.067 "name": "BaseBdev4", 00:23:46.067 "aliases": [ 00:23:46.067 "1a193ccb-4910-4ee3-b1e4-594139afdf59" 00:23:46.067 ], 00:23:46.067 "product_name": "Malloc disk", 00:23:46.067 "block_size": 512, 00:23:46.067 "num_blocks": 65536, 00:23:46.067 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:46.067 "assigned_rate_limits": { 00:23:46.067 "rw_ios_per_sec": 0, 00:23:46.067 "rw_mbytes_per_sec": 0, 00:23:46.067 "r_mbytes_per_sec": 0, 00:23:46.067 "w_mbytes_per_sec": 0 00:23:46.067 }, 00:23:46.067 "claimed": false, 00:23:46.067 "zoned": false, 00:23:46.067 "supported_io_types": { 00:23:46.067 "read": true, 00:23:46.067 "write": true, 00:23:46.067 "unmap": true, 00:23:46.067 "flush": true, 00:23:46.067 "reset": true, 00:23:46.067 "nvme_admin": false, 00:23:46.067 "nvme_io": false, 00:23:46.067 "nvme_io_md": false, 00:23:46.067 "write_zeroes": true, 00:23:46.067 "zcopy": true, 00:23:46.067 "get_zone_info": false, 00:23:46.067 "zone_management": false, 00:23:46.067 "zone_append": false, 00:23:46.067 "compare": false, 00:23:46.067 "compare_and_write": false, 00:23:46.067 "abort": true, 00:23:46.067 "seek_hole": false, 00:23:46.067 "seek_data": false, 00:23:46.067 "copy": true, 00:23:46.067 "nvme_iov_md": false 00:23:46.067 }, 00:23:46.067 "memory_domains": [ 00:23:46.067 { 00:23:46.067 "dma_device_id": "system", 00:23:46.067 "dma_device_type": 1 00:23:46.067 }, 00:23:46.067 { 00:23:46.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.067 "dma_device_type": 2 00:23:46.067 } 00:23:46.067 ], 00:23:46.067 "driver_specific": {} 00:23:46.067 } 00:23:46.067 ] 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:46.067 [2024-07-15 21:37:19.366772] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:46.067 [2024-07-15 21:37:19.366837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:46.067 [2024-07-15 21:37:19.366864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:46.067 [2024-07-15 21:37:19.368603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:46.067 [2024-07-15 21:37:19.368659] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.067 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.326 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.326 "name": "Existed_Raid", 00:23:46.326 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:46.326 "strip_size_kb": 64, 00:23:46.326 "state": "configuring", 00:23:46.326 "raid_level": "raid0", 00:23:46.326 "superblock": true, 00:23:46.326 "num_base_bdevs": 4, 00:23:46.326 "num_base_bdevs_discovered": 3, 00:23:46.326 "num_base_bdevs_operational": 4, 00:23:46.326 "base_bdevs_list": [ 00:23:46.326 { 00:23:46.326 "name": "BaseBdev1", 00:23:46.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.326 "is_configured": false, 00:23:46.326 "data_offset": 0, 00:23:46.326 "data_size": 0 00:23:46.326 }, 00:23:46.326 { 00:23:46.326 "name": "BaseBdev2", 00:23:46.326 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:46.326 "is_configured": true, 00:23:46.326 "data_offset": 2048, 00:23:46.326 "data_size": 63488 00:23:46.326 }, 00:23:46.326 { 00:23:46.326 "name": "BaseBdev3", 00:23:46.326 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:46.326 "is_configured": true, 00:23:46.326 "data_offset": 2048, 00:23:46.326 "data_size": 63488 00:23:46.326 }, 00:23:46.327 { 00:23:46.327 "name": "BaseBdev4", 00:23:46.327 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:46.327 "is_configured": true, 00:23:46.327 "data_offset": 2048, 00:23:46.327 "data_size": 63488 00:23:46.327 } 00:23:46.327 ] 00:23:46.327 }' 00:23:46.327 21:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.327 21:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.896 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:47.155 [2024-07-15 21:37:20.388943] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:47.155 21:37:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.155 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.414 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.414 "name": "Existed_Raid", 00:23:47.414 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:47.414 "strip_size_kb": 64, 00:23:47.414 "state": "configuring", 00:23:47.414 "raid_level": "raid0", 00:23:47.414 "superblock": true, 00:23:47.414 "num_base_bdevs": 4, 00:23:47.414 "num_base_bdevs_discovered": 2, 00:23:47.414 "num_base_bdevs_operational": 4, 00:23:47.414 "base_bdevs_list": [ 00:23:47.414 { 00:23:47.414 "name": "BaseBdev1", 00:23:47.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.414 "is_configured": false, 00:23:47.414 "data_offset": 0, 00:23:47.414 "data_size": 0 00:23:47.414 }, 00:23:47.414 { 00:23:47.414 "name": null, 00:23:47.414 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:47.414 "is_configured": false, 00:23:47.414 "data_offset": 2048, 00:23:47.414 "data_size": 63488 00:23:47.414 }, 00:23:47.414 { 00:23:47.414 "name": "BaseBdev3", 00:23:47.414 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:47.414 "is_configured": true, 00:23:47.414 "data_offset": 2048, 00:23:47.414 "data_size": 63488 00:23:47.414 }, 00:23:47.414 { 00:23:47.414 "name": "BaseBdev4", 00:23:47.414 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:47.414 "is_configured": true, 00:23:47.414 "data_offset": 2048, 00:23:47.414 "data_size": 63488 00:23:47.414 } 00:23:47.414 ] 00:23:47.414 }' 00:23:47.414 21:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.414 21:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.984 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.984 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:48.243 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:48.243 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:48.502 [2024-07-15 21:37:21.667767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:48.502 BaseBdev1 00:23:48.502 21:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:48.502 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:48.502 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:48.502 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:48.502 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:48.502 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:48.502 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:48.502 21:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:48.761 [ 00:23:48.761 { 00:23:48.761 "name": "BaseBdev1", 00:23:48.761 "aliases": [ 00:23:48.761 "70c60a20-1552-402f-b902-9d733fdb5271" 00:23:48.761 ], 00:23:48.761 "product_name": "Malloc disk", 00:23:48.761 "block_size": 512, 00:23:48.761 "num_blocks": 65536, 00:23:48.761 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:48.761 "assigned_rate_limits": { 00:23:48.761 "rw_ios_per_sec": 0, 00:23:48.761 "rw_mbytes_per_sec": 0, 00:23:48.761 "r_mbytes_per_sec": 0, 00:23:48.761 "w_mbytes_per_sec": 0 00:23:48.761 }, 00:23:48.761 "claimed": true, 00:23:48.761 "claim_type": "exclusive_write", 00:23:48.761 "zoned": false, 00:23:48.761 "supported_io_types": { 00:23:48.761 "read": true, 00:23:48.761 "write": true, 00:23:48.761 "unmap": true, 00:23:48.761 "flush": true, 00:23:48.761 "reset": true, 00:23:48.761 "nvme_admin": false, 00:23:48.761 "nvme_io": false, 00:23:48.761 "nvme_io_md": false, 00:23:48.761 "write_zeroes": true, 00:23:48.761 "zcopy": true, 00:23:48.761 "get_zone_info": false, 00:23:48.761 "zone_management": false, 00:23:48.761 "zone_append": false, 00:23:48.761 "compare": false, 00:23:48.761 "compare_and_write": false, 00:23:48.761 "abort": true, 00:23:48.761 "seek_hole": false, 00:23:48.761 "seek_data": false, 00:23:48.761 "copy": true, 00:23:48.761 "nvme_iov_md": false 00:23:48.761 }, 00:23:48.761 "memory_domains": [ 00:23:48.761 { 00:23:48.761 "dma_device_id": "system", 00:23:48.761 "dma_device_type": 1 00:23:48.761 }, 00:23:48.761 { 00:23:48.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.761 "dma_device_type": 2 00:23:48.761 } 00:23:48.761 ], 00:23:48.761 "driver_specific": {} 00:23:48.761 } 00:23:48.761 ] 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.761 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.762 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.762 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.762 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.762 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.021 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.021 "name": "Existed_Raid", 00:23:49.021 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:49.021 "strip_size_kb": 64, 00:23:49.021 "state": "configuring", 00:23:49.021 "raid_level": "raid0", 00:23:49.021 "superblock": true, 00:23:49.021 "num_base_bdevs": 4, 00:23:49.021 "num_base_bdevs_discovered": 3, 00:23:49.021 "num_base_bdevs_operational": 4, 00:23:49.021 "base_bdevs_list": [ 00:23:49.021 { 00:23:49.021 "name": "BaseBdev1", 00:23:49.021 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:49.021 "is_configured": true, 00:23:49.021 "data_offset": 2048, 00:23:49.021 "data_size": 63488 00:23:49.021 }, 00:23:49.021 { 00:23:49.021 "name": null, 00:23:49.021 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:49.021 "is_configured": false, 00:23:49.021 "data_offset": 2048, 00:23:49.021 "data_size": 63488 00:23:49.021 }, 00:23:49.021 { 00:23:49.021 "name": "BaseBdev3", 00:23:49.021 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:49.021 "is_configured": true, 00:23:49.021 "data_offset": 2048, 00:23:49.021 "data_size": 63488 00:23:49.021 }, 00:23:49.021 { 00:23:49.021 "name": "BaseBdev4", 00:23:49.021 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:49.021 "is_configured": true, 00:23:49.021 "data_offset": 2048, 00:23:49.021 "data_size": 63488 00:23:49.021 } 00:23:49.021 ] 00:23:49.021 }' 00:23:49.021 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.021 21:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.589 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.589 21:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:49.848 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:49.848 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:50.105 [2024-07-15 21:37:23.265076] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.105 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.364 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:50.364 "name": "Existed_Raid", 00:23:50.364 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:50.364 "strip_size_kb": 64, 00:23:50.364 "state": "configuring", 00:23:50.364 "raid_level": "raid0", 00:23:50.364 "superblock": true, 00:23:50.364 "num_base_bdevs": 4, 00:23:50.364 "num_base_bdevs_discovered": 2, 00:23:50.364 "num_base_bdevs_operational": 4, 00:23:50.364 "base_bdevs_list": [ 00:23:50.364 { 00:23:50.364 "name": "BaseBdev1", 00:23:50.364 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:50.364 "is_configured": true, 00:23:50.364 "data_offset": 2048, 00:23:50.364 "data_size": 63488 00:23:50.364 }, 00:23:50.364 { 00:23:50.364 "name": null, 00:23:50.364 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:50.364 "is_configured": false, 00:23:50.364 "data_offset": 2048, 00:23:50.364 "data_size": 63488 00:23:50.364 }, 00:23:50.364 { 00:23:50.364 "name": null, 00:23:50.364 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:50.364 "is_configured": false, 00:23:50.364 "data_offset": 2048, 00:23:50.364 "data_size": 63488 00:23:50.364 }, 00:23:50.364 { 00:23:50.364 "name": "BaseBdev4", 00:23:50.364 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:50.364 "is_configured": true, 00:23:50.364 "data_offset": 2048, 00:23:50.364 "data_size": 63488 00:23:50.364 } 00:23:50.364 ] 00:23:50.364 }' 00:23:50.364 21:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:50.364 21:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.961 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.961 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:50.961 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:50.961 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:51.228 [2024-07-15 21:37:24.475050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:51.228 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.229 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.488 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.488 "name": "Existed_Raid", 00:23:51.488 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:51.488 "strip_size_kb": 64, 00:23:51.488 "state": "configuring", 00:23:51.488 "raid_level": "raid0", 00:23:51.488 "superblock": true, 00:23:51.488 "num_base_bdevs": 4, 00:23:51.488 "num_base_bdevs_discovered": 3, 00:23:51.488 "num_base_bdevs_operational": 4, 00:23:51.488 "base_bdevs_list": [ 00:23:51.488 { 00:23:51.488 "name": "BaseBdev1", 00:23:51.488 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:51.488 "is_configured": true, 00:23:51.488 "data_offset": 2048, 00:23:51.488 "data_size": 63488 00:23:51.488 }, 00:23:51.488 { 00:23:51.488 "name": null, 00:23:51.488 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:51.488 "is_configured": false, 00:23:51.488 "data_offset": 2048, 00:23:51.488 "data_size": 63488 00:23:51.488 }, 00:23:51.488 { 00:23:51.488 "name": "BaseBdev3", 00:23:51.488 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:51.488 "is_configured": true, 00:23:51.488 "data_offset": 2048, 00:23:51.488 "data_size": 63488 00:23:51.488 }, 00:23:51.488 { 00:23:51.488 "name": "BaseBdev4", 00:23:51.488 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:51.488 "is_configured": true, 00:23:51.488 "data_offset": 2048, 00:23:51.488 "data_size": 63488 00:23:51.488 } 00:23:51.488 ] 00:23:51.488 }' 00:23:51.488 21:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.488 21:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.057 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:52.057 
21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.316 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:52.316 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:52.575 [2024-07-15 21:37:25.748987] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.575 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.576 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.576 21:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.834 21:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.834 "name": "Existed_Raid", 00:23:52.834 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:52.834 "strip_size_kb": 64, 00:23:52.834 "state": "configuring", 00:23:52.834 "raid_level": "raid0", 00:23:52.834 "superblock": true, 00:23:52.834 "num_base_bdevs": 4, 00:23:52.834 "num_base_bdevs_discovered": 2, 00:23:52.834 "num_base_bdevs_operational": 4, 00:23:52.834 "base_bdevs_list": [ 00:23:52.834 { 00:23:52.834 "name": null, 00:23:52.834 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:52.834 "is_configured": false, 00:23:52.834 "data_offset": 2048, 00:23:52.834 "data_size": 63488 00:23:52.834 }, 00:23:52.834 { 00:23:52.834 "name": null, 00:23:52.834 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:52.834 "is_configured": false, 00:23:52.834 "data_offset": 2048, 00:23:52.834 "data_size": 63488 00:23:52.834 }, 00:23:52.834 { 00:23:52.834 "name": "BaseBdev3", 00:23:52.834 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:52.834 "is_configured": true, 00:23:52.834 "data_offset": 2048, 00:23:52.834 "data_size": 63488 00:23:52.834 }, 00:23:52.834 { 00:23:52.834 "name": "BaseBdev4", 00:23:52.834 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:52.834 "is_configured": true, 00:23:52.834 "data_offset": 2048, 00:23:52.834 "data_size": 63488 00:23:52.834 } 00:23:52.834 ] 00:23:52.834 }' 00:23:52.834 21:37:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.834 21:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:53.770 21:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.770 21:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:53.770 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:53.770 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:54.028 [2024-07-15 21:37:27.189533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.028 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.285 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.285 "name": "Existed_Raid", 00:23:54.285 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:54.285 "strip_size_kb": 64, 00:23:54.285 "state": "configuring", 00:23:54.285 "raid_level": "raid0", 00:23:54.285 "superblock": true, 00:23:54.285 "num_base_bdevs": 4, 00:23:54.285 "num_base_bdevs_discovered": 3, 00:23:54.285 "num_base_bdevs_operational": 4, 00:23:54.285 "base_bdevs_list": [ 00:23:54.285 { 00:23:54.285 "name": null, 00:23:54.285 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:54.285 "is_configured": false, 00:23:54.285 "data_offset": 2048, 00:23:54.285 "data_size": 63488 00:23:54.285 }, 00:23:54.285 { 00:23:54.285 "name": "BaseBdev2", 00:23:54.285 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:54.285 "is_configured": true, 00:23:54.285 "data_offset": 2048, 00:23:54.285 "data_size": 63488 00:23:54.285 }, 00:23:54.285 { 00:23:54.285 "name": "BaseBdev3", 00:23:54.285 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:54.285 
"is_configured": true, 00:23:54.285 "data_offset": 2048, 00:23:54.285 "data_size": 63488 00:23:54.285 }, 00:23:54.285 { 00:23:54.285 "name": "BaseBdev4", 00:23:54.285 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:54.285 "is_configured": true, 00:23:54.285 "data_offset": 2048, 00:23:54.285 "data_size": 63488 00:23:54.285 } 00:23:54.285 ] 00:23:54.285 }' 00:23:54.285 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.285 21:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.861 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.862 21:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:54.862 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:54.862 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.862 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:55.119 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 70c60a20-1552-402f-b902-9d733fdb5271 00:23:55.378 [2024-07-15 21:37:28.581578] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:55.378 [2024-07-15 21:37:28.581838] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:23:55.379 [2024-07-15 21:37:28.581880] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:55.379 [2024-07-15 21:37:28.581998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:55.379 [2024-07-15 21:37:28.582318] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:23:55.379 NewBaseBdev 00:23:55.379 [2024-07-15 21:37:28.582370] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:23:55.379 [2024-07-15 21:37:28.582529] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.379 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:55.379 21:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:55.379 21:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:55.379 21:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:55.379 21:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:55.379 21:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:55.379 21:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:55.637 21:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:23:55.637 [ 00:23:55.637 { 00:23:55.637 "name": "NewBaseBdev", 00:23:55.637 "aliases": [ 00:23:55.637 "70c60a20-1552-402f-b902-9d733fdb5271" 00:23:55.637 ], 00:23:55.637 "product_name": "Malloc disk", 00:23:55.637 "block_size": 512, 00:23:55.637 "num_blocks": 65536, 00:23:55.637 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:55.637 "assigned_rate_limits": { 00:23:55.637 "rw_ios_per_sec": 0, 00:23:55.637 "rw_mbytes_per_sec": 0, 00:23:55.637 "r_mbytes_per_sec": 0, 00:23:55.637 "w_mbytes_per_sec": 0 00:23:55.637 }, 00:23:55.637 "claimed": true, 00:23:55.637 "claim_type": "exclusive_write", 00:23:55.637 "zoned": false, 00:23:55.637 "supported_io_types": { 00:23:55.637 "read": true, 00:23:55.637 "write": true, 00:23:55.637 "unmap": true, 00:23:55.637 "flush": true, 00:23:55.637 "reset": true, 00:23:55.637 "nvme_admin": false, 00:23:55.637 "nvme_io": false, 00:23:55.637 "nvme_io_md": false, 00:23:55.637 "write_zeroes": true, 00:23:55.637 "zcopy": true, 00:23:55.637 "get_zone_info": false, 00:23:55.637 "zone_management": false, 00:23:55.637 "zone_append": false, 00:23:55.637 "compare": false, 00:23:55.637 "compare_and_write": false, 00:23:55.637 "abort": true, 00:23:55.637 "seek_hole": false, 00:23:55.637 "seek_data": false, 00:23:55.637 "copy": true, 00:23:55.637 "nvme_iov_md": false 00:23:55.637 }, 00:23:55.637 "memory_domains": [ 00:23:55.637 { 00:23:55.638 "dma_device_id": "system", 00:23:55.638 "dma_device_type": 1 00:23:55.638 }, 00:23:55.638 { 00:23:55.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.638 "dma_device_type": 2 00:23:55.638 } 00:23:55.638 ], 00:23:55.638 "driver_specific": {} 00:23:55.638 } 00:23:55.638 ] 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.638 21:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.896 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.896 "name": "Existed_Raid", 00:23:55.896 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:55.896 "strip_size_kb": 64, 00:23:55.896 "state": 
"online", 00:23:55.896 "raid_level": "raid0", 00:23:55.896 "superblock": true, 00:23:55.896 "num_base_bdevs": 4, 00:23:55.896 "num_base_bdevs_discovered": 4, 00:23:55.896 "num_base_bdevs_operational": 4, 00:23:55.896 "base_bdevs_list": [ 00:23:55.896 { 00:23:55.896 "name": "NewBaseBdev", 00:23:55.896 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:55.896 "is_configured": true, 00:23:55.896 "data_offset": 2048, 00:23:55.896 "data_size": 63488 00:23:55.896 }, 00:23:55.896 { 00:23:55.896 "name": "BaseBdev2", 00:23:55.896 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:55.896 "is_configured": true, 00:23:55.896 "data_offset": 2048, 00:23:55.896 "data_size": 63488 00:23:55.896 }, 00:23:55.896 { 00:23:55.896 "name": "BaseBdev3", 00:23:55.896 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:55.896 "is_configured": true, 00:23:55.896 "data_offset": 2048, 00:23:55.896 "data_size": 63488 00:23:55.896 }, 00:23:55.896 { 00:23:55.896 "name": "BaseBdev4", 00:23:55.896 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:55.896 "is_configured": true, 00:23:55.896 "data_offset": 2048, 00:23:55.896 "data_size": 63488 00:23:55.896 } 00:23:55.896 ] 00:23:55.896 }' 00:23:55.896 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.896 21:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.462 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:56.462 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:56.462 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:56.462 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:56.462 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:56.462 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:56.462 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:56.463 21:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:56.722 [2024-07-15 21:37:30.003466] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.722 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:56.722 "name": "Existed_Raid", 00:23:56.722 "aliases": [ 00:23:56.722 "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6" 00:23:56.722 ], 00:23:56.722 "product_name": "Raid Volume", 00:23:56.722 "block_size": 512, 00:23:56.722 "num_blocks": 253952, 00:23:56.722 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:56.722 "assigned_rate_limits": { 00:23:56.722 "rw_ios_per_sec": 0, 00:23:56.722 "rw_mbytes_per_sec": 0, 00:23:56.722 "r_mbytes_per_sec": 0, 00:23:56.722 "w_mbytes_per_sec": 0 00:23:56.722 }, 00:23:56.722 "claimed": false, 00:23:56.722 "zoned": false, 00:23:56.722 "supported_io_types": { 00:23:56.722 "read": true, 00:23:56.722 "write": true, 00:23:56.722 "unmap": true, 00:23:56.722 "flush": true, 00:23:56.722 "reset": true, 00:23:56.722 "nvme_admin": false, 00:23:56.722 "nvme_io": false, 00:23:56.722 "nvme_io_md": false, 00:23:56.722 "write_zeroes": true, 00:23:56.722 "zcopy": false, 00:23:56.722 "get_zone_info": false, 00:23:56.722 
"zone_management": false, 00:23:56.722 "zone_append": false, 00:23:56.722 "compare": false, 00:23:56.722 "compare_and_write": false, 00:23:56.722 "abort": false, 00:23:56.722 "seek_hole": false, 00:23:56.722 "seek_data": false, 00:23:56.722 "copy": false, 00:23:56.722 "nvme_iov_md": false 00:23:56.722 }, 00:23:56.722 "memory_domains": [ 00:23:56.722 { 00:23:56.722 "dma_device_id": "system", 00:23:56.722 "dma_device_type": 1 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.722 "dma_device_type": 2 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "dma_device_id": "system", 00:23:56.722 "dma_device_type": 1 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.722 "dma_device_type": 2 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "dma_device_id": "system", 00:23:56.722 "dma_device_type": 1 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.722 "dma_device_type": 2 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "dma_device_id": "system", 00:23:56.722 "dma_device_type": 1 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.722 "dma_device_type": 2 00:23:56.722 } 00:23:56.722 ], 00:23:56.722 "driver_specific": { 00:23:56.722 "raid": { 00:23:56.722 "uuid": "c4dc2ead-038a-4869-8c2d-45ecf6f6e7b6", 00:23:56.722 "strip_size_kb": 64, 00:23:56.722 "state": "online", 00:23:56.722 "raid_level": "raid0", 00:23:56.722 "superblock": true, 00:23:56.722 "num_base_bdevs": 4, 00:23:56.722 "num_base_bdevs_discovered": 4, 00:23:56.722 "num_base_bdevs_operational": 4, 00:23:56.722 "base_bdevs_list": [ 00:23:56.722 { 00:23:56.722 "name": "NewBaseBdev", 00:23:56.722 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:56.722 "is_configured": true, 00:23:56.722 "data_offset": 2048, 00:23:56.722 "data_size": 63488 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "name": "BaseBdev2", 00:23:56.722 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:56.722 "is_configured": true, 00:23:56.722 "data_offset": 2048, 00:23:56.722 "data_size": 63488 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "name": "BaseBdev3", 00:23:56.722 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:56.722 "is_configured": true, 00:23:56.722 "data_offset": 2048, 00:23:56.722 "data_size": 63488 00:23:56.722 }, 00:23:56.722 { 00:23:56.722 "name": "BaseBdev4", 00:23:56.722 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:56.722 "is_configured": true, 00:23:56.722 "data_offset": 2048, 00:23:56.722 "data_size": 63488 00:23:56.722 } 00:23:56.722 ] 00:23:56.722 } 00:23:56.722 } 00:23:56.722 }' 00:23:56.722 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:56.722 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:56.722 BaseBdev2 00:23:56.722 BaseBdev3 00:23:56.722 BaseBdev4' 00:23:56.722 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:56.722 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:56.722 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:56.981 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:56.981 "name": 
"NewBaseBdev", 00:23:56.981 "aliases": [ 00:23:56.981 "70c60a20-1552-402f-b902-9d733fdb5271" 00:23:56.981 ], 00:23:56.981 "product_name": "Malloc disk", 00:23:56.981 "block_size": 512, 00:23:56.981 "num_blocks": 65536, 00:23:56.981 "uuid": "70c60a20-1552-402f-b902-9d733fdb5271", 00:23:56.981 "assigned_rate_limits": { 00:23:56.981 "rw_ios_per_sec": 0, 00:23:56.981 "rw_mbytes_per_sec": 0, 00:23:56.981 "r_mbytes_per_sec": 0, 00:23:56.981 "w_mbytes_per_sec": 0 00:23:56.981 }, 00:23:56.981 "claimed": true, 00:23:56.981 "claim_type": "exclusive_write", 00:23:56.981 "zoned": false, 00:23:56.981 "supported_io_types": { 00:23:56.981 "read": true, 00:23:56.981 "write": true, 00:23:56.981 "unmap": true, 00:23:56.981 "flush": true, 00:23:56.981 "reset": true, 00:23:56.981 "nvme_admin": false, 00:23:56.981 "nvme_io": false, 00:23:56.981 "nvme_io_md": false, 00:23:56.981 "write_zeroes": true, 00:23:56.981 "zcopy": true, 00:23:56.981 "get_zone_info": false, 00:23:56.981 "zone_management": false, 00:23:56.981 "zone_append": false, 00:23:56.981 "compare": false, 00:23:56.981 "compare_and_write": false, 00:23:56.981 "abort": true, 00:23:56.981 "seek_hole": false, 00:23:56.981 "seek_data": false, 00:23:56.981 "copy": true, 00:23:56.981 "nvme_iov_md": false 00:23:56.981 }, 00:23:56.981 "memory_domains": [ 00:23:56.981 { 00:23:56.981 "dma_device_id": "system", 00:23:56.981 "dma_device_type": 1 00:23:56.981 }, 00:23:56.981 { 00:23:56.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.981 "dma_device_type": 2 00:23:56.981 } 00:23:56.981 ], 00:23:56.981 "driver_specific": {} 00:23:56.981 }' 00:23:56.981 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:56.981 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:56.981 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:56.981 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.240 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.240 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.240 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.240 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.240 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.240 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.240 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.499 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.499 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.499 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:57.499 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.499 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.499 "name": "BaseBdev2", 00:23:57.499 "aliases": [ 00:23:57.499 "1e3dbd47-49c5-432b-8dd2-2e094d3a4152" 00:23:57.499 ], 00:23:57.499 "product_name": "Malloc disk", 
00:23:57.499 "block_size": 512, 00:23:57.499 "num_blocks": 65536, 00:23:57.499 "uuid": "1e3dbd47-49c5-432b-8dd2-2e094d3a4152", 00:23:57.499 "assigned_rate_limits": { 00:23:57.499 "rw_ios_per_sec": 0, 00:23:57.499 "rw_mbytes_per_sec": 0, 00:23:57.499 "r_mbytes_per_sec": 0, 00:23:57.499 "w_mbytes_per_sec": 0 00:23:57.499 }, 00:23:57.499 "claimed": true, 00:23:57.499 "claim_type": "exclusive_write", 00:23:57.499 "zoned": false, 00:23:57.499 "supported_io_types": { 00:23:57.499 "read": true, 00:23:57.499 "write": true, 00:23:57.499 "unmap": true, 00:23:57.499 "flush": true, 00:23:57.499 "reset": true, 00:23:57.499 "nvme_admin": false, 00:23:57.499 "nvme_io": false, 00:23:57.499 "nvme_io_md": false, 00:23:57.499 "write_zeroes": true, 00:23:57.499 "zcopy": true, 00:23:57.499 "get_zone_info": false, 00:23:57.499 "zone_management": false, 00:23:57.499 "zone_append": false, 00:23:57.499 "compare": false, 00:23:57.499 "compare_and_write": false, 00:23:57.499 "abort": true, 00:23:57.499 "seek_hole": false, 00:23:57.499 "seek_data": false, 00:23:57.499 "copy": true, 00:23:57.499 "nvme_iov_md": false 00:23:57.499 }, 00:23:57.499 "memory_domains": [ 00:23:57.499 { 00:23:57.499 "dma_device_id": "system", 00:23:57.499 "dma_device_type": 1 00:23:57.499 }, 00:23:57.499 { 00:23:57.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.499 "dma_device_type": 2 00:23:57.499 } 00:23:57.499 ], 00:23:57.499 "driver_specific": {} 00:23:57.499 }' 00:23:57.499 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.757 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.757 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.757 21:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.757 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.757 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.757 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.757 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.016 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:58.016 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.016 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.016 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:58.016 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:58.016 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:58.016 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:58.275 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:58.276 "name": "BaseBdev3", 00:23:58.276 "aliases": [ 00:23:58.276 "72ba3086-0f15-4bc1-a568-a30304915262" 00:23:58.276 ], 00:23:58.276 "product_name": "Malloc disk", 00:23:58.276 "block_size": 512, 00:23:58.276 "num_blocks": 65536, 00:23:58.276 "uuid": "72ba3086-0f15-4bc1-a568-a30304915262", 00:23:58.276 
"assigned_rate_limits": { 00:23:58.276 "rw_ios_per_sec": 0, 00:23:58.276 "rw_mbytes_per_sec": 0, 00:23:58.276 "r_mbytes_per_sec": 0, 00:23:58.276 "w_mbytes_per_sec": 0 00:23:58.276 }, 00:23:58.276 "claimed": true, 00:23:58.276 "claim_type": "exclusive_write", 00:23:58.276 "zoned": false, 00:23:58.276 "supported_io_types": { 00:23:58.276 "read": true, 00:23:58.276 "write": true, 00:23:58.276 "unmap": true, 00:23:58.276 "flush": true, 00:23:58.276 "reset": true, 00:23:58.276 "nvme_admin": false, 00:23:58.276 "nvme_io": false, 00:23:58.276 "nvme_io_md": false, 00:23:58.276 "write_zeroes": true, 00:23:58.276 "zcopy": true, 00:23:58.276 "get_zone_info": false, 00:23:58.276 "zone_management": false, 00:23:58.276 "zone_append": false, 00:23:58.276 "compare": false, 00:23:58.276 "compare_and_write": false, 00:23:58.276 "abort": true, 00:23:58.276 "seek_hole": false, 00:23:58.276 "seek_data": false, 00:23:58.276 "copy": true, 00:23:58.276 "nvme_iov_md": false 00:23:58.276 }, 00:23:58.276 "memory_domains": [ 00:23:58.276 { 00:23:58.276 "dma_device_id": "system", 00:23:58.276 "dma_device_type": 1 00:23:58.276 }, 00:23:58.276 { 00:23:58.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.276 "dma_device_type": 2 00:23:58.276 } 00:23:58.276 ], 00:23:58.276 "driver_specific": {} 00:23:58.276 }' 00:23:58.276 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.276 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.276 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:58.276 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.535 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.535 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:58.535 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.535 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.535 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:58.535 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.535 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.798 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:58.798 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:58.798 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:58.798 21:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:58.798 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:58.798 "name": "BaseBdev4", 00:23:58.798 "aliases": [ 00:23:58.798 "1a193ccb-4910-4ee3-b1e4-594139afdf59" 00:23:58.798 ], 00:23:58.798 "product_name": "Malloc disk", 00:23:58.798 "block_size": 512, 00:23:58.798 "num_blocks": 65536, 00:23:58.798 "uuid": "1a193ccb-4910-4ee3-b1e4-594139afdf59", 00:23:58.798 "assigned_rate_limits": { 00:23:58.798 "rw_ios_per_sec": 0, 00:23:58.798 "rw_mbytes_per_sec": 0, 00:23:58.798 "r_mbytes_per_sec": 0, 00:23:58.798 
"w_mbytes_per_sec": 0 00:23:58.798 }, 00:23:58.798 "claimed": true, 00:23:58.798 "claim_type": "exclusive_write", 00:23:58.798 "zoned": false, 00:23:58.798 "supported_io_types": { 00:23:58.798 "read": true, 00:23:58.798 "write": true, 00:23:58.798 "unmap": true, 00:23:58.798 "flush": true, 00:23:58.798 "reset": true, 00:23:58.798 "nvme_admin": false, 00:23:58.798 "nvme_io": false, 00:23:58.798 "nvme_io_md": false, 00:23:58.798 "write_zeroes": true, 00:23:58.798 "zcopy": true, 00:23:58.798 "get_zone_info": false, 00:23:58.798 "zone_management": false, 00:23:58.798 "zone_append": false, 00:23:58.798 "compare": false, 00:23:58.798 "compare_and_write": false, 00:23:58.798 "abort": true, 00:23:58.798 "seek_hole": false, 00:23:58.798 "seek_data": false, 00:23:58.798 "copy": true, 00:23:58.798 "nvme_iov_md": false 00:23:58.798 }, 00:23:58.798 "memory_domains": [ 00:23:58.798 { 00:23:58.798 "dma_device_id": "system", 00:23:58.798 "dma_device_type": 1 00:23:58.798 }, 00:23:58.798 { 00:23:58.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.798 "dma_device_type": 2 00:23:58.798 } 00:23:58.798 ], 00:23:58.798 "driver_specific": {} 00:23:58.798 }' 00:23:58.798 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.798 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.060 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:59.060 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.060 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.060 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:59.060 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.060 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.320 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:59.320 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.320 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.320 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:59.320 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:59.580 [2024-07-15 21:37:32.730483] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:59.580 [2024-07-15 21:37:32.730599] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.580 [2024-07-15 21:37:32.730702] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.580 [2024-07-15 21:37:32.730799] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.580 [2024-07-15 21:37:32.730831] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 136264 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 136264 ']' 00:23:59.580 21:37:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 136264 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136264 00:23:59.580 killing process with pid 136264 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 136264' 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 136264 00:23:59.580 21:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 136264 00:23:59.580 [2024-07-15 21:37:32.770720] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:59.839 [2024-07-15 21:37:33.152158] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:01.215 ************************************ 00:24:01.215 END TEST raid_state_function_test_sb 00:24:01.215 ************************************ 00:24:01.215 21:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:01.215 00:24:01.215 real 0m31.284s 00:24:01.215 user 0m58.014s 00:24:01.215 sys 0m3.709s 00:24:01.215 21:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.215 21:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.215 21:37:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:01.215 21:37:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:24:01.215 21:37:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:01.215 21:37:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.215 21:37:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:01.215 ************************************ 00:24:01.215 START TEST raid_superblock_test 00:24:01.215 ************************************ 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:24:01.215 21:37:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=137378 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 137378 /var/tmp/spdk-raid.sock 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 137378 ']' 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:01.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.215 21:37:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.215 [2024-07-15 21:37:34.490404] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
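At this point the bdev_svc app is up and listening on /var/tmp/spdk-raid.sock, and the rest of raid_superblock_test is driven entirely through rpc.py. Boiled down from the trace that follows, the setup phase amounts to the sketch below (socket path, script location, strip size and the fixed per-bdev UUIDs are the values this particular run uses; this is a condensed sketch, not the test script itself):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # one 32 MiB malloc bdev (512 B blocks) per base device, each wrapped in a passthru bdev
  for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b malloc$i
    $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # assemble a raid0 volume with a 64 KiB strip size and an on-disk superblock (-s)
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # confirm the array came up online with all four base bdevs discovered
  $RPC bdev_raid_get_bdevs all

The same socket and rpc.py path are reused for every verification and teardown step traced below.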
00:24:01.215 [2024-07-15 21:37:34.490623] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137378 ] 00:24:01.473 [2024-07-15 21:37:34.628884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.473 [2024-07-15 21:37:34.821386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.731 [2024-07-15 21:37:35.008759] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:01.990 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:02.250 malloc1 00:24:02.250 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:02.509 [2024-07-15 21:37:35.702968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:02.509 [2024-07-15 21:37:35.703121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.509 [2024-07-15 21:37:35.703180] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:24:02.509 [2024-07-15 21:37:35.703217] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.509 [2024-07-15 21:37:35.705099] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.509 [2024-07-15 21:37:35.705170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:02.509 pt1 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:02.509 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:02.768 malloc2 00:24:02.768 21:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:02.768 [2024-07-15 21:37:36.107887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:02.768 [2024-07-15 21:37:36.108034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.768 [2024-07-15 21:37:36.108080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:24:02.768 [2024-07-15 21:37:36.108119] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.768 [2024-07-15 21:37:36.110118] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.768 [2024-07-15 21:37:36.110187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:02.768 pt2 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:02.768 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:03.026 malloc3 00:24:03.026 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:03.284 [2024-07-15 21:37:36.539555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:03.284 [2024-07-15 21:37:36.539719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.284 [2024-07-15 21:37:36.539764] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:24:03.284 [2024-07-15 21:37:36.539806] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.284 [2024-07-15 21:37:36.541716] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.284 [2024-07-15 21:37:36.541800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:03.284 pt3 00:24:03.284 
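Each passthru bdev exposes the full 65536 blocks (512 B each) of its malloc backing device. Because the array is created with the -s superblock option, the dumps below report data_offset 2048 and data_size 63488 for every base bdev, i.e. 2048 blocks at the front of each member are reserved for the superblock, and the raid0 volume spans 4 * 63488 = 253952 blocks, exactly the blockcnt logged when raid_bdev1 is configured. One way to read those numbers back from the running target (the jq filters here are illustrative, not part of the test):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq '.[0] | {num_blocks, block_size}'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .base_bdevs_list[] | "\(.name) \(.data_offset) \(.data_size)"'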
21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:03.284 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:03.284 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:24:03.284 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:24:03.284 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:03.284 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:03.284 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:03.284 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:03.284 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:03.543 malloc4 00:24:03.543 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:03.802 [2024-07-15 21:37:36.975722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:03.802 [2024-07-15 21:37:36.975906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.802 [2024-07-15 21:37:36.975950] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:03.802 [2024-07-15 21:37:36.975989] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.802 [2024-07-15 21:37:36.977854] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.802 [2024-07-15 21:37:36.977932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:03.802 pt4 00:24:03.802 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:03.802 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:03.802 21:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:04.061 [2024-07-15 21:37:37.227298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:04.061 [2024-07-15 21:37:37.228994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:04.061 [2024-07-15 21:37:37.229106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:04.061 [2024-07-15 21:37:37.229166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:04.061 [2024-07-15 21:37:37.229399] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:24:04.061 [2024-07-15 21:37:37.229437] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:04.061 [2024-07-15 21:37:37.229600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:04.061 [2024-07-15 21:37:37.229908] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:24:04.061 [2024-07-15 21:37:37.229946] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:24:04.061 [2024-07-15 21:37:37.230093] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.061 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.319 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.319 "name": "raid_bdev1", 00:24:04.319 "uuid": "bcdf0341-510a-4543-a090-0138f7f29515", 00:24:04.319 "strip_size_kb": 64, 00:24:04.319 "state": "online", 00:24:04.319 "raid_level": "raid0", 00:24:04.319 "superblock": true, 00:24:04.319 "num_base_bdevs": 4, 00:24:04.319 "num_base_bdevs_discovered": 4, 00:24:04.319 "num_base_bdevs_operational": 4, 00:24:04.319 "base_bdevs_list": [ 00:24:04.319 { 00:24:04.319 "name": "pt1", 00:24:04.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:04.319 "is_configured": true, 00:24:04.319 "data_offset": 2048, 00:24:04.319 "data_size": 63488 00:24:04.319 }, 00:24:04.319 { 00:24:04.319 "name": "pt2", 00:24:04.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.319 "is_configured": true, 00:24:04.319 "data_offset": 2048, 00:24:04.319 "data_size": 63488 00:24:04.319 }, 00:24:04.319 { 00:24:04.319 "name": "pt3", 00:24:04.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:04.319 "is_configured": true, 00:24:04.319 "data_offset": 2048, 00:24:04.319 "data_size": 63488 00:24:04.319 }, 00:24:04.319 { 00:24:04.319 "name": "pt4", 00:24:04.319 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:04.319 "is_configured": true, 00:24:04.319 "data_offset": 2048, 00:24:04.319 "data_size": 63488 00:24:04.319 } 00:24:04.319 ] 00:24:04.319 }' 00:24:04.319 21:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.319 21:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.906 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:24:04.906 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:04.906 21:37:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:04.906 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:04.906 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:04.906 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:04.906 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:04.906 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:05.167 [2024-07-15 21:37:38.281838] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:05.167 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:05.167 "name": "raid_bdev1", 00:24:05.167 "aliases": [ 00:24:05.167 "bcdf0341-510a-4543-a090-0138f7f29515" 00:24:05.167 ], 00:24:05.167 "product_name": "Raid Volume", 00:24:05.167 "block_size": 512, 00:24:05.167 "num_blocks": 253952, 00:24:05.167 "uuid": "bcdf0341-510a-4543-a090-0138f7f29515", 00:24:05.167 "assigned_rate_limits": { 00:24:05.167 "rw_ios_per_sec": 0, 00:24:05.167 "rw_mbytes_per_sec": 0, 00:24:05.167 "r_mbytes_per_sec": 0, 00:24:05.167 "w_mbytes_per_sec": 0 00:24:05.167 }, 00:24:05.167 "claimed": false, 00:24:05.167 "zoned": false, 00:24:05.167 "supported_io_types": { 00:24:05.167 "read": true, 00:24:05.167 "write": true, 00:24:05.167 "unmap": true, 00:24:05.167 "flush": true, 00:24:05.167 "reset": true, 00:24:05.167 "nvme_admin": false, 00:24:05.167 "nvme_io": false, 00:24:05.167 "nvme_io_md": false, 00:24:05.167 "write_zeroes": true, 00:24:05.167 "zcopy": false, 00:24:05.167 "get_zone_info": false, 00:24:05.167 "zone_management": false, 00:24:05.167 "zone_append": false, 00:24:05.167 "compare": false, 00:24:05.167 "compare_and_write": false, 00:24:05.167 "abort": false, 00:24:05.167 "seek_hole": false, 00:24:05.167 "seek_data": false, 00:24:05.167 "copy": false, 00:24:05.167 "nvme_iov_md": false 00:24:05.167 }, 00:24:05.167 "memory_domains": [ 00:24:05.167 { 00:24:05.167 "dma_device_id": "system", 00:24:05.167 "dma_device_type": 1 00:24:05.167 }, 00:24:05.167 { 00:24:05.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.167 "dma_device_type": 2 00:24:05.167 }, 00:24:05.167 { 00:24:05.167 "dma_device_id": "system", 00:24:05.167 "dma_device_type": 1 00:24:05.168 }, 00:24:05.168 { 00:24:05.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.168 "dma_device_type": 2 00:24:05.168 }, 00:24:05.168 { 00:24:05.168 "dma_device_id": "system", 00:24:05.168 "dma_device_type": 1 00:24:05.168 }, 00:24:05.168 { 00:24:05.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.168 "dma_device_type": 2 00:24:05.168 }, 00:24:05.168 { 00:24:05.168 "dma_device_id": "system", 00:24:05.168 "dma_device_type": 1 00:24:05.168 }, 00:24:05.168 { 00:24:05.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.168 "dma_device_type": 2 00:24:05.168 } 00:24:05.168 ], 00:24:05.168 "driver_specific": { 00:24:05.168 "raid": { 00:24:05.168 "uuid": "bcdf0341-510a-4543-a090-0138f7f29515", 00:24:05.168 "strip_size_kb": 64, 00:24:05.168 "state": "online", 00:24:05.168 "raid_level": "raid0", 00:24:05.168 "superblock": true, 00:24:05.168 "num_base_bdevs": 4, 00:24:05.168 "num_base_bdevs_discovered": 4, 00:24:05.168 "num_base_bdevs_operational": 4, 00:24:05.168 "base_bdevs_list": [ 00:24:05.168 { 00:24:05.168 "name": "pt1", 00:24:05.168 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:24:05.168 "is_configured": true, 00:24:05.168 "data_offset": 2048, 00:24:05.168 "data_size": 63488 00:24:05.168 }, 00:24:05.168 { 00:24:05.168 "name": "pt2", 00:24:05.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:05.168 "is_configured": true, 00:24:05.168 "data_offset": 2048, 00:24:05.168 "data_size": 63488 00:24:05.168 }, 00:24:05.168 { 00:24:05.168 "name": "pt3", 00:24:05.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:05.168 "is_configured": true, 00:24:05.168 "data_offset": 2048, 00:24:05.168 "data_size": 63488 00:24:05.168 }, 00:24:05.168 { 00:24:05.168 "name": "pt4", 00:24:05.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:05.168 "is_configured": true, 00:24:05.168 "data_offset": 2048, 00:24:05.168 "data_size": 63488 00:24:05.168 } 00:24:05.168 ] 00:24:05.168 } 00:24:05.168 } 00:24:05.168 }' 00:24:05.168 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:05.168 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:05.168 pt2 00:24:05.168 pt3 00:24:05.168 pt4' 00:24:05.168 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:05.168 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:05.168 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:05.427 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:05.427 "name": "pt1", 00:24:05.427 "aliases": [ 00:24:05.427 "00000000-0000-0000-0000-000000000001" 00:24:05.427 ], 00:24:05.427 "product_name": "passthru", 00:24:05.427 "block_size": 512, 00:24:05.427 "num_blocks": 65536, 00:24:05.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:05.427 "assigned_rate_limits": { 00:24:05.427 "rw_ios_per_sec": 0, 00:24:05.427 "rw_mbytes_per_sec": 0, 00:24:05.427 "r_mbytes_per_sec": 0, 00:24:05.427 "w_mbytes_per_sec": 0 00:24:05.427 }, 00:24:05.427 "claimed": true, 00:24:05.427 "claim_type": "exclusive_write", 00:24:05.427 "zoned": false, 00:24:05.427 "supported_io_types": { 00:24:05.427 "read": true, 00:24:05.427 "write": true, 00:24:05.427 "unmap": true, 00:24:05.427 "flush": true, 00:24:05.427 "reset": true, 00:24:05.427 "nvme_admin": false, 00:24:05.427 "nvme_io": false, 00:24:05.427 "nvme_io_md": false, 00:24:05.427 "write_zeroes": true, 00:24:05.427 "zcopy": true, 00:24:05.427 "get_zone_info": false, 00:24:05.427 "zone_management": false, 00:24:05.427 "zone_append": false, 00:24:05.427 "compare": false, 00:24:05.427 "compare_and_write": false, 00:24:05.427 "abort": true, 00:24:05.427 "seek_hole": false, 00:24:05.427 "seek_data": false, 00:24:05.427 "copy": true, 00:24:05.427 "nvme_iov_md": false 00:24:05.427 }, 00:24:05.427 "memory_domains": [ 00:24:05.427 { 00:24:05.427 "dma_device_id": "system", 00:24:05.427 "dma_device_type": 1 00:24:05.427 }, 00:24:05.427 { 00:24:05.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.427 "dma_device_type": 2 00:24:05.427 } 00:24:05.427 ], 00:24:05.427 "driver_specific": { 00:24:05.427 "passthru": { 00:24:05.427 "name": "pt1", 00:24:05.427 "base_bdev_name": "malloc1" 00:24:05.427 } 00:24:05.427 } 00:24:05.427 }' 00:24:05.427 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.427 21:37:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.427 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:05.427 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:05.427 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:05.427 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:05.427 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:05.685 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:05.685 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:05.685 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:05.685 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:05.685 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:05.685 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:05.685 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:05.685 21:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:05.944 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:05.944 "name": "pt2", 00:24:05.944 "aliases": [ 00:24:05.944 "00000000-0000-0000-0000-000000000002" 00:24:05.944 ], 00:24:05.944 "product_name": "passthru", 00:24:05.944 "block_size": 512, 00:24:05.944 "num_blocks": 65536, 00:24:05.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:05.944 "assigned_rate_limits": { 00:24:05.944 "rw_ios_per_sec": 0, 00:24:05.944 "rw_mbytes_per_sec": 0, 00:24:05.944 "r_mbytes_per_sec": 0, 00:24:05.944 "w_mbytes_per_sec": 0 00:24:05.944 }, 00:24:05.944 "claimed": true, 00:24:05.944 "claim_type": "exclusive_write", 00:24:05.944 "zoned": false, 00:24:05.944 "supported_io_types": { 00:24:05.944 "read": true, 00:24:05.944 "write": true, 00:24:05.944 "unmap": true, 00:24:05.944 "flush": true, 00:24:05.944 "reset": true, 00:24:05.944 "nvme_admin": false, 00:24:05.944 "nvme_io": false, 00:24:05.944 "nvme_io_md": false, 00:24:05.944 "write_zeroes": true, 00:24:05.944 "zcopy": true, 00:24:05.944 "get_zone_info": false, 00:24:05.944 "zone_management": false, 00:24:05.944 "zone_append": false, 00:24:05.944 "compare": false, 00:24:05.944 "compare_and_write": false, 00:24:05.944 "abort": true, 00:24:05.944 "seek_hole": false, 00:24:05.944 "seek_data": false, 00:24:05.944 "copy": true, 00:24:05.944 "nvme_iov_md": false 00:24:05.944 }, 00:24:05.944 "memory_domains": [ 00:24:05.944 { 00:24:05.944 "dma_device_id": "system", 00:24:05.944 "dma_device_type": 1 00:24:05.944 }, 00:24:05.944 { 00:24:05.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.944 "dma_device_type": 2 00:24:05.944 } 00:24:05.944 ], 00:24:05.944 "driver_specific": { 00:24:05.944 "passthru": { 00:24:05.944 "name": "pt2", 00:24:05.944 "base_bdev_name": "malloc2" 00:24:05.944 } 00:24:05.944 } 00:24:05.944 }' 00:24:05.944 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.944 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.944 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:24:05.944 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:06.203 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:06.463 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:06.463 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:06.463 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:06.463 "name": "pt3", 00:24:06.463 "aliases": [ 00:24:06.463 "00000000-0000-0000-0000-000000000003" 00:24:06.463 ], 00:24:06.463 "product_name": "passthru", 00:24:06.463 "block_size": 512, 00:24:06.463 "num_blocks": 65536, 00:24:06.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:06.463 "assigned_rate_limits": { 00:24:06.463 "rw_ios_per_sec": 0, 00:24:06.463 "rw_mbytes_per_sec": 0, 00:24:06.463 "r_mbytes_per_sec": 0, 00:24:06.463 "w_mbytes_per_sec": 0 00:24:06.463 }, 00:24:06.463 "claimed": true, 00:24:06.463 "claim_type": "exclusive_write", 00:24:06.463 "zoned": false, 00:24:06.463 "supported_io_types": { 00:24:06.463 "read": true, 00:24:06.463 "write": true, 00:24:06.463 "unmap": true, 00:24:06.463 "flush": true, 00:24:06.463 "reset": true, 00:24:06.463 "nvme_admin": false, 00:24:06.463 "nvme_io": false, 00:24:06.463 "nvme_io_md": false, 00:24:06.463 "write_zeroes": true, 00:24:06.463 "zcopy": true, 00:24:06.463 "get_zone_info": false, 00:24:06.463 "zone_management": false, 00:24:06.463 "zone_append": false, 00:24:06.463 "compare": false, 00:24:06.463 "compare_and_write": false, 00:24:06.463 "abort": true, 00:24:06.463 "seek_hole": false, 00:24:06.463 "seek_data": false, 00:24:06.463 "copy": true, 00:24:06.463 "nvme_iov_md": false 00:24:06.463 }, 00:24:06.463 "memory_domains": [ 00:24:06.463 { 00:24:06.463 "dma_device_id": "system", 00:24:06.463 "dma_device_type": 1 00:24:06.463 }, 00:24:06.463 { 00:24:06.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.463 "dma_device_type": 2 00:24:06.463 } 00:24:06.463 ], 00:24:06.463 "driver_specific": { 00:24:06.463 "passthru": { 00:24:06.463 "name": "pt3", 00:24:06.463 "base_bdev_name": "malloc3" 00:24:06.463 } 00:24:06.463 } 00:24:06.463 }' 00:24:06.463 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:06.463 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:06.721 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:06.721 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:06.721 21:37:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:06.721 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:06.721 21:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:06.721 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:06.721 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:06.721 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.978 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.978 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:06.978 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:06.978 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:06.978 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:07.238 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:07.238 "name": "pt4", 00:24:07.238 "aliases": [ 00:24:07.238 "00000000-0000-0000-0000-000000000004" 00:24:07.238 ], 00:24:07.238 "product_name": "passthru", 00:24:07.238 "block_size": 512, 00:24:07.238 "num_blocks": 65536, 00:24:07.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:07.238 "assigned_rate_limits": { 00:24:07.238 "rw_ios_per_sec": 0, 00:24:07.238 "rw_mbytes_per_sec": 0, 00:24:07.238 "r_mbytes_per_sec": 0, 00:24:07.238 "w_mbytes_per_sec": 0 00:24:07.238 }, 00:24:07.238 "claimed": true, 00:24:07.238 "claim_type": "exclusive_write", 00:24:07.238 "zoned": false, 00:24:07.238 "supported_io_types": { 00:24:07.238 "read": true, 00:24:07.238 "write": true, 00:24:07.238 "unmap": true, 00:24:07.238 "flush": true, 00:24:07.238 "reset": true, 00:24:07.238 "nvme_admin": false, 00:24:07.238 "nvme_io": false, 00:24:07.238 "nvme_io_md": false, 00:24:07.238 "write_zeroes": true, 00:24:07.238 "zcopy": true, 00:24:07.238 "get_zone_info": false, 00:24:07.238 "zone_management": false, 00:24:07.238 "zone_append": false, 00:24:07.238 "compare": false, 00:24:07.238 "compare_and_write": false, 00:24:07.238 "abort": true, 00:24:07.238 "seek_hole": false, 00:24:07.238 "seek_data": false, 00:24:07.238 "copy": true, 00:24:07.238 "nvme_iov_md": false 00:24:07.238 }, 00:24:07.238 "memory_domains": [ 00:24:07.238 { 00:24:07.238 "dma_device_id": "system", 00:24:07.238 "dma_device_type": 1 00:24:07.238 }, 00:24:07.238 { 00:24:07.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.238 "dma_device_type": 2 00:24:07.238 } 00:24:07.238 ], 00:24:07.238 "driver_specific": { 00:24:07.238 "passthru": { 00:24:07.238 "name": "pt4", 00:24:07.238 "base_bdev_name": "malloc4" 00:24:07.238 } 00:24:07.238 } 00:24:07.238 }' 00:24:07.238 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.238 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.238 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:07.238 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.238 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:07.497 21:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:24:07.756 [2024-07-15 21:37:41.017318] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:07.756 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=bcdf0341-510a-4543-a090-0138f7f29515 00:24:07.756 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z bcdf0341-510a-4543-a090-0138f7f29515 ']' 00:24:07.756 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:08.015 [2024-07-15 21:37:41.212629] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:08.015 [2024-07-15 21:37:41.212770] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:08.015 [2024-07-15 21:37:41.212895] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:08.015 [2024-07-15 21:37:41.212984] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:08.015 [2024-07-15 21:37:41.213013] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:24:08.015 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.015 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:24:08.274 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:24:08.274 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:24:08.274 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:08.274 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:08.274 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:08.274 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:08.534 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:08.534 21:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:08.792 21:37:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:08.792 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:09.062 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:09.062 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:09.062 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:09.062 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:09.062 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:09.062 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:09.062 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:09.063 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.063 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:09.063 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.063 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:09.063 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.063 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:09.063 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:09.063 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:09.321 [2024-07-15 21:37:42.550421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:09.321 [2024-07-15 21:37:42.552702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:09.321 [2024-07-15 21:37:42.552809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:09.321 [2024-07-15 21:37:42.552873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:09.321 [2024-07-15 21:37:42.552951] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:09.321 [2024-07-15 21:37:42.553062] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:09.321 [2024-07-15 21:37:42.553119] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:24:09.321 [2024-07-15 21:37:42.553180] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:09.321 [2024-07-15 21:37:42.553222] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:09.321 [2024-07-15 21:37:42.553251] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:24:09.321 request: 00:24:09.321 { 00:24:09.321 "name": "raid_bdev1", 00:24:09.321 "raid_level": "raid0", 00:24:09.321 "base_bdevs": [ 00:24:09.321 "malloc1", 00:24:09.321 "malloc2", 00:24:09.321 "malloc3", 00:24:09.321 "malloc4" 00:24:09.321 ], 00:24:09.321 "strip_size_kb": 64, 00:24:09.321 "superblock": false, 00:24:09.321 "method": "bdev_raid_create", 00:24:09.321 "req_id": 1 00:24:09.321 } 00:24:09.321 Got JSON-RPC error response 00:24:09.321 response: 00:24:09.321 { 00:24:09.321 "code": -17, 00:24:09.321 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:09.321 } 00:24:09.321 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:09.321 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:09.321 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:09.321 21:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:09.321 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.321 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:24:09.579 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:24:09.579 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:24:09.579 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:09.837 [2024-07-15 21:37:42.981569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:09.837 [2024-07-15 21:37:42.981771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.837 [2024-07-15 21:37:42.981818] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:09.837 [2024-07-15 21:37:42.981882] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.837 [2024-07-15 21:37:42.984538] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.837 [2024-07-15 21:37:42.984627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:09.837 [2024-07-15 21:37:42.984779] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:09.837 [2024-07-15 21:37:42.984896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:09.837 pt1 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:09.837 21:37:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.837 21:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.837 21:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.837 "name": "raid_bdev1", 00:24:09.837 "uuid": "bcdf0341-510a-4543-a090-0138f7f29515", 00:24:09.837 "strip_size_kb": 64, 00:24:09.837 "state": "configuring", 00:24:09.837 "raid_level": "raid0", 00:24:09.837 "superblock": true, 00:24:09.837 "num_base_bdevs": 4, 00:24:09.837 "num_base_bdevs_discovered": 1, 00:24:09.837 "num_base_bdevs_operational": 4, 00:24:09.837 "base_bdevs_list": [ 00:24:09.837 { 00:24:09.837 "name": "pt1", 00:24:09.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:09.837 "is_configured": true, 00:24:09.837 "data_offset": 2048, 00:24:09.837 "data_size": 63488 00:24:09.837 }, 00:24:09.837 { 00:24:09.837 "name": null, 00:24:09.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:09.837 "is_configured": false, 00:24:09.837 "data_offset": 2048, 00:24:09.837 "data_size": 63488 00:24:09.837 }, 00:24:09.837 { 00:24:09.837 "name": null, 00:24:09.837 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:09.837 "is_configured": false, 00:24:09.837 "data_offset": 2048, 00:24:09.837 "data_size": 63488 00:24:09.837 }, 00:24:09.837 { 00:24:09.837 "name": null, 00:24:09.837 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:09.837 "is_configured": false, 00:24:09.837 "data_offset": 2048, 00:24:09.837 "data_size": 63488 00:24:09.837 } 00:24:09.837 ] 00:24:09.837 }' 00:24:09.837 21:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.837 21:37:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.779 21:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:10.779 21:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:10.779 [2024-07-15 21:37:44.011846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:10.779 [2024-07-15 21:37:44.012046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.779 [2024-07-15 21:37:44.012114] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:10.779 [2024-07-15 21:37:44.012185] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.779 [2024-07-15 21:37:44.012746] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:24:10.779 [2024-07-15 21:37:44.012816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:10.779 [2024-07-15 21:37:44.012983] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:10.779 [2024-07-15 21:37:44.013039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:10.779 pt2 00:24:10.779 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:11.037 [2024-07-15 21:37:44.227554] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.037 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.296 21:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:11.296 "name": "raid_bdev1", 00:24:11.296 "uuid": "bcdf0341-510a-4543-a090-0138f7f29515", 00:24:11.296 "strip_size_kb": 64, 00:24:11.296 "state": "configuring", 00:24:11.296 "raid_level": "raid0", 00:24:11.296 "superblock": true, 00:24:11.296 "num_base_bdevs": 4, 00:24:11.296 "num_base_bdevs_discovered": 1, 00:24:11.296 "num_base_bdevs_operational": 4, 00:24:11.296 "base_bdevs_list": [ 00:24:11.296 { 00:24:11.296 "name": "pt1", 00:24:11.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:11.296 "is_configured": true, 00:24:11.296 "data_offset": 2048, 00:24:11.296 "data_size": 63488 00:24:11.296 }, 00:24:11.296 { 00:24:11.296 "name": null, 00:24:11.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:11.296 "is_configured": false, 00:24:11.296 "data_offset": 2048, 00:24:11.296 "data_size": 63488 00:24:11.296 }, 00:24:11.296 { 00:24:11.296 "name": null, 00:24:11.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:11.296 "is_configured": false, 00:24:11.296 "data_offset": 2048, 00:24:11.296 "data_size": 63488 00:24:11.296 }, 00:24:11.296 { 00:24:11.296 "name": null, 00:24:11.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:11.296 "is_configured": false, 00:24:11.296 "data_offset": 2048, 00:24:11.296 "data_size": 63488 00:24:11.296 } 00:24:11.296 ] 00:24:11.296 }' 00:24:11.296 21:37:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:11.296 21:37:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.867 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:24:11.867 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:11.867 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:12.133 [2024-07-15 21:37:45.281801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:12.133 [2024-07-15 21:37:45.282005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.133 [2024-07-15 21:37:45.282060] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:12.133 [2024-07-15 21:37:45.282134] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.133 [2024-07-15 21:37:45.282709] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.133 [2024-07-15 21:37:45.282792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:12.133 [2024-07-15 21:37:45.282931] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:12.133 [2024-07-15 21:37:45.282983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:12.133 pt2 00:24:12.133 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:12.133 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:12.133 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:12.133 [2024-07-15 21:37:45.481471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:12.133 [2024-07-15 21:37:45.481666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.133 [2024-07-15 21:37:45.481714] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:12.133 [2024-07-15 21:37:45.481784] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.133 [2024-07-15 21:37:45.482324] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.133 [2024-07-15 21:37:45.482391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:12.133 [2024-07-15 21:37:45.482546] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:12.133 [2024-07-15 21:37:45.482618] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:12.133 pt3 00:24:12.133 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:12.133 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:12.133 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:12.393 [2024-07-15 21:37:45.653115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:24:12.393 [2024-07-15 21:37:45.653279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.393 [2024-07-15 21:37:45.653354] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:12.393 [2024-07-15 21:37:45.653446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.393 [2024-07-15 21:37:45.654007] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.393 [2024-07-15 21:37:45.654079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:12.393 [2024-07-15 21:37:45.654222] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:12.393 [2024-07-15 21:37:45.654278] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:12.393 [2024-07-15 21:37:45.654463] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:24:12.393 [2024-07-15 21:37:45.654495] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:12.393 [2024-07-15 21:37:45.654666] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:12.393 [2024-07-15 21:37:45.655059] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:24:12.393 [2024-07-15 21:37:45.655104] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:24:12.393 [2024-07-15 21:37:45.655283] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.393 pt4 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.393 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.652 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.652 "name": "raid_bdev1", 00:24:12.652 "uuid": "bcdf0341-510a-4543-a090-0138f7f29515", 00:24:12.652 "strip_size_kb": 64, 00:24:12.652 "state": "online", 00:24:12.652 
"raid_level": "raid0", 00:24:12.652 "superblock": true, 00:24:12.652 "num_base_bdevs": 4, 00:24:12.652 "num_base_bdevs_discovered": 4, 00:24:12.652 "num_base_bdevs_operational": 4, 00:24:12.652 "base_bdevs_list": [ 00:24:12.652 { 00:24:12.652 "name": "pt1", 00:24:12.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:12.652 "is_configured": true, 00:24:12.652 "data_offset": 2048, 00:24:12.652 "data_size": 63488 00:24:12.652 }, 00:24:12.652 { 00:24:12.652 "name": "pt2", 00:24:12.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:12.652 "is_configured": true, 00:24:12.652 "data_offset": 2048, 00:24:12.652 "data_size": 63488 00:24:12.652 }, 00:24:12.652 { 00:24:12.652 "name": "pt3", 00:24:12.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:12.652 "is_configured": true, 00:24:12.652 "data_offset": 2048, 00:24:12.652 "data_size": 63488 00:24:12.652 }, 00:24:12.652 { 00:24:12.652 "name": "pt4", 00:24:12.652 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:12.652 "is_configured": true, 00:24:12.652 "data_offset": 2048, 00:24:12.652 "data_size": 63488 00:24:12.652 } 00:24:12.652 ] 00:24:12.652 }' 00:24:12.652 21:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.652 21:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.221 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:24:13.221 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:13.221 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:13.221 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:13.221 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:13.221 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:13.221 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:13.221 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:13.480 [2024-07-15 21:37:46.711655] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:13.480 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:13.480 "name": "raid_bdev1", 00:24:13.480 "aliases": [ 00:24:13.480 "bcdf0341-510a-4543-a090-0138f7f29515" 00:24:13.480 ], 00:24:13.480 "product_name": "Raid Volume", 00:24:13.480 "block_size": 512, 00:24:13.480 "num_blocks": 253952, 00:24:13.480 "uuid": "bcdf0341-510a-4543-a090-0138f7f29515", 00:24:13.480 "assigned_rate_limits": { 00:24:13.480 "rw_ios_per_sec": 0, 00:24:13.480 "rw_mbytes_per_sec": 0, 00:24:13.480 "r_mbytes_per_sec": 0, 00:24:13.480 "w_mbytes_per_sec": 0 00:24:13.480 }, 00:24:13.480 "claimed": false, 00:24:13.480 "zoned": false, 00:24:13.480 "supported_io_types": { 00:24:13.480 "read": true, 00:24:13.480 "write": true, 00:24:13.480 "unmap": true, 00:24:13.480 "flush": true, 00:24:13.480 "reset": true, 00:24:13.480 "nvme_admin": false, 00:24:13.480 "nvme_io": false, 00:24:13.480 "nvme_io_md": false, 00:24:13.480 "write_zeroes": true, 00:24:13.480 "zcopy": false, 00:24:13.480 "get_zone_info": false, 00:24:13.480 "zone_management": false, 00:24:13.480 "zone_append": false, 00:24:13.480 "compare": false, 00:24:13.480 "compare_and_write": false, 
00:24:13.480 "abort": false, 00:24:13.480 "seek_hole": false, 00:24:13.480 "seek_data": false, 00:24:13.480 "copy": false, 00:24:13.480 "nvme_iov_md": false 00:24:13.480 }, 00:24:13.480 "memory_domains": [ 00:24:13.480 { 00:24:13.480 "dma_device_id": "system", 00:24:13.480 "dma_device_type": 1 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.480 "dma_device_type": 2 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "dma_device_id": "system", 00:24:13.480 "dma_device_type": 1 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.480 "dma_device_type": 2 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "dma_device_id": "system", 00:24:13.480 "dma_device_type": 1 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.480 "dma_device_type": 2 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "dma_device_id": "system", 00:24:13.480 "dma_device_type": 1 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.480 "dma_device_type": 2 00:24:13.480 } 00:24:13.480 ], 00:24:13.480 "driver_specific": { 00:24:13.480 "raid": { 00:24:13.480 "uuid": "bcdf0341-510a-4543-a090-0138f7f29515", 00:24:13.480 "strip_size_kb": 64, 00:24:13.480 "state": "online", 00:24:13.480 "raid_level": "raid0", 00:24:13.480 "superblock": true, 00:24:13.480 "num_base_bdevs": 4, 00:24:13.480 "num_base_bdevs_discovered": 4, 00:24:13.480 "num_base_bdevs_operational": 4, 00:24:13.480 "base_bdevs_list": [ 00:24:13.480 { 00:24:13.480 "name": "pt1", 00:24:13.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:13.480 "is_configured": true, 00:24:13.480 "data_offset": 2048, 00:24:13.480 "data_size": 63488 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "name": "pt2", 00:24:13.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:13.480 "is_configured": true, 00:24:13.480 "data_offset": 2048, 00:24:13.480 "data_size": 63488 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "name": "pt3", 00:24:13.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:13.480 "is_configured": true, 00:24:13.480 "data_offset": 2048, 00:24:13.480 "data_size": 63488 00:24:13.480 }, 00:24:13.480 { 00:24:13.480 "name": "pt4", 00:24:13.481 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:13.481 "is_configured": true, 00:24:13.481 "data_offset": 2048, 00:24:13.481 "data_size": 63488 00:24:13.481 } 00:24:13.481 ] 00:24:13.481 } 00:24:13.481 } 00:24:13.481 }' 00:24:13.481 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:13.481 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:13.481 pt2 00:24:13.481 pt3 00:24:13.481 pt4' 00:24:13.481 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:13.481 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:13.481 21:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:13.740 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:13.740 "name": "pt1", 00:24:13.740 "aliases": [ 00:24:13.740 "00000000-0000-0000-0000-000000000001" 00:24:13.740 ], 00:24:13.740 "product_name": "passthru", 00:24:13.740 "block_size": 512, 00:24:13.740 "num_blocks": 65536, 00:24:13.740 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:24:13.740 "assigned_rate_limits": { 00:24:13.740 "rw_ios_per_sec": 0, 00:24:13.740 "rw_mbytes_per_sec": 0, 00:24:13.740 "r_mbytes_per_sec": 0, 00:24:13.740 "w_mbytes_per_sec": 0 00:24:13.740 }, 00:24:13.740 "claimed": true, 00:24:13.740 "claim_type": "exclusive_write", 00:24:13.740 "zoned": false, 00:24:13.740 "supported_io_types": { 00:24:13.740 "read": true, 00:24:13.740 "write": true, 00:24:13.740 "unmap": true, 00:24:13.740 "flush": true, 00:24:13.740 "reset": true, 00:24:13.740 "nvme_admin": false, 00:24:13.740 "nvme_io": false, 00:24:13.740 "nvme_io_md": false, 00:24:13.740 "write_zeroes": true, 00:24:13.740 "zcopy": true, 00:24:13.740 "get_zone_info": false, 00:24:13.740 "zone_management": false, 00:24:13.740 "zone_append": false, 00:24:13.740 "compare": false, 00:24:13.740 "compare_and_write": false, 00:24:13.740 "abort": true, 00:24:13.740 "seek_hole": false, 00:24:13.740 "seek_data": false, 00:24:13.740 "copy": true, 00:24:13.740 "nvme_iov_md": false 00:24:13.740 }, 00:24:13.740 "memory_domains": [ 00:24:13.740 { 00:24:13.740 "dma_device_id": "system", 00:24:13.740 "dma_device_type": 1 00:24:13.740 }, 00:24:13.740 { 00:24:13.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.740 "dma_device_type": 2 00:24:13.740 } 00:24:13.740 ], 00:24:13.740 "driver_specific": { 00:24:13.740 "passthru": { 00:24:13.740 "name": "pt1", 00:24:13.740 "base_bdev_name": "malloc1" 00:24:13.740 } 00:24:13.740 } 00:24:13.740 }' 00:24:13.740 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:13.740 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:14.000 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:14.000 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:14.000 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:14.000 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:14.000 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:14.000 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:14.286 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:14.286 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:14.286 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:14.286 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:14.286 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:14.286 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:14.286 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:14.544 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:14.544 "name": "pt2", 00:24:14.544 "aliases": [ 00:24:14.544 "00000000-0000-0000-0000-000000000002" 00:24:14.544 ], 00:24:14.544 "product_name": "passthru", 00:24:14.544 "block_size": 512, 00:24:14.544 "num_blocks": 65536, 00:24:14.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:14.544 "assigned_rate_limits": { 00:24:14.544 "rw_ios_per_sec": 0, 00:24:14.544 "rw_mbytes_per_sec": 0, 
00:24:14.544 "r_mbytes_per_sec": 0, 00:24:14.544 "w_mbytes_per_sec": 0 00:24:14.544 }, 00:24:14.544 "claimed": true, 00:24:14.544 "claim_type": "exclusive_write", 00:24:14.544 "zoned": false, 00:24:14.544 "supported_io_types": { 00:24:14.544 "read": true, 00:24:14.544 "write": true, 00:24:14.544 "unmap": true, 00:24:14.544 "flush": true, 00:24:14.544 "reset": true, 00:24:14.544 "nvme_admin": false, 00:24:14.544 "nvme_io": false, 00:24:14.544 "nvme_io_md": false, 00:24:14.544 "write_zeroes": true, 00:24:14.544 "zcopy": true, 00:24:14.544 "get_zone_info": false, 00:24:14.544 "zone_management": false, 00:24:14.544 "zone_append": false, 00:24:14.544 "compare": false, 00:24:14.544 "compare_and_write": false, 00:24:14.544 "abort": true, 00:24:14.545 "seek_hole": false, 00:24:14.545 "seek_data": false, 00:24:14.545 "copy": true, 00:24:14.545 "nvme_iov_md": false 00:24:14.545 }, 00:24:14.545 "memory_domains": [ 00:24:14.545 { 00:24:14.545 "dma_device_id": "system", 00:24:14.545 "dma_device_type": 1 00:24:14.545 }, 00:24:14.545 { 00:24:14.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.545 "dma_device_type": 2 00:24:14.545 } 00:24:14.545 ], 00:24:14.545 "driver_specific": { 00:24:14.545 "passthru": { 00:24:14.545 "name": "pt2", 00:24:14.545 "base_bdev_name": "malloc2" 00:24:14.545 } 00:24:14.545 } 00:24:14.545 }' 00:24:14.545 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:14.545 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:14.545 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:14.545 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:14.545 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:14.802 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:14.802 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:14.802 21:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:14.802 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:14.802 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:14.802 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:14.802 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:14.802 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:14.802 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:14.802 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:15.120 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:15.120 "name": "pt3", 00:24:15.120 "aliases": [ 00:24:15.120 "00000000-0000-0000-0000-000000000003" 00:24:15.120 ], 00:24:15.120 "product_name": "passthru", 00:24:15.120 "block_size": 512, 00:24:15.120 "num_blocks": 65536, 00:24:15.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:15.120 "assigned_rate_limits": { 00:24:15.120 "rw_ios_per_sec": 0, 00:24:15.120 "rw_mbytes_per_sec": 0, 00:24:15.120 "r_mbytes_per_sec": 0, 00:24:15.120 "w_mbytes_per_sec": 0 00:24:15.120 }, 00:24:15.120 "claimed": true, 00:24:15.120 "claim_type": 
"exclusive_write", 00:24:15.120 "zoned": false, 00:24:15.120 "supported_io_types": { 00:24:15.120 "read": true, 00:24:15.120 "write": true, 00:24:15.120 "unmap": true, 00:24:15.120 "flush": true, 00:24:15.120 "reset": true, 00:24:15.120 "nvme_admin": false, 00:24:15.120 "nvme_io": false, 00:24:15.120 "nvme_io_md": false, 00:24:15.120 "write_zeroes": true, 00:24:15.120 "zcopy": true, 00:24:15.120 "get_zone_info": false, 00:24:15.120 "zone_management": false, 00:24:15.120 "zone_append": false, 00:24:15.120 "compare": false, 00:24:15.120 "compare_and_write": false, 00:24:15.120 "abort": true, 00:24:15.120 "seek_hole": false, 00:24:15.120 "seek_data": false, 00:24:15.120 "copy": true, 00:24:15.120 "nvme_iov_md": false 00:24:15.120 }, 00:24:15.120 "memory_domains": [ 00:24:15.120 { 00:24:15.120 "dma_device_id": "system", 00:24:15.120 "dma_device_type": 1 00:24:15.120 }, 00:24:15.120 { 00:24:15.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.120 "dma_device_type": 2 00:24:15.120 } 00:24:15.120 ], 00:24:15.120 "driver_specific": { 00:24:15.120 "passthru": { 00:24:15.120 "name": "pt3", 00:24:15.120 "base_bdev_name": "malloc3" 00:24:15.120 } 00:24:15.120 } 00:24:15.120 }' 00:24:15.120 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:15.120 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:15.120 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:15.120 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:15.411 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:15.411 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:15.411 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:15.411 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:15.411 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:15.411 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:15.411 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:15.670 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:15.670 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:15.670 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:15.670 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:15.670 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:15.670 "name": "pt4", 00:24:15.670 "aliases": [ 00:24:15.670 "00000000-0000-0000-0000-000000000004" 00:24:15.670 ], 00:24:15.670 "product_name": "passthru", 00:24:15.670 "block_size": 512, 00:24:15.670 "num_blocks": 65536, 00:24:15.670 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:15.670 "assigned_rate_limits": { 00:24:15.670 "rw_ios_per_sec": 0, 00:24:15.670 "rw_mbytes_per_sec": 0, 00:24:15.670 "r_mbytes_per_sec": 0, 00:24:15.670 "w_mbytes_per_sec": 0 00:24:15.670 }, 00:24:15.670 "claimed": true, 00:24:15.670 "claim_type": "exclusive_write", 00:24:15.670 "zoned": false, 00:24:15.670 "supported_io_types": { 00:24:15.670 "read": true, 00:24:15.670 "write": true, 00:24:15.670 
"unmap": true, 00:24:15.670 "flush": true, 00:24:15.670 "reset": true, 00:24:15.670 "nvme_admin": false, 00:24:15.670 "nvme_io": false, 00:24:15.670 "nvme_io_md": false, 00:24:15.670 "write_zeroes": true, 00:24:15.670 "zcopy": true, 00:24:15.670 "get_zone_info": false, 00:24:15.670 "zone_management": false, 00:24:15.670 "zone_append": false, 00:24:15.670 "compare": false, 00:24:15.670 "compare_and_write": false, 00:24:15.670 "abort": true, 00:24:15.670 "seek_hole": false, 00:24:15.670 "seek_data": false, 00:24:15.670 "copy": true, 00:24:15.670 "nvme_iov_md": false 00:24:15.670 }, 00:24:15.670 "memory_domains": [ 00:24:15.670 { 00:24:15.670 "dma_device_id": "system", 00:24:15.670 "dma_device_type": 1 00:24:15.670 }, 00:24:15.670 { 00:24:15.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.670 "dma_device_type": 2 00:24:15.670 } 00:24:15.670 ], 00:24:15.670 "driver_specific": { 00:24:15.670 "passthru": { 00:24:15.670 "name": "pt4", 00:24:15.670 "base_bdev_name": "malloc4" 00:24:15.670 } 00:24:15.670 } 00:24:15.670 }' 00:24:15.670 21:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:15.929 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:15.929 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:15.929 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:15.929 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:15.929 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:15.929 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:15.929 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.188 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:16.188 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.188 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.188 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:16.188 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:16.188 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:24:16.448 [2024-07-15 21:37:49.587923] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' bcdf0341-510a-4543-a090-0138f7f29515 '!=' bcdf0341-510a-4543-a090-0138f7f29515 ']' 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 137378 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 137378 ']' 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 137378 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:24:16.448 21:37:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.448 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137378 00:24:16.449 killing process with pid 137378 00:24:16.449 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:16.449 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:16.449 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137378' 00:24:16.449 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 137378 00:24:16.449 21:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 137378 00:24:16.449 [2024-07-15 21:37:49.632882] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.449 [2024-07-15 21:37:49.632978] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.449 [2024-07-15 21:37:49.633122] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.449 [2024-07-15 21:37:49.633159] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:24:16.709 [2024-07-15 21:37:50.008741] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:18.615 ************************************ 00:24:18.615 END TEST raid_superblock_test 00:24:18.615 ************************************ 00:24:18.615 21:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:24:18.615 00:24:18.615 real 0m17.090s 00:24:18.615 user 0m30.404s 00:24:18.615 sys 0m2.082s 00:24:18.615 21:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:18.615 21:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.615 21:37:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:18.615 21:37:51 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:24:18.615 21:37:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:18.615 21:37:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.615 21:37:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:18.615 ************************************ 00:24:18.615 START TEST raid_read_error_test 00:24:18.615 ************************************ 00:24:18.615 21:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Mq4Qp6JKMx 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=137958 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 137958 /var/tmp/spdk-raid.sock 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 137958 ']' 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:18.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
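The read-error test drives bdevperf entirely over RPC: the binary is started with -z so it comes up idle on the private socket /var/tmp/spdk-raid.sock, and the script blocks until that socket answers before creating any bdevs. A minimal sketch of this launch-and-wait pattern, with rpc_get_methods assumed as the probe call (the real waitforlisten helper in autotest_common.sh may probe the socket differently):

    # start bdevperf idle (-z) on a private RPC socket; the test configures it over RPC afterwards
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    raid_pid=$!
    # poll until the app's RPC server is listening on the socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock -t 1 \
          rpc_get_methods &> /dev/null; do
        sleep 0.5
    done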
00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.616 21:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.616 [2024-07-15 21:37:51.669786] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:24:18.616 [2024-07-15 21:37:51.669985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137958 ] 00:24:18.616 [2024-07-15 21:37:51.830247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.875 [2024-07-15 21:37:52.132781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.133 [2024-07-15 21:37:52.387087] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:19.133 21:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.133 21:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:19.133 21:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:19.133 21:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:19.392 BaseBdev1_malloc 00:24:19.392 21:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:19.650 true 00:24:19.650 21:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:19.909 [2024-07-15 21:37:53.114681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:19.909 [2024-07-15 21:37:53.114908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.909 [2024-07-15 21:37:53.114989] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:19.909 [2024-07-15 21:37:53.115037] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.909 [2024-07-15 21:37:53.117854] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.909 [2024-07-15 21:37:53.117965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:19.909 BaseBdev1 00:24:19.909 21:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:19.909 21:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:20.167 BaseBdev2_malloc 00:24:20.167 21:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:20.423 true 00:24:20.423 21:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:20.679 [2024-07-15 21:37:53.795019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:24:20.679 [2024-07-15 21:37:53.795281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.679 [2024-07-15 21:37:53.795366] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:20.679 [2024-07-15 21:37:53.795415] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.679 [2024-07-15 21:37:53.798101] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.679 [2024-07-15 21:37:53.798213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:20.679 BaseBdev2 00:24:20.679 21:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:20.680 21:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:20.680 BaseBdev3_malloc 00:24:20.936 21:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:20.936 true 00:24:20.936 21:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:21.193 [2024-07-15 21:37:54.437996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:21.193 [2024-07-15 21:37:54.438219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.193 [2024-07-15 21:37:54.438283] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:21.193 [2024-07-15 21:37:54.438338] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.193 [2024-07-15 21:37:54.441071] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.193 [2024-07-15 21:37:54.441193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:21.193 BaseBdev3 00:24:21.193 21:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:21.193 21:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:21.452 BaseBdev4_malloc 00:24:21.452 21:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:21.710 true 00:24:21.710 21:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:21.710 [2024-07-15 21:37:55.067220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:21.710 [2024-07-15 21:37:55.067461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.710 [2024-07-15 21:37:55.067529] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:21.710 [2024-07-15 21:37:55.067580] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.710 [2024-07-15 21:37:55.070293] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:24:21.710 [2024-07-15 21:37:55.070388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:21.710 BaseBdev4 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:21.969 [2024-07-15 21:37:55.275001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.969 [2024-07-15 21:37:55.277307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:21.969 [2024-07-15 21:37:55.277438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:21.969 [2024-07-15 21:37:55.277521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:21.969 [2024-07-15 21:37:55.277823] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:24:21.969 [2024-07-15 21:37:55.277868] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:21.969 [2024-07-15 21:37:55.278042] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:21.969 [2024-07-15 21:37:55.278471] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:24:21.969 [2024-07-15 21:37:55.278515] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:24:21.969 [2024-07-15 21:37:55.278734] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.969 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.227 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.227 "name": "raid_bdev1", 00:24:22.227 "uuid": "6de77a08-d8b6-4475-bbf9-ab8ff2f51b5e", 00:24:22.227 "strip_size_kb": 64, 00:24:22.227 "state": "online", 00:24:22.227 "raid_level": "raid0", 00:24:22.227 "superblock": true, 00:24:22.227 "num_base_bdevs": 4, 00:24:22.228 "num_base_bdevs_discovered": 4, 00:24:22.228 
"num_base_bdevs_operational": 4, 00:24:22.228 "base_bdevs_list": [ 00:24:22.228 { 00:24:22.228 "name": "BaseBdev1", 00:24:22.228 "uuid": "e1e02471-ab3e-500d-b47a-8fe00f358a8f", 00:24:22.228 "is_configured": true, 00:24:22.228 "data_offset": 2048, 00:24:22.228 "data_size": 63488 00:24:22.228 }, 00:24:22.228 { 00:24:22.228 "name": "BaseBdev2", 00:24:22.228 "uuid": "d515dfb6-94c6-5632-95fa-d7414bd97b3b", 00:24:22.228 "is_configured": true, 00:24:22.228 "data_offset": 2048, 00:24:22.228 "data_size": 63488 00:24:22.228 }, 00:24:22.228 { 00:24:22.228 "name": "BaseBdev3", 00:24:22.228 "uuid": "8bb678ee-dfdd-5b23-9c62-15d792c8dde6", 00:24:22.228 "is_configured": true, 00:24:22.228 "data_offset": 2048, 00:24:22.228 "data_size": 63488 00:24:22.228 }, 00:24:22.228 { 00:24:22.228 "name": "BaseBdev4", 00:24:22.228 "uuid": "ab02d91c-e269-51c2-9d83-d29af411f07d", 00:24:22.228 "is_configured": true, 00:24:22.228 "data_offset": 2048, 00:24:22.228 "data_size": 63488 00:24:22.228 } 00:24:22.228 ] 00:24:22.228 }' 00:24:22.228 21:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.228 21:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.795 21:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:22.796 21:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:22.796 [2024-07-15 21:37:56.163310] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:23.732 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.992 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:24.252 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.252 "name": "raid_bdev1", 00:24:24.252 "uuid": "6de77a08-d8b6-4475-bbf9-ab8ff2f51b5e", 00:24:24.252 "strip_size_kb": 64, 00:24:24.252 "state": "online", 00:24:24.252 "raid_level": "raid0", 00:24:24.252 "superblock": true, 00:24:24.252 "num_base_bdevs": 4, 00:24:24.252 "num_base_bdevs_discovered": 4, 00:24:24.252 "num_base_bdevs_operational": 4, 00:24:24.252 "base_bdevs_list": [ 00:24:24.252 { 00:24:24.252 "name": "BaseBdev1", 00:24:24.252 "uuid": "e1e02471-ab3e-500d-b47a-8fe00f358a8f", 00:24:24.252 "is_configured": true, 00:24:24.252 "data_offset": 2048, 00:24:24.252 "data_size": 63488 00:24:24.252 }, 00:24:24.252 { 00:24:24.252 "name": "BaseBdev2", 00:24:24.252 "uuid": "d515dfb6-94c6-5632-95fa-d7414bd97b3b", 00:24:24.252 "is_configured": true, 00:24:24.252 "data_offset": 2048, 00:24:24.252 "data_size": 63488 00:24:24.252 }, 00:24:24.252 { 00:24:24.252 "name": "BaseBdev3", 00:24:24.252 "uuid": "8bb678ee-dfdd-5b23-9c62-15d792c8dde6", 00:24:24.252 "is_configured": true, 00:24:24.252 "data_offset": 2048, 00:24:24.252 "data_size": 63488 00:24:24.252 }, 00:24:24.252 { 00:24:24.252 "name": "BaseBdev4", 00:24:24.252 "uuid": "ab02d91c-e269-51c2-9d83-d29af411f07d", 00:24:24.252 "is_configured": true, 00:24:24.252 "data_offset": 2048, 00:24:24.252 "data_size": 63488 00:24:24.252 } 00:24:24.252 ] 00:24:24.252 }' 00:24:24.252 21:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.252 21:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.822 21:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:25.082 [2024-07-15 21:37:58.311619] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:25.082 [2024-07-15 21:37:58.311774] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:25.082 [2024-07-15 21:37:58.314566] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:25.082 [2024-07-15 21:37:58.314649] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.082 [2024-07-15 21:37:58.314704] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:25.082 [2024-07-15 21:37:58.314728] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:24:25.082 0 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 137958 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 137958 ']' 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 137958 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137958 00:24:25.082 killing process with pid 137958 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137958' 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 137958 00:24:25.082 21:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 137958 00:24:25.082 [2024-07-15 21:37:58.354790] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:25.650 [2024-07-15 21:37:58.744041] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Mq4Qp6JKMx 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:27.030 ************************************ 00:24:27.030 END TEST raid_read_error_test 00:24:27.030 ************************************ 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:24:27.030 00:24:27.030 real 0m8.621s 00:24:27.030 user 0m12.716s 00:24:27.030 sys 0m1.057s 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:27.030 21:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.030 21:38:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:27.030 21:38:00 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:24:27.030 21:38:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:27.030 21:38:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.030 21:38:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:27.030 ************************************ 00:24:27.030 START TEST raid_write_error_test 00:24:27.030 ************************************ 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:27.030 21:38:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:27.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.xelBwpd08U 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=138192 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 138192 /var/tmp/spdk-raid.sock 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 138192 ']' 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.030 21:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.030 [2024-07-15 21:38:00.361928] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
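The write-error variant builds the same four-disk stack as the read test: each base device is a malloc bdev wrapped by an error bdev (so failures can be injected on demand) with a passthru bdev on top, and the four passthru bdevs are then assembled into a raid0 volume with an on-disk superblock. A condensed sketch using the same RPCs the trace issues one by one (the loop and the $rpc shorthand are illustrative; the injection itself happens later in the test and targets writes rather than reads):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc            # 32 MiB backing store, 512 B blocks
        $rpc bdev_error_create BaseBdev${i}_malloc                       # exposes EE_BaseBdev${i}_malloc
        $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    $rpc bdev_raid_create -z 64 -r raid0 -n raid_bdev1 -s \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'                     # 64 KiB strip, superblock enabled
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure       # make I/O through BaseBdev1 fail on writes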
00:24:27.030 [2024-07-15 21:38:00.362534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138192 ] 00:24:27.289 [2024-07-15 21:38:00.522417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.555 [2024-07-15 21:38:00.713145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.555 [2024-07-15 21:38:00.896865] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.825 21:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.825 21:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:27.825 21:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:27.825 21:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:28.083 BaseBdev1_malloc 00:24:28.083 21:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:28.341 true 00:24:28.341 21:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:28.600 [2024-07-15 21:38:01.762608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:28.600 [2024-07-15 21:38:01.762773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:28.600 [2024-07-15 21:38:01.762842] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:28.600 [2024-07-15 21:38:01.762893] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:28.600 [2024-07-15 21:38:01.764921] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:28.600 [2024-07-15 21:38:01.765000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:28.600 BaseBdev1 00:24:28.600 21:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:28.600 21:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:28.858 BaseBdev2_malloc 00:24:28.858 21:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:28.858 true 00:24:28.858 21:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:29.117 [2024-07-15 21:38:02.404120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:29.117 [2024-07-15 21:38:02.404281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.117 [2024-07-15 21:38:02.404334] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:29.117 [2024-07-15 
21:38:02.404376] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.117 [2024-07-15 21:38:02.406447] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.117 [2024-07-15 21:38:02.406539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:29.117 BaseBdev2 00:24:29.117 21:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:29.117 21:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:29.375 BaseBdev3_malloc 00:24:29.375 21:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:29.635 true 00:24:29.635 21:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:29.635 [2024-07-15 21:38:03.003102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:29.635 [2024-07-15 21:38:03.003284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.635 [2024-07-15 21:38:03.003343] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:29.635 [2024-07-15 21:38:03.003397] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.635 [2024-07-15 21:38:03.005466] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.635 [2024-07-15 21:38:03.005551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:29.894 BaseBdev3 00:24:29.894 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:29.894 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:29.894 BaseBdev4_malloc 00:24:29.894 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:30.153 true 00:24:30.153 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:30.412 [2024-07-15 21:38:03.629373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:30.412 [2024-07-15 21:38:03.629548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.412 [2024-07-15 21:38:03.629602] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:30.412 [2024-07-15 21:38:03.629671] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.412 [2024-07-15 21:38:03.631929] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.412 [2024-07-15 21:38:03.632020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:30.412 BaseBdev4 00:24:30.412 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:30.670 [2024-07-15 21:38:03.817072] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:30.670 [2024-07-15 21:38:03.819004] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:30.670 [2024-07-15 21:38:03.819150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:30.670 [2024-07-15 21:38:03.819230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:30.670 [2024-07-15 21:38:03.819478] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:24:30.670 [2024-07-15 21:38:03.819521] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:30.670 [2024-07-15 21:38:03.819690] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:30.670 [2024-07-15 21:38:03.820047] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:24:30.670 [2024-07-15 21:38:03.820095] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:24:30.670 [2024-07-15 21:38:03.820261] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.670 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:30.670 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:30.670 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:30.670 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.671 "name": "raid_bdev1", 00:24:30.671 "uuid": "dd2431c3-2ecd-4e83-971d-9e8fe72477ee", 00:24:30.671 "strip_size_kb": 64, 00:24:30.671 "state": "online", 00:24:30.671 "raid_level": "raid0", 00:24:30.671 "superblock": true, 00:24:30.671 "num_base_bdevs": 4, 00:24:30.671 "num_base_bdevs_discovered": 4, 00:24:30.671 "num_base_bdevs_operational": 4, 00:24:30.671 "base_bdevs_list": [ 00:24:30.671 { 00:24:30.671 "name": "BaseBdev1", 00:24:30.671 "uuid": "374e6b53-45d9-5270-a702-d2287287cf9e", 00:24:30.671 "is_configured": true, 00:24:30.671 "data_offset": 2048, 00:24:30.671 "data_size": 63488 00:24:30.671 }, 00:24:30.671 { 
00:24:30.671 "name": "BaseBdev2", 00:24:30.671 "uuid": "eb743c2a-6d9f-57a7-aaf8-fa5aa1288dda", 00:24:30.671 "is_configured": true, 00:24:30.671 "data_offset": 2048, 00:24:30.671 "data_size": 63488 00:24:30.671 }, 00:24:30.671 { 00:24:30.671 "name": "BaseBdev3", 00:24:30.671 "uuid": "f28d0ead-9b51-54e1-a045-45d386f0fced", 00:24:30.671 "is_configured": true, 00:24:30.671 "data_offset": 2048, 00:24:30.671 "data_size": 63488 00:24:30.671 }, 00:24:30.671 { 00:24:30.671 "name": "BaseBdev4", 00:24:30.671 "uuid": "d91618b4-0a28-5a66-90bb-1b064f1da7f1", 00:24:30.671 "is_configured": true, 00:24:30.671 "data_offset": 2048, 00:24:30.671 "data_size": 63488 00:24:30.671 } 00:24:30.671 ] 00:24:30.671 }' 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.671 21:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.239 21:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:31.239 21:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:31.500 [2024-07-15 21:38:04.676778] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.437 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.695 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:32.695 "name": "raid_bdev1", 00:24:32.695 "uuid": "dd2431c3-2ecd-4e83-971d-9e8fe72477ee", 00:24:32.695 "strip_size_kb": 64, 00:24:32.695 "state": "online", 00:24:32.695 
"raid_level": "raid0", 00:24:32.695 "superblock": true, 00:24:32.695 "num_base_bdevs": 4, 00:24:32.695 "num_base_bdevs_discovered": 4, 00:24:32.695 "num_base_bdevs_operational": 4, 00:24:32.695 "base_bdevs_list": [ 00:24:32.695 { 00:24:32.695 "name": "BaseBdev1", 00:24:32.695 "uuid": "374e6b53-45d9-5270-a702-d2287287cf9e", 00:24:32.695 "is_configured": true, 00:24:32.695 "data_offset": 2048, 00:24:32.695 "data_size": 63488 00:24:32.695 }, 00:24:32.695 { 00:24:32.695 "name": "BaseBdev2", 00:24:32.695 "uuid": "eb743c2a-6d9f-57a7-aaf8-fa5aa1288dda", 00:24:32.695 "is_configured": true, 00:24:32.695 "data_offset": 2048, 00:24:32.695 "data_size": 63488 00:24:32.695 }, 00:24:32.695 { 00:24:32.695 "name": "BaseBdev3", 00:24:32.695 "uuid": "f28d0ead-9b51-54e1-a045-45d386f0fced", 00:24:32.695 "is_configured": true, 00:24:32.695 "data_offset": 2048, 00:24:32.695 "data_size": 63488 00:24:32.695 }, 00:24:32.695 { 00:24:32.695 "name": "BaseBdev4", 00:24:32.695 "uuid": "d91618b4-0a28-5a66-90bb-1b064f1da7f1", 00:24:32.695 "is_configured": true, 00:24:32.695 "data_offset": 2048, 00:24:32.695 "data_size": 63488 00:24:32.695 } 00:24:32.695 ] 00:24:32.695 }' 00:24:32.695 21:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:32.695 21:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:33.629 [2024-07-15 21:38:06.825158] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.629 [2024-07-15 21:38:06.825272] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.629 [2024-07-15 21:38:06.827781] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.629 [2024-07-15 21:38:06.827872] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.629 [2024-07-15 21:38:06.827930] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.629 [2024-07-15 21:38:06.827956] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:24:33.629 0 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 138192 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 138192 ']' 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 138192 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138192 00:24:33.629 killing process with pid 138192 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138192' 00:24:33.629 21:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 138192 00:24:33.629 21:38:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 138192 00:24:33.629 [2024-07-15 21:38:06.854786] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:33.896 [2024-07-15 21:38:07.178053] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.xelBwpd08U 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:35.272 ************************************ 00:24:35.272 END TEST raid_write_error_test 00:24:35.272 ************************************ 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:24:35.272 00:24:35.272 real 0m8.232s 00:24:35.272 user 0m12.251s 00:24:35.272 sys 0m0.984s 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.272 21:38:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.272 21:38:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:35.272 21:38:08 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:24:35.272 21:38:08 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:24:35.272 21:38:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:35.273 21:38:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.273 21:38:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:35.273 ************************************ 00:24:35.273 START TEST raid_state_function_test 00:24:35.273 ************************************ 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:35.273 21:38:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=138412 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 138412' 00:24:35.273 Process raid pid: 138412 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 138412 /var/tmp/spdk-raid.sock 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 138412 ']' 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:35.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
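The waitforlisten helper traced here blocks until the freshly launched bdev_svc app (pid 138412) answers on /var/tmp/spdk-raid.sock; every later step of the test is driven over that RPC socket. A minimal sketch of the polling pattern, assuming scripts/rpc.py is available on the host; the rpc_get_methods probe and the retry/sleep values are illustrative, not the exact autotest_common.sh implementation:

rpc_addr=/var/tmp/spdk-raid.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    # Probe the JSON-RPC server; the call succeeds once the app is listening.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
(( i < max_retries ))   # non-zero exit (test failure) if the app never came up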
00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.273 21:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.531 [2024-07-15 21:38:08.663789] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:24:35.531 [2024-07-15 21:38:08.664041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.531 [2024-07-15 21:38:08.826023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.789 [2024-07-15 21:38:09.025936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.046 [2024-07-15 21:38:09.226844] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.303 21:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.303 21:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:24:36.303 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:36.303 [2024-07-15 21:38:09.670604] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:36.303 [2024-07-15 21:38:09.670770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:36.303 [2024-07-15 21:38:09.670807] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:36.303 [2024-07-15 21:38:09.670859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:36.303 [2024-07-15 21:38:09.670896] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:36.303 [2024-07-15 21:38:09.670927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:36.303 [2024-07-15 21:38:09.670964] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:36.303 [2024-07-15 21:38:09.671002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:36.560 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:36.560 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:36.560 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:36.560 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:36.560 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:36.560 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:36.561 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.561 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.561 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.561 21:38:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.561 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.561 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.561 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:36.561 "name": "Existed_Raid", 00:24:36.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.561 "strip_size_kb": 64, 00:24:36.561 "state": "configuring", 00:24:36.561 "raid_level": "concat", 00:24:36.561 "superblock": false, 00:24:36.561 "num_base_bdevs": 4, 00:24:36.561 "num_base_bdevs_discovered": 0, 00:24:36.561 "num_base_bdevs_operational": 4, 00:24:36.561 "base_bdevs_list": [ 00:24:36.561 { 00:24:36.561 "name": "BaseBdev1", 00:24:36.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.561 "is_configured": false, 00:24:36.561 "data_offset": 0, 00:24:36.561 "data_size": 0 00:24:36.561 }, 00:24:36.561 { 00:24:36.561 "name": "BaseBdev2", 00:24:36.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.561 "is_configured": false, 00:24:36.561 "data_offset": 0, 00:24:36.561 "data_size": 0 00:24:36.561 }, 00:24:36.561 { 00:24:36.561 "name": "BaseBdev3", 00:24:36.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.561 "is_configured": false, 00:24:36.561 "data_offset": 0, 00:24:36.561 "data_size": 0 00:24:36.561 }, 00:24:36.561 { 00:24:36.561 "name": "BaseBdev4", 00:24:36.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.561 "is_configured": false, 00:24:36.561 "data_offset": 0, 00:24:36.561 "data_size": 0 00:24:36.561 } 00:24:36.561 ] 00:24:36.561 }' 00:24:36.561 21:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:36.561 21:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.496 21:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:37.496 [2024-07-15 21:38:10.708963] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:37.496 [2024-07-15 21:38:10.709069] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:37.496 21:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:37.755 [2024-07-15 21:38:10.900644] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:37.755 [2024-07-15 21:38:10.900796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:37.755 [2024-07-15 21:38:10.900833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:37.755 [2024-07-15 21:38:10.900891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:37.755 [2024-07-15 21:38:10.900910] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:37.755 [2024-07-15 21:38:10.900975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:37.755 [2024-07-15 
21:38:10.901000] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:37.755 [2024-07-15 21:38:10.901029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:37.755 21:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:38.013 [2024-07-15 21:38:11.132960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:38.013 BaseBdev1 00:24:38.013 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:38.013 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:38.013 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:38.013 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:38.013 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:38.013 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:38.013 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:38.013 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:38.272 [ 00:24:38.272 { 00:24:38.272 "name": "BaseBdev1", 00:24:38.272 "aliases": [ 00:24:38.272 "c931328f-29d0-4e41-b589-10b172fc1f00" 00:24:38.272 ], 00:24:38.272 "product_name": "Malloc disk", 00:24:38.272 "block_size": 512, 00:24:38.272 "num_blocks": 65536, 00:24:38.272 "uuid": "c931328f-29d0-4e41-b589-10b172fc1f00", 00:24:38.272 "assigned_rate_limits": { 00:24:38.272 "rw_ios_per_sec": 0, 00:24:38.272 "rw_mbytes_per_sec": 0, 00:24:38.272 "r_mbytes_per_sec": 0, 00:24:38.272 "w_mbytes_per_sec": 0 00:24:38.272 }, 00:24:38.272 "claimed": true, 00:24:38.272 "claim_type": "exclusive_write", 00:24:38.272 "zoned": false, 00:24:38.272 "supported_io_types": { 00:24:38.272 "read": true, 00:24:38.272 "write": true, 00:24:38.272 "unmap": true, 00:24:38.272 "flush": true, 00:24:38.272 "reset": true, 00:24:38.272 "nvme_admin": false, 00:24:38.272 "nvme_io": false, 00:24:38.272 "nvme_io_md": false, 00:24:38.272 "write_zeroes": true, 00:24:38.272 "zcopy": true, 00:24:38.272 "get_zone_info": false, 00:24:38.272 "zone_management": false, 00:24:38.272 "zone_append": false, 00:24:38.272 "compare": false, 00:24:38.272 "compare_and_write": false, 00:24:38.272 "abort": true, 00:24:38.272 "seek_hole": false, 00:24:38.272 "seek_data": false, 00:24:38.272 "copy": true, 00:24:38.272 "nvme_iov_md": false 00:24:38.272 }, 00:24:38.272 "memory_domains": [ 00:24:38.272 { 00:24:38.272 "dma_device_id": "system", 00:24:38.273 "dma_device_type": 1 00:24:38.273 }, 00:24:38.273 { 00:24:38.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.273 "dma_device_type": 2 00:24:38.273 } 00:24:38.273 ], 00:24:38.273 "driver_specific": {} 00:24:38.273 } 00:24:38.273 ] 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
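verify_raid_bdev_state, invoked at this point, reads the raid bdev's JSON description over RPC and asserts the fields that the following trace prints (state, raid_level, strip_size_kb, base bdev counts). A condensed sketch of that style of check, using only the RPC call and jq filter visible in the trace; the variable names and exact assertions are illustrative rather than the verbatim bdev_raid.sh helper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
raid_bdev_info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
                 | jq -r '.[] | select(.name == "Existed_Raid")')
# Compare the observed fields against the expectations passed in by the caller
# (here: configuring / concat / 64 / 4, as in the call above).
[[ $(jq -r '.state' <<< "$raid_bdev_info") == configuring ]]
[[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == concat ]]
[[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") -eq 64 ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") -eq 4 ]]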
00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.273 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.532 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.532 "name": "Existed_Raid", 00:24:38.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.532 "strip_size_kb": 64, 00:24:38.532 "state": "configuring", 00:24:38.532 "raid_level": "concat", 00:24:38.532 "superblock": false, 00:24:38.532 "num_base_bdevs": 4, 00:24:38.532 "num_base_bdevs_discovered": 1, 00:24:38.532 "num_base_bdevs_operational": 4, 00:24:38.532 "base_bdevs_list": [ 00:24:38.532 { 00:24:38.532 "name": "BaseBdev1", 00:24:38.532 "uuid": "c931328f-29d0-4e41-b589-10b172fc1f00", 00:24:38.532 "is_configured": true, 00:24:38.532 "data_offset": 0, 00:24:38.532 "data_size": 65536 00:24:38.532 }, 00:24:38.532 { 00:24:38.532 "name": "BaseBdev2", 00:24:38.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.532 "is_configured": false, 00:24:38.532 "data_offset": 0, 00:24:38.532 "data_size": 0 00:24:38.532 }, 00:24:38.532 { 00:24:38.532 "name": "BaseBdev3", 00:24:38.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.532 "is_configured": false, 00:24:38.532 "data_offset": 0, 00:24:38.532 "data_size": 0 00:24:38.532 }, 00:24:38.532 { 00:24:38.532 "name": "BaseBdev4", 00:24:38.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.532 "is_configured": false, 00:24:38.532 "data_offset": 0, 00:24:38.532 "data_size": 0 00:24:38.532 } 00:24:38.532 ] 00:24:38.532 }' 00:24:38.532 21:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.532 21:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.101 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:39.359 [2024-07-15 21:38:12.538760] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:39.359 [2024-07-15 21:38:12.538924] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:24:39.359 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:39.359 [2024-07-15 21:38:12.726492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:39.359 [2024-07-15 21:38:12.728418] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:39.359 [2024-07-15 21:38:12.728512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:39.359 [2024-07-15 21:38:12.728547] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:39.359 [2024-07-15 21:38:12.728585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:39.359 [2024-07-15 21:38:12.728619] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:39.359 [2024-07-15 21:38:12.728672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.618 "name": "Existed_Raid", 00:24:39.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.618 "strip_size_kb": 64, 00:24:39.618 "state": "configuring", 00:24:39.618 "raid_level": "concat", 00:24:39.618 "superblock": false, 00:24:39.618 "num_base_bdevs": 4, 00:24:39.618 "num_base_bdevs_discovered": 1, 00:24:39.618 "num_base_bdevs_operational": 4, 00:24:39.618 "base_bdevs_list": [ 00:24:39.618 { 00:24:39.618 "name": "BaseBdev1", 00:24:39.618 "uuid": "c931328f-29d0-4e41-b589-10b172fc1f00", 00:24:39.618 "is_configured": true, 00:24:39.618 "data_offset": 0, 00:24:39.618 "data_size": 65536 00:24:39.618 }, 00:24:39.618 { 00:24:39.618 "name": "BaseBdev2", 00:24:39.618 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:39.618 "is_configured": false, 00:24:39.618 "data_offset": 0, 00:24:39.618 "data_size": 0 00:24:39.618 }, 00:24:39.618 { 00:24:39.618 "name": "BaseBdev3", 00:24:39.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.618 "is_configured": false, 00:24:39.618 "data_offset": 0, 00:24:39.618 "data_size": 0 00:24:39.618 }, 00:24:39.618 { 00:24:39.618 "name": "BaseBdev4", 00:24:39.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.618 "is_configured": false, 00:24:39.618 "data_offset": 0, 00:24:39.618 "data_size": 0 00:24:39.618 } 00:24:39.618 ] 00:24:39.618 }' 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.618 21:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.556 21:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:40.556 [2024-07-15 21:38:13.801478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:40.556 BaseBdev2 00:24:40.556 21:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:40.556 21:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:40.556 21:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:40.556 21:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:40.556 21:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:40.556 21:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:40.556 21:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:40.815 21:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:41.073 [ 00:24:41.073 { 00:24:41.073 "name": "BaseBdev2", 00:24:41.073 "aliases": [ 00:24:41.073 "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1" 00:24:41.073 ], 00:24:41.073 "product_name": "Malloc disk", 00:24:41.073 "block_size": 512, 00:24:41.073 "num_blocks": 65536, 00:24:41.073 "uuid": "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1", 00:24:41.073 "assigned_rate_limits": { 00:24:41.073 "rw_ios_per_sec": 0, 00:24:41.073 "rw_mbytes_per_sec": 0, 00:24:41.073 "r_mbytes_per_sec": 0, 00:24:41.073 "w_mbytes_per_sec": 0 00:24:41.073 }, 00:24:41.073 "claimed": true, 00:24:41.073 "claim_type": "exclusive_write", 00:24:41.073 "zoned": false, 00:24:41.073 "supported_io_types": { 00:24:41.073 "read": true, 00:24:41.073 "write": true, 00:24:41.073 "unmap": true, 00:24:41.073 "flush": true, 00:24:41.073 "reset": true, 00:24:41.073 "nvme_admin": false, 00:24:41.073 "nvme_io": false, 00:24:41.073 "nvme_io_md": false, 00:24:41.073 "write_zeroes": true, 00:24:41.073 "zcopy": true, 00:24:41.073 "get_zone_info": false, 00:24:41.073 "zone_management": false, 00:24:41.073 "zone_append": false, 00:24:41.073 "compare": false, 00:24:41.073 "compare_and_write": false, 00:24:41.073 "abort": true, 00:24:41.073 "seek_hole": false, 00:24:41.073 "seek_data": false, 00:24:41.073 "copy": true, 00:24:41.073 "nvme_iov_md": false 00:24:41.073 }, 00:24:41.073 "memory_domains": [ 
00:24:41.073 { 00:24:41.073 "dma_device_id": "system", 00:24:41.073 "dma_device_type": 1 00:24:41.073 }, 00:24:41.073 { 00:24:41.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.073 "dma_device_type": 2 00:24:41.073 } 00:24:41.073 ], 00:24:41.073 "driver_specific": {} 00:24:41.073 } 00:24:41.073 ] 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:41.073 "name": "Existed_Raid", 00:24:41.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.073 "strip_size_kb": 64, 00:24:41.073 "state": "configuring", 00:24:41.073 "raid_level": "concat", 00:24:41.073 "superblock": false, 00:24:41.073 "num_base_bdevs": 4, 00:24:41.073 "num_base_bdevs_discovered": 2, 00:24:41.073 "num_base_bdevs_operational": 4, 00:24:41.073 "base_bdevs_list": [ 00:24:41.073 { 00:24:41.073 "name": "BaseBdev1", 00:24:41.073 "uuid": "c931328f-29d0-4e41-b589-10b172fc1f00", 00:24:41.073 "is_configured": true, 00:24:41.073 "data_offset": 0, 00:24:41.073 "data_size": 65536 00:24:41.073 }, 00:24:41.073 { 00:24:41.073 "name": "BaseBdev2", 00:24:41.073 "uuid": "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1", 00:24:41.073 "is_configured": true, 00:24:41.073 "data_offset": 0, 00:24:41.073 "data_size": 65536 00:24:41.073 }, 00:24:41.073 { 00:24:41.073 "name": "BaseBdev3", 00:24:41.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.073 "is_configured": false, 00:24:41.073 "data_offset": 0, 00:24:41.073 "data_size": 0 00:24:41.073 }, 00:24:41.073 { 00:24:41.073 "name": "BaseBdev4", 00:24:41.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.073 "is_configured": false, 00:24:41.073 "data_offset": 0, 00:24:41.073 "data_size": 0 00:24:41.073 } 00:24:41.073 ] 00:24:41.073 }' 00:24:41.073 21:38:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:41.073 21:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.009 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:42.009 [2024-07-15 21:38:15.316644] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:42.009 BaseBdev3 00:24:42.009 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:42.009 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:42.009 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:42.009 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:42.009 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:42.009 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:42.009 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:42.268 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:42.527 [ 00:24:42.527 { 00:24:42.527 "name": "BaseBdev3", 00:24:42.527 "aliases": [ 00:24:42.527 "14ad730d-7c35-4dcc-87ea-cedae06c4b39" 00:24:42.527 ], 00:24:42.527 "product_name": "Malloc disk", 00:24:42.527 "block_size": 512, 00:24:42.527 "num_blocks": 65536, 00:24:42.527 "uuid": "14ad730d-7c35-4dcc-87ea-cedae06c4b39", 00:24:42.527 "assigned_rate_limits": { 00:24:42.527 "rw_ios_per_sec": 0, 00:24:42.527 "rw_mbytes_per_sec": 0, 00:24:42.527 "r_mbytes_per_sec": 0, 00:24:42.527 "w_mbytes_per_sec": 0 00:24:42.527 }, 00:24:42.527 "claimed": true, 00:24:42.527 "claim_type": "exclusive_write", 00:24:42.527 "zoned": false, 00:24:42.527 "supported_io_types": { 00:24:42.527 "read": true, 00:24:42.527 "write": true, 00:24:42.527 "unmap": true, 00:24:42.527 "flush": true, 00:24:42.527 "reset": true, 00:24:42.527 "nvme_admin": false, 00:24:42.527 "nvme_io": false, 00:24:42.527 "nvme_io_md": false, 00:24:42.527 "write_zeroes": true, 00:24:42.527 "zcopy": true, 00:24:42.527 "get_zone_info": false, 00:24:42.527 "zone_management": false, 00:24:42.527 "zone_append": false, 00:24:42.527 "compare": false, 00:24:42.527 "compare_and_write": false, 00:24:42.527 "abort": true, 00:24:42.527 "seek_hole": false, 00:24:42.527 "seek_data": false, 00:24:42.527 "copy": true, 00:24:42.527 "nvme_iov_md": false 00:24:42.527 }, 00:24:42.527 "memory_domains": [ 00:24:42.527 { 00:24:42.527 "dma_device_id": "system", 00:24:42.527 "dma_device_type": 1 00:24:42.527 }, 00:24:42.527 { 00:24:42.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.527 "dma_device_type": 2 00:24:42.527 } 00:24:42.527 ], 00:24:42.527 "driver_specific": {} 00:24:42.527 } 00:24:42.527 ] 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs 
)) 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.527 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.528 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.787 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.787 "name": "Existed_Raid", 00:24:42.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.787 "strip_size_kb": 64, 00:24:42.787 "state": "configuring", 00:24:42.787 "raid_level": "concat", 00:24:42.787 "superblock": false, 00:24:42.787 "num_base_bdevs": 4, 00:24:42.787 "num_base_bdevs_discovered": 3, 00:24:42.787 "num_base_bdevs_operational": 4, 00:24:42.787 "base_bdevs_list": [ 00:24:42.787 { 00:24:42.787 "name": "BaseBdev1", 00:24:42.787 "uuid": "c931328f-29d0-4e41-b589-10b172fc1f00", 00:24:42.787 "is_configured": true, 00:24:42.787 "data_offset": 0, 00:24:42.787 "data_size": 65536 00:24:42.787 }, 00:24:42.787 { 00:24:42.787 "name": "BaseBdev2", 00:24:42.787 "uuid": "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1", 00:24:42.787 "is_configured": true, 00:24:42.787 "data_offset": 0, 00:24:42.787 "data_size": 65536 00:24:42.787 }, 00:24:42.787 { 00:24:42.787 "name": "BaseBdev3", 00:24:42.787 "uuid": "14ad730d-7c35-4dcc-87ea-cedae06c4b39", 00:24:42.787 "is_configured": true, 00:24:42.787 "data_offset": 0, 00:24:42.787 "data_size": 65536 00:24:42.787 }, 00:24:42.787 { 00:24:42.787 "name": "BaseBdev4", 00:24:42.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.787 "is_configured": false, 00:24:42.787 "data_offset": 0, 00:24:42.787 "data_size": 0 00:24:42.787 } 00:24:42.787 ] 00:24:42.787 }' 00:24:42.787 21:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.787 21:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.356 21:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:43.616 [2024-07-15 21:38:16.844850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:43.616 [2024-07-15 21:38:16.844987] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000007580 00:24:43.616 [2024-07-15 21:38:16.845011] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:43.616 [2024-07-15 21:38:16.845186] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:43.616 [2024-07-15 21:38:16.845554] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:24:43.616 [2024-07-15 21:38:16.845599] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:24:43.616 [2024-07-15 21:38:16.845898] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.616 BaseBdev4 00:24:43.616 21:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:43.616 21:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:43.616 21:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:43.616 21:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:43.617 21:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:43.617 21:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:43.617 21:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:43.876 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:44.136 [ 00:24:44.136 { 00:24:44.136 "name": "BaseBdev4", 00:24:44.136 "aliases": [ 00:24:44.136 "30cb14a4-da0c-438b-9fb9-7794c0096e6b" 00:24:44.136 ], 00:24:44.136 "product_name": "Malloc disk", 00:24:44.136 "block_size": 512, 00:24:44.136 "num_blocks": 65536, 00:24:44.136 "uuid": "30cb14a4-da0c-438b-9fb9-7794c0096e6b", 00:24:44.136 "assigned_rate_limits": { 00:24:44.136 "rw_ios_per_sec": 0, 00:24:44.136 "rw_mbytes_per_sec": 0, 00:24:44.136 "r_mbytes_per_sec": 0, 00:24:44.136 "w_mbytes_per_sec": 0 00:24:44.136 }, 00:24:44.136 "claimed": true, 00:24:44.136 "claim_type": "exclusive_write", 00:24:44.136 "zoned": false, 00:24:44.136 "supported_io_types": { 00:24:44.136 "read": true, 00:24:44.136 "write": true, 00:24:44.136 "unmap": true, 00:24:44.136 "flush": true, 00:24:44.136 "reset": true, 00:24:44.136 "nvme_admin": false, 00:24:44.136 "nvme_io": false, 00:24:44.136 "nvme_io_md": false, 00:24:44.136 "write_zeroes": true, 00:24:44.136 "zcopy": true, 00:24:44.136 "get_zone_info": false, 00:24:44.136 "zone_management": false, 00:24:44.136 "zone_append": false, 00:24:44.136 "compare": false, 00:24:44.136 "compare_and_write": false, 00:24:44.136 "abort": true, 00:24:44.136 "seek_hole": false, 00:24:44.136 "seek_data": false, 00:24:44.136 "copy": true, 00:24:44.136 "nvme_iov_md": false 00:24:44.136 }, 00:24:44.136 "memory_domains": [ 00:24:44.136 { 00:24:44.136 "dma_device_id": "system", 00:24:44.136 "dma_device_type": 1 00:24:44.136 }, 00:24:44.136 { 00:24:44.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.136 "dma_device_type": 2 00:24:44.136 } 00:24:44.136 ], 00:24:44.136 "driver_specific": {} 00:24:44.136 } 00:24:44.136 ] 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:44.136 21:38:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.136 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.395 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:44.395 "name": "Existed_Raid", 00:24:44.395 "uuid": "d52799b1-a496-46da-a98e-bd8e81583123", 00:24:44.395 "strip_size_kb": 64, 00:24:44.395 "state": "online", 00:24:44.395 "raid_level": "concat", 00:24:44.395 "superblock": false, 00:24:44.395 "num_base_bdevs": 4, 00:24:44.395 "num_base_bdevs_discovered": 4, 00:24:44.395 "num_base_bdevs_operational": 4, 00:24:44.395 "base_bdevs_list": [ 00:24:44.395 { 00:24:44.395 "name": "BaseBdev1", 00:24:44.395 "uuid": "c931328f-29d0-4e41-b589-10b172fc1f00", 00:24:44.395 "is_configured": true, 00:24:44.395 "data_offset": 0, 00:24:44.395 "data_size": 65536 00:24:44.395 }, 00:24:44.395 { 00:24:44.395 "name": "BaseBdev2", 00:24:44.395 "uuid": "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1", 00:24:44.395 "is_configured": true, 00:24:44.395 "data_offset": 0, 00:24:44.395 "data_size": 65536 00:24:44.395 }, 00:24:44.395 { 00:24:44.395 "name": "BaseBdev3", 00:24:44.395 "uuid": "14ad730d-7c35-4dcc-87ea-cedae06c4b39", 00:24:44.395 "is_configured": true, 00:24:44.395 "data_offset": 0, 00:24:44.395 "data_size": 65536 00:24:44.395 }, 00:24:44.395 { 00:24:44.395 "name": "BaseBdev4", 00:24:44.395 "uuid": "30cb14a4-da0c-438b-9fb9-7794c0096e6b", 00:24:44.395 "is_configured": true, 00:24:44.395 "data_offset": 0, 00:24:44.395 "data_size": 65536 00:24:44.395 } 00:24:44.395 ] 00:24:44.395 }' 00:24:44.395 21:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:44.395 21:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.964 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:44.964 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:44.964 21:38:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:44.964 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:44.964 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:44.964 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:44.964 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:44.964 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:45.223 [2024-07-15 21:38:18.378688] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:45.223 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:45.223 "name": "Existed_Raid", 00:24:45.224 "aliases": [ 00:24:45.224 "d52799b1-a496-46da-a98e-bd8e81583123" 00:24:45.224 ], 00:24:45.224 "product_name": "Raid Volume", 00:24:45.224 "block_size": 512, 00:24:45.224 "num_blocks": 262144, 00:24:45.224 "uuid": "d52799b1-a496-46da-a98e-bd8e81583123", 00:24:45.224 "assigned_rate_limits": { 00:24:45.224 "rw_ios_per_sec": 0, 00:24:45.224 "rw_mbytes_per_sec": 0, 00:24:45.224 "r_mbytes_per_sec": 0, 00:24:45.224 "w_mbytes_per_sec": 0 00:24:45.224 }, 00:24:45.224 "claimed": false, 00:24:45.224 "zoned": false, 00:24:45.224 "supported_io_types": { 00:24:45.224 "read": true, 00:24:45.224 "write": true, 00:24:45.224 "unmap": true, 00:24:45.224 "flush": true, 00:24:45.224 "reset": true, 00:24:45.224 "nvme_admin": false, 00:24:45.224 "nvme_io": false, 00:24:45.224 "nvme_io_md": false, 00:24:45.224 "write_zeroes": true, 00:24:45.224 "zcopy": false, 00:24:45.224 "get_zone_info": false, 00:24:45.224 "zone_management": false, 00:24:45.224 "zone_append": false, 00:24:45.224 "compare": false, 00:24:45.224 "compare_and_write": false, 00:24:45.224 "abort": false, 00:24:45.224 "seek_hole": false, 00:24:45.224 "seek_data": false, 00:24:45.224 "copy": false, 00:24:45.224 "nvme_iov_md": false 00:24:45.224 }, 00:24:45.224 "memory_domains": [ 00:24:45.224 { 00:24:45.224 "dma_device_id": "system", 00:24:45.224 "dma_device_type": 1 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.224 "dma_device_type": 2 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "dma_device_id": "system", 00:24:45.224 "dma_device_type": 1 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.224 "dma_device_type": 2 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "dma_device_id": "system", 00:24:45.224 "dma_device_type": 1 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.224 "dma_device_type": 2 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "dma_device_id": "system", 00:24:45.224 "dma_device_type": 1 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.224 "dma_device_type": 2 00:24:45.224 } 00:24:45.224 ], 00:24:45.224 "driver_specific": { 00:24:45.224 "raid": { 00:24:45.224 "uuid": "d52799b1-a496-46da-a98e-bd8e81583123", 00:24:45.224 "strip_size_kb": 64, 00:24:45.224 "state": "online", 00:24:45.224 "raid_level": "concat", 00:24:45.224 "superblock": false, 00:24:45.224 "num_base_bdevs": 4, 00:24:45.224 "num_base_bdevs_discovered": 4, 00:24:45.224 "num_base_bdevs_operational": 4, 00:24:45.224 "base_bdevs_list": [ 00:24:45.224 { 
00:24:45.224 "name": "BaseBdev1", 00:24:45.224 "uuid": "c931328f-29d0-4e41-b589-10b172fc1f00", 00:24:45.224 "is_configured": true, 00:24:45.224 "data_offset": 0, 00:24:45.224 "data_size": 65536 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "name": "BaseBdev2", 00:24:45.224 "uuid": "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1", 00:24:45.224 "is_configured": true, 00:24:45.224 "data_offset": 0, 00:24:45.224 "data_size": 65536 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "name": "BaseBdev3", 00:24:45.224 "uuid": "14ad730d-7c35-4dcc-87ea-cedae06c4b39", 00:24:45.224 "is_configured": true, 00:24:45.224 "data_offset": 0, 00:24:45.224 "data_size": 65536 00:24:45.224 }, 00:24:45.224 { 00:24:45.224 "name": "BaseBdev4", 00:24:45.224 "uuid": "30cb14a4-da0c-438b-9fb9-7794c0096e6b", 00:24:45.224 "is_configured": true, 00:24:45.224 "data_offset": 0, 00:24:45.224 "data_size": 65536 00:24:45.224 } 00:24:45.224 ] 00:24:45.224 } 00:24:45.224 } 00:24:45.224 }' 00:24:45.224 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:45.224 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:45.224 BaseBdev2 00:24:45.224 BaseBdev3 00:24:45.224 BaseBdev4' 00:24:45.224 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:45.224 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:45.224 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:45.484 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:45.484 "name": "BaseBdev1", 00:24:45.484 "aliases": [ 00:24:45.484 "c931328f-29d0-4e41-b589-10b172fc1f00" 00:24:45.484 ], 00:24:45.484 "product_name": "Malloc disk", 00:24:45.484 "block_size": 512, 00:24:45.484 "num_blocks": 65536, 00:24:45.484 "uuid": "c931328f-29d0-4e41-b589-10b172fc1f00", 00:24:45.484 "assigned_rate_limits": { 00:24:45.484 "rw_ios_per_sec": 0, 00:24:45.484 "rw_mbytes_per_sec": 0, 00:24:45.484 "r_mbytes_per_sec": 0, 00:24:45.484 "w_mbytes_per_sec": 0 00:24:45.484 }, 00:24:45.484 "claimed": true, 00:24:45.484 "claim_type": "exclusive_write", 00:24:45.484 "zoned": false, 00:24:45.484 "supported_io_types": { 00:24:45.484 "read": true, 00:24:45.484 "write": true, 00:24:45.484 "unmap": true, 00:24:45.484 "flush": true, 00:24:45.484 "reset": true, 00:24:45.484 "nvme_admin": false, 00:24:45.484 "nvme_io": false, 00:24:45.484 "nvme_io_md": false, 00:24:45.484 "write_zeroes": true, 00:24:45.484 "zcopy": true, 00:24:45.484 "get_zone_info": false, 00:24:45.484 "zone_management": false, 00:24:45.484 "zone_append": false, 00:24:45.484 "compare": false, 00:24:45.484 "compare_and_write": false, 00:24:45.484 "abort": true, 00:24:45.484 "seek_hole": false, 00:24:45.484 "seek_data": false, 00:24:45.484 "copy": true, 00:24:45.484 "nvme_iov_md": false 00:24:45.484 }, 00:24:45.484 "memory_domains": [ 00:24:45.484 { 00:24:45.484 "dma_device_id": "system", 00:24:45.484 "dma_device_type": 1 00:24:45.484 }, 00:24:45.484 { 00:24:45.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.484 "dma_device_type": 2 00:24:45.484 } 00:24:45.484 ], 00:24:45.484 "driver_specific": {} 00:24:45.484 }' 00:24:45.484 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.484 21:38:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.484 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:45.484 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.484 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.743 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:45.743 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.743 21:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.743 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:45.743 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.743 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.743 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:45.743 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:45.743 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:45.743 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:46.002 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:46.002 "name": "BaseBdev2", 00:24:46.002 "aliases": [ 00:24:46.002 "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1" 00:24:46.002 ], 00:24:46.002 "product_name": "Malloc disk", 00:24:46.002 "block_size": 512, 00:24:46.002 "num_blocks": 65536, 00:24:46.002 "uuid": "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1", 00:24:46.002 "assigned_rate_limits": { 00:24:46.002 "rw_ios_per_sec": 0, 00:24:46.002 "rw_mbytes_per_sec": 0, 00:24:46.002 "r_mbytes_per_sec": 0, 00:24:46.002 "w_mbytes_per_sec": 0 00:24:46.002 }, 00:24:46.002 "claimed": true, 00:24:46.002 "claim_type": "exclusive_write", 00:24:46.002 "zoned": false, 00:24:46.002 "supported_io_types": { 00:24:46.002 "read": true, 00:24:46.002 "write": true, 00:24:46.002 "unmap": true, 00:24:46.002 "flush": true, 00:24:46.002 "reset": true, 00:24:46.002 "nvme_admin": false, 00:24:46.002 "nvme_io": false, 00:24:46.002 "nvme_io_md": false, 00:24:46.002 "write_zeroes": true, 00:24:46.002 "zcopy": true, 00:24:46.002 "get_zone_info": false, 00:24:46.002 "zone_management": false, 00:24:46.002 "zone_append": false, 00:24:46.002 "compare": false, 00:24:46.002 "compare_and_write": false, 00:24:46.002 "abort": true, 00:24:46.002 "seek_hole": false, 00:24:46.002 "seek_data": false, 00:24:46.002 "copy": true, 00:24:46.002 "nvme_iov_md": false 00:24:46.002 }, 00:24:46.002 "memory_domains": [ 00:24:46.002 { 00:24:46.002 "dma_device_id": "system", 00:24:46.002 "dma_device_type": 1 00:24:46.002 }, 00:24:46.002 { 00:24:46.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.002 "dma_device_type": 2 00:24:46.002 } 00:24:46.002 ], 00:24:46.002 "driver_specific": {} 00:24:46.002 }' 00:24:46.002 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.260 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.260 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:46.260 
21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.260 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.260 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:46.260 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.260 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.518 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.518 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.518 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.518 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.518 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:46.518 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:46.518 21:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:46.776 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:46.776 "name": "BaseBdev3", 00:24:46.776 "aliases": [ 00:24:46.776 "14ad730d-7c35-4dcc-87ea-cedae06c4b39" 00:24:46.776 ], 00:24:46.776 "product_name": "Malloc disk", 00:24:46.776 "block_size": 512, 00:24:46.776 "num_blocks": 65536, 00:24:46.776 "uuid": "14ad730d-7c35-4dcc-87ea-cedae06c4b39", 00:24:46.776 "assigned_rate_limits": { 00:24:46.776 "rw_ios_per_sec": 0, 00:24:46.776 "rw_mbytes_per_sec": 0, 00:24:46.776 "r_mbytes_per_sec": 0, 00:24:46.776 "w_mbytes_per_sec": 0 00:24:46.776 }, 00:24:46.776 "claimed": true, 00:24:46.776 "claim_type": "exclusive_write", 00:24:46.776 "zoned": false, 00:24:46.776 "supported_io_types": { 00:24:46.776 "read": true, 00:24:46.776 "write": true, 00:24:46.776 "unmap": true, 00:24:46.776 "flush": true, 00:24:46.776 "reset": true, 00:24:46.776 "nvme_admin": false, 00:24:46.776 "nvme_io": false, 00:24:46.776 "nvme_io_md": false, 00:24:46.776 "write_zeroes": true, 00:24:46.776 "zcopy": true, 00:24:46.776 "get_zone_info": false, 00:24:46.776 "zone_management": false, 00:24:46.776 "zone_append": false, 00:24:46.776 "compare": false, 00:24:46.776 "compare_and_write": false, 00:24:46.776 "abort": true, 00:24:46.776 "seek_hole": false, 00:24:46.776 "seek_data": false, 00:24:46.777 "copy": true, 00:24:46.777 "nvme_iov_md": false 00:24:46.777 }, 00:24:46.777 "memory_domains": [ 00:24:46.777 { 00:24:46.777 "dma_device_id": "system", 00:24:46.777 "dma_device_type": 1 00:24:46.777 }, 00:24:46.777 { 00:24:46.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.777 "dma_device_type": 2 00:24:46.777 } 00:24:46.777 ], 00:24:46.777 "driver_specific": {} 00:24:46.777 }' 00:24:46.777 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.777 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.777 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:46.777 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:47.041 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:47.041 
21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:47.041 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:47.041 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:47.041 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:47.041 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:47.041 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:47.322 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:47.322 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:47.322 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:47.322 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:47.580 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:47.580 "name": "BaseBdev4", 00:24:47.580 "aliases": [ 00:24:47.580 "30cb14a4-da0c-438b-9fb9-7794c0096e6b" 00:24:47.580 ], 00:24:47.580 "product_name": "Malloc disk", 00:24:47.580 "block_size": 512, 00:24:47.580 "num_blocks": 65536, 00:24:47.580 "uuid": "30cb14a4-da0c-438b-9fb9-7794c0096e6b", 00:24:47.580 "assigned_rate_limits": { 00:24:47.580 "rw_ios_per_sec": 0, 00:24:47.580 "rw_mbytes_per_sec": 0, 00:24:47.580 "r_mbytes_per_sec": 0, 00:24:47.580 "w_mbytes_per_sec": 0 00:24:47.580 }, 00:24:47.580 "claimed": true, 00:24:47.580 "claim_type": "exclusive_write", 00:24:47.580 "zoned": false, 00:24:47.580 "supported_io_types": { 00:24:47.580 "read": true, 00:24:47.580 "write": true, 00:24:47.580 "unmap": true, 00:24:47.580 "flush": true, 00:24:47.580 "reset": true, 00:24:47.580 "nvme_admin": false, 00:24:47.580 "nvme_io": false, 00:24:47.580 "nvme_io_md": false, 00:24:47.580 "write_zeroes": true, 00:24:47.580 "zcopy": true, 00:24:47.580 "get_zone_info": false, 00:24:47.580 "zone_management": false, 00:24:47.580 "zone_append": false, 00:24:47.580 "compare": false, 00:24:47.580 "compare_and_write": false, 00:24:47.580 "abort": true, 00:24:47.580 "seek_hole": false, 00:24:47.580 "seek_data": false, 00:24:47.580 "copy": true, 00:24:47.580 "nvme_iov_md": false 00:24:47.580 }, 00:24:47.580 "memory_domains": [ 00:24:47.580 { 00:24:47.580 "dma_device_id": "system", 00:24:47.580 "dma_device_type": 1 00:24:47.580 }, 00:24:47.580 { 00:24:47.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.580 "dma_device_type": 2 00:24:47.580 } 00:24:47.580 ], 00:24:47.580 "driver_specific": {} 00:24:47.580 }' 00:24:47.580 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:47.580 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:47.580 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:47.580 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:47.580 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:47.580 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:47.580 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
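The repeated jq checks in this stretch are the per-base-bdev half of verify_raid_bdev_properties: the configured base bdev names are pulled from the raid volume's base_bdevs_list, then each one is fetched with bdev_get_bdevs -b <name> and required to report a 512-byte block_size and null md_size, md_interleave and dif_type. A hedged sketch of that loop, reusing only the RPC calls and jq filters visible in the trace; the loop wrapper and variable names are illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Configured base bdevs are listed under driver_specific.raid.base_bdevs_list.
  base_bdev_names=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid \
      | jq -r '.[] | .driver_specific.raid.base_bdevs_list[]
               | select(.is_configured == true).name')
  for name in $base_bdev_names; do
      base_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
      # Each malloc-backed base bdev must expose 512-byte blocks and carry
      # no metadata or DIF configuration.
      [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]] || exit 1
      [[ $(jq .md_size <<< "$base_bdev_info") == null ]] || exit 1
      [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]] || exit 1
      [[ $(jq .dif_type <<< "$base_bdev_info") == null ]] || exit 1
  done
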
00:24:47.839 21:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:47.840 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:47.840 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:47.840 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:47.840 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:47.840 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:48.097 [2024-07-15 21:38:21.369446] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:48.097 [2024-07-15 21:38:21.369565] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:48.097 [2024-07-15 21:38:21.369652] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.355 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.614 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.614 "name": "Existed_Raid", 00:24:48.614 "uuid": "d52799b1-a496-46da-a98e-bd8e81583123", 00:24:48.614 "strip_size_kb": 64, 00:24:48.614 "state": "offline", 00:24:48.614 "raid_level": "concat", 00:24:48.614 "superblock": false, 00:24:48.614 "num_base_bdevs": 4, 00:24:48.614 "num_base_bdevs_discovered": 3, 00:24:48.614 "num_base_bdevs_operational": 3, 00:24:48.614 "base_bdevs_list": [ 
00:24:48.614 { 00:24:48.614 "name": null, 00:24:48.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.614 "is_configured": false, 00:24:48.614 "data_offset": 0, 00:24:48.614 "data_size": 65536 00:24:48.614 }, 00:24:48.614 { 00:24:48.614 "name": "BaseBdev2", 00:24:48.614 "uuid": "4c6fe01b-bc6e-4558-9cfb-c672db58f3d1", 00:24:48.614 "is_configured": true, 00:24:48.614 "data_offset": 0, 00:24:48.614 "data_size": 65536 00:24:48.614 }, 00:24:48.614 { 00:24:48.614 "name": "BaseBdev3", 00:24:48.614 "uuid": "14ad730d-7c35-4dcc-87ea-cedae06c4b39", 00:24:48.614 "is_configured": true, 00:24:48.614 "data_offset": 0, 00:24:48.614 "data_size": 65536 00:24:48.614 }, 00:24:48.614 { 00:24:48.614 "name": "BaseBdev4", 00:24:48.614 "uuid": "30cb14a4-da0c-438b-9fb9-7794c0096e6b", 00:24:48.614 "is_configured": true, 00:24:48.614 "data_offset": 0, 00:24:48.614 "data_size": 65536 00:24:48.614 } 00:24:48.614 ] 00:24:48.614 }' 00:24:48.614 21:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.614 21:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.181 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:49.181 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:49.181 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:49.181 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.440 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:49.440 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:49.440 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:49.699 [2024-07-15 21:38:22.853621] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:49.699 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:49.699 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:49.699 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:49.699 21:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.958 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:49.958 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:49.958 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:50.217 [2024-07-15 21:38:23.423558] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:50.217 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:50.217 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:50.217 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:50.217 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:50.476 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:50.476 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:50.476 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:50.733 [2024-07-15 21:38:24.032967] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:50.733 [2024-07-15 21:38:24.033129] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:24:50.991 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:50.991 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:50.991 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.991 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:51.248 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:51.248 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:51.248 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:51.248 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:51.248 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:51.248 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:51.506 BaseBdev2 00:24:51.506 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:51.506 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:51.506 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:51.506 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:51.506 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:51.506 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:51.506 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:51.764 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:52.054 [ 00:24:52.054 { 00:24:52.054 "name": "BaseBdev2", 00:24:52.054 "aliases": [ 00:24:52.054 "8962235b-f0a4-4c2f-b807-caa05eea713d" 00:24:52.054 ], 00:24:52.054 "product_name": "Malloc disk", 00:24:52.054 "block_size": 512, 00:24:52.054 "num_blocks": 65536, 00:24:52.054 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:24:52.054 "assigned_rate_limits": { 00:24:52.054 "rw_ios_per_sec": 0, 00:24:52.054 "rw_mbytes_per_sec": 
0, 00:24:52.054 "r_mbytes_per_sec": 0, 00:24:52.054 "w_mbytes_per_sec": 0 00:24:52.054 }, 00:24:52.054 "claimed": false, 00:24:52.054 "zoned": false, 00:24:52.054 "supported_io_types": { 00:24:52.054 "read": true, 00:24:52.054 "write": true, 00:24:52.054 "unmap": true, 00:24:52.054 "flush": true, 00:24:52.054 "reset": true, 00:24:52.054 "nvme_admin": false, 00:24:52.054 "nvme_io": false, 00:24:52.054 "nvme_io_md": false, 00:24:52.054 "write_zeroes": true, 00:24:52.054 "zcopy": true, 00:24:52.054 "get_zone_info": false, 00:24:52.054 "zone_management": false, 00:24:52.054 "zone_append": false, 00:24:52.054 "compare": false, 00:24:52.054 "compare_and_write": false, 00:24:52.054 "abort": true, 00:24:52.054 "seek_hole": false, 00:24:52.054 "seek_data": false, 00:24:52.054 "copy": true, 00:24:52.054 "nvme_iov_md": false 00:24:52.054 }, 00:24:52.054 "memory_domains": [ 00:24:52.054 { 00:24:52.054 "dma_device_id": "system", 00:24:52.054 "dma_device_type": 1 00:24:52.054 }, 00:24:52.054 { 00:24:52.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.054 "dma_device_type": 2 00:24:52.054 } 00:24:52.054 ], 00:24:52.054 "driver_specific": {} 00:24:52.054 } 00:24:52.054 ] 00:24:52.054 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:52.054 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:52.054 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:52.054 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:52.311 BaseBdev3 00:24:52.311 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:52.311 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:52.311 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:52.311 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:52.311 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:52.311 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:52.311 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:52.311 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:52.568 [ 00:24:52.568 { 00:24:52.568 "name": "BaseBdev3", 00:24:52.568 "aliases": [ 00:24:52.568 "3ef62939-e71d-4c55-b450-2dea6580c749" 00:24:52.568 ], 00:24:52.568 "product_name": "Malloc disk", 00:24:52.568 "block_size": 512, 00:24:52.568 "num_blocks": 65536, 00:24:52.568 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:24:52.568 "assigned_rate_limits": { 00:24:52.568 "rw_ios_per_sec": 0, 00:24:52.568 "rw_mbytes_per_sec": 0, 00:24:52.568 "r_mbytes_per_sec": 0, 00:24:52.568 "w_mbytes_per_sec": 0 00:24:52.568 }, 00:24:52.568 "claimed": false, 00:24:52.568 "zoned": false, 00:24:52.568 "supported_io_types": { 00:24:52.568 "read": true, 00:24:52.568 "write": true, 00:24:52.568 "unmap": true, 00:24:52.568 "flush": true, 00:24:52.568 "reset": true, 00:24:52.568 
"nvme_admin": false, 00:24:52.568 "nvme_io": false, 00:24:52.568 "nvme_io_md": false, 00:24:52.568 "write_zeroes": true, 00:24:52.568 "zcopy": true, 00:24:52.568 "get_zone_info": false, 00:24:52.568 "zone_management": false, 00:24:52.568 "zone_append": false, 00:24:52.568 "compare": false, 00:24:52.568 "compare_and_write": false, 00:24:52.568 "abort": true, 00:24:52.568 "seek_hole": false, 00:24:52.568 "seek_data": false, 00:24:52.568 "copy": true, 00:24:52.568 "nvme_iov_md": false 00:24:52.568 }, 00:24:52.568 "memory_domains": [ 00:24:52.568 { 00:24:52.568 "dma_device_id": "system", 00:24:52.568 "dma_device_type": 1 00:24:52.568 }, 00:24:52.568 { 00:24:52.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.568 "dma_device_type": 2 00:24:52.568 } 00:24:52.568 ], 00:24:52.568 "driver_specific": {} 00:24:52.568 } 00:24:52.568 ] 00:24:52.568 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:52.568 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:52.568 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:52.568 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:52.826 BaseBdev4 00:24:52.826 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:52.826 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:52.826 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:52.826 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:52.826 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:52.826 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:52.826 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:53.084 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:53.342 [ 00:24:53.342 { 00:24:53.342 "name": "BaseBdev4", 00:24:53.342 "aliases": [ 00:24:53.342 "e3810126-1ec6-499b-a83c-c1800faebbff" 00:24:53.342 ], 00:24:53.342 "product_name": "Malloc disk", 00:24:53.342 "block_size": 512, 00:24:53.342 "num_blocks": 65536, 00:24:53.342 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:24:53.342 "assigned_rate_limits": { 00:24:53.342 "rw_ios_per_sec": 0, 00:24:53.342 "rw_mbytes_per_sec": 0, 00:24:53.342 "r_mbytes_per_sec": 0, 00:24:53.342 "w_mbytes_per_sec": 0 00:24:53.342 }, 00:24:53.342 "claimed": false, 00:24:53.342 "zoned": false, 00:24:53.342 "supported_io_types": { 00:24:53.342 "read": true, 00:24:53.342 "write": true, 00:24:53.342 "unmap": true, 00:24:53.342 "flush": true, 00:24:53.342 "reset": true, 00:24:53.342 "nvme_admin": false, 00:24:53.342 "nvme_io": false, 00:24:53.342 "nvme_io_md": false, 00:24:53.342 "write_zeroes": true, 00:24:53.342 "zcopy": true, 00:24:53.342 "get_zone_info": false, 00:24:53.342 "zone_management": false, 00:24:53.342 "zone_append": false, 00:24:53.342 "compare": false, 00:24:53.342 "compare_and_write": false, 00:24:53.342 
"abort": true, 00:24:53.342 "seek_hole": false, 00:24:53.342 "seek_data": false, 00:24:53.342 "copy": true, 00:24:53.342 "nvme_iov_md": false 00:24:53.342 }, 00:24:53.342 "memory_domains": [ 00:24:53.342 { 00:24:53.342 "dma_device_id": "system", 00:24:53.342 "dma_device_type": 1 00:24:53.342 }, 00:24:53.342 { 00:24:53.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.342 "dma_device_type": 2 00:24:53.342 } 00:24:53.342 ], 00:24:53.342 "driver_specific": {} 00:24:53.342 } 00:24:53.342 ] 00:24:53.342 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:53.342 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:53.342 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:53.342 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:53.600 [2024-07-15 21:38:26.820963] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:53.600 [2024-07-15 21:38:26.821132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:53.600 [2024-07-15 21:38:26.821184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:53.600 [2024-07-15 21:38:26.823049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:53.600 [2024-07-15 21:38:26.823159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.600 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.911 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.911 "name": "Existed_Raid", 00:24:53.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.911 "strip_size_kb": 64, 00:24:53.911 "state": "configuring", 00:24:53.911 "raid_level": "concat", 00:24:53.911 "superblock": false, 00:24:53.911 "num_base_bdevs": 4, 
00:24:53.911 "num_base_bdevs_discovered": 3, 00:24:53.911 "num_base_bdevs_operational": 4, 00:24:53.911 "base_bdevs_list": [ 00:24:53.911 { 00:24:53.911 "name": "BaseBdev1", 00:24:53.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.911 "is_configured": false, 00:24:53.911 "data_offset": 0, 00:24:53.911 "data_size": 0 00:24:53.911 }, 00:24:53.911 { 00:24:53.911 "name": "BaseBdev2", 00:24:53.911 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:24:53.911 "is_configured": true, 00:24:53.911 "data_offset": 0, 00:24:53.911 "data_size": 65536 00:24:53.911 }, 00:24:53.911 { 00:24:53.911 "name": "BaseBdev3", 00:24:53.911 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:24:53.911 "is_configured": true, 00:24:53.911 "data_offset": 0, 00:24:53.911 "data_size": 65536 00:24:53.911 }, 00:24:53.911 { 00:24:53.911 "name": "BaseBdev4", 00:24:53.911 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:24:53.911 "is_configured": true, 00:24:53.911 "data_offset": 0, 00:24:53.912 "data_size": 65536 00:24:53.912 } 00:24:53.912 ] 00:24:53.912 }' 00:24:53.912 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.912 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.479 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:54.737 [2024-07-15 21:38:27.935017] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.737 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.997 21:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.997 "name": "Existed_Raid", 00:24:54.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.997 "strip_size_kb": 64, 00:24:54.997 "state": "configuring", 00:24:54.997 "raid_level": "concat", 00:24:54.997 "superblock": false, 00:24:54.997 "num_base_bdevs": 4, 00:24:54.997 "num_base_bdevs_discovered": 2, 00:24:54.997 "num_base_bdevs_operational": 4, 00:24:54.997 "base_bdevs_list": [ 00:24:54.997 { 
00:24:54.997 "name": "BaseBdev1", 00:24:54.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.997 "is_configured": false, 00:24:54.997 "data_offset": 0, 00:24:54.997 "data_size": 0 00:24:54.997 }, 00:24:54.997 { 00:24:54.997 "name": null, 00:24:54.997 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:24:54.997 "is_configured": false, 00:24:54.997 "data_offset": 0, 00:24:54.997 "data_size": 65536 00:24:54.997 }, 00:24:54.997 { 00:24:54.997 "name": "BaseBdev3", 00:24:54.997 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:24:54.997 "is_configured": true, 00:24:54.997 "data_offset": 0, 00:24:54.997 "data_size": 65536 00:24:54.997 }, 00:24:54.997 { 00:24:54.997 "name": "BaseBdev4", 00:24:54.997 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:24:54.997 "is_configured": true, 00:24:54.997 "data_offset": 0, 00:24:54.997 "data_size": 65536 00:24:54.997 } 00:24:54.997 ] 00:24:54.997 }' 00:24:54.997 21:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.997 21:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.564 21:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.565 21:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:55.824 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:55.824 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:56.082 [2024-07-15 21:38:29.445461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:56.082 BaseBdev1 00:24:56.342 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:56.342 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:56.342 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:56.342 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:56.342 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:56.342 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:56.342 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:56.601 [ 00:24:56.601 { 00:24:56.601 "name": "BaseBdev1", 00:24:56.601 "aliases": [ 00:24:56.601 "8896e3cc-805b-4702-b281-84f896205be3" 00:24:56.601 ], 00:24:56.601 "product_name": "Malloc disk", 00:24:56.601 "block_size": 512, 00:24:56.601 "num_blocks": 65536, 00:24:56.601 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:24:56.601 "assigned_rate_limits": { 00:24:56.601 "rw_ios_per_sec": 0, 00:24:56.601 "rw_mbytes_per_sec": 0, 00:24:56.601 "r_mbytes_per_sec": 0, 00:24:56.601 "w_mbytes_per_sec": 0 00:24:56.601 }, 00:24:56.601 "claimed": true, 00:24:56.601 "claim_type": "exclusive_write", 00:24:56.601 
"zoned": false, 00:24:56.601 "supported_io_types": { 00:24:56.601 "read": true, 00:24:56.601 "write": true, 00:24:56.601 "unmap": true, 00:24:56.601 "flush": true, 00:24:56.601 "reset": true, 00:24:56.601 "nvme_admin": false, 00:24:56.601 "nvme_io": false, 00:24:56.601 "nvme_io_md": false, 00:24:56.601 "write_zeroes": true, 00:24:56.601 "zcopy": true, 00:24:56.601 "get_zone_info": false, 00:24:56.601 "zone_management": false, 00:24:56.601 "zone_append": false, 00:24:56.601 "compare": false, 00:24:56.601 "compare_and_write": false, 00:24:56.601 "abort": true, 00:24:56.601 "seek_hole": false, 00:24:56.601 "seek_data": false, 00:24:56.601 "copy": true, 00:24:56.601 "nvme_iov_md": false 00:24:56.601 }, 00:24:56.601 "memory_domains": [ 00:24:56.601 { 00:24:56.601 "dma_device_id": "system", 00:24:56.601 "dma_device_type": 1 00:24:56.601 }, 00:24:56.601 { 00:24:56.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.601 "dma_device_type": 2 00:24:56.601 } 00:24:56.601 ], 00:24:56.601 "driver_specific": {} 00:24:56.601 } 00:24:56.601 ] 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.601 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.860 21:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:56.860 "name": "Existed_Raid", 00:24:56.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.860 "strip_size_kb": 64, 00:24:56.860 "state": "configuring", 00:24:56.860 "raid_level": "concat", 00:24:56.860 "superblock": false, 00:24:56.860 "num_base_bdevs": 4, 00:24:56.860 "num_base_bdevs_discovered": 3, 00:24:56.860 "num_base_bdevs_operational": 4, 00:24:56.860 "base_bdevs_list": [ 00:24:56.860 { 00:24:56.860 "name": "BaseBdev1", 00:24:56.860 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:24:56.860 "is_configured": true, 00:24:56.860 "data_offset": 0, 00:24:56.860 "data_size": 65536 00:24:56.860 }, 00:24:56.860 { 00:24:56.860 "name": null, 00:24:56.860 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:24:56.860 "is_configured": false, 00:24:56.860 "data_offset": 0, 00:24:56.860 "data_size": 
65536 00:24:56.860 }, 00:24:56.860 { 00:24:56.860 "name": "BaseBdev3", 00:24:56.860 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:24:56.860 "is_configured": true, 00:24:56.860 "data_offset": 0, 00:24:56.860 "data_size": 65536 00:24:56.860 }, 00:24:56.860 { 00:24:56.860 "name": "BaseBdev4", 00:24:56.860 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:24:56.860 "is_configured": true, 00:24:56.860 "data_offset": 0, 00:24:56.860 "data_size": 65536 00:24:56.860 } 00:24:56.860 ] 00:24:56.860 }' 00:24:56.860 21:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:56.860 21:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.798 21:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.798 21:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:57.798 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:57.798 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:58.057 [2024-07-15 21:38:31.238675] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.057 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.315 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.315 "name": "Existed_Raid", 00:24:58.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.315 "strip_size_kb": 64, 00:24:58.315 "state": "configuring", 00:24:58.315 "raid_level": "concat", 00:24:58.315 "superblock": false, 00:24:58.315 "num_base_bdevs": 4, 00:24:58.315 "num_base_bdevs_discovered": 2, 00:24:58.315 "num_base_bdevs_operational": 4, 00:24:58.315 "base_bdevs_list": [ 00:24:58.315 { 00:24:58.315 "name": "BaseBdev1", 00:24:58.315 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:24:58.315 "is_configured": true, 
00:24:58.315 "data_offset": 0, 00:24:58.315 "data_size": 65536 00:24:58.315 }, 00:24:58.315 { 00:24:58.315 "name": null, 00:24:58.315 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:24:58.315 "is_configured": false, 00:24:58.315 "data_offset": 0, 00:24:58.315 "data_size": 65536 00:24:58.315 }, 00:24:58.315 { 00:24:58.315 "name": null, 00:24:58.315 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:24:58.315 "is_configured": false, 00:24:58.316 "data_offset": 0, 00:24:58.316 "data_size": 65536 00:24:58.316 }, 00:24:58.316 { 00:24:58.316 "name": "BaseBdev4", 00:24:58.316 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:24:58.316 "is_configured": true, 00:24:58.316 "data_offset": 0, 00:24:58.316 "data_size": 65536 00:24:58.316 } 00:24:58.316 ] 00:24:58.316 }' 00:24:58.316 21:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.316 21:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.884 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:58.884 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.144 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:59.144 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:59.403 [2024-07-15 21:38:32.520622] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.403 "name": "Existed_Raid", 00:24:59.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.403 "strip_size_kb": 64, 00:24:59.403 "state": "configuring", 00:24:59.403 "raid_level": "concat", 00:24:59.403 "superblock": false, 
00:24:59.403 "num_base_bdevs": 4, 00:24:59.403 "num_base_bdevs_discovered": 3, 00:24:59.403 "num_base_bdevs_operational": 4, 00:24:59.403 "base_bdevs_list": [ 00:24:59.403 { 00:24:59.403 "name": "BaseBdev1", 00:24:59.403 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:24:59.403 "is_configured": true, 00:24:59.403 "data_offset": 0, 00:24:59.403 "data_size": 65536 00:24:59.403 }, 00:24:59.403 { 00:24:59.403 "name": null, 00:24:59.403 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:24:59.403 "is_configured": false, 00:24:59.403 "data_offset": 0, 00:24:59.403 "data_size": 65536 00:24:59.403 }, 00:24:59.403 { 00:24:59.403 "name": "BaseBdev3", 00:24:59.403 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:24:59.403 "is_configured": true, 00:24:59.403 "data_offset": 0, 00:24:59.403 "data_size": 65536 00:24:59.403 }, 00:24:59.403 { 00:24:59.403 "name": "BaseBdev4", 00:24:59.403 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:24:59.403 "is_configured": true, 00:24:59.403 "data_offset": 0, 00:24:59.403 "data_size": 65536 00:24:59.403 } 00:24:59.403 ] 00:24:59.403 }' 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.403 21:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.340 21:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.340 21:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:00.599 21:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:00.599 21:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:00.599 [2024-07-15 21:38:33.898280] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.858 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.118 21:38:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.118 "name": "Existed_Raid", 00:25:01.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.118 "strip_size_kb": 64, 00:25:01.118 "state": "configuring", 00:25:01.118 "raid_level": "concat", 00:25:01.118 "superblock": false, 00:25:01.118 "num_base_bdevs": 4, 00:25:01.118 "num_base_bdevs_discovered": 2, 00:25:01.118 "num_base_bdevs_operational": 4, 00:25:01.118 "base_bdevs_list": [ 00:25:01.118 { 00:25:01.118 "name": null, 00:25:01.118 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:25:01.118 "is_configured": false, 00:25:01.118 "data_offset": 0, 00:25:01.118 "data_size": 65536 00:25:01.118 }, 00:25:01.118 { 00:25:01.118 "name": null, 00:25:01.118 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:25:01.118 "is_configured": false, 00:25:01.118 "data_offset": 0, 00:25:01.118 "data_size": 65536 00:25:01.118 }, 00:25:01.118 { 00:25:01.118 "name": "BaseBdev3", 00:25:01.118 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:25:01.118 "is_configured": true, 00:25:01.118 "data_offset": 0, 00:25:01.118 "data_size": 65536 00:25:01.118 }, 00:25:01.118 { 00:25:01.118 "name": "BaseBdev4", 00:25:01.118 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:25:01.118 "is_configured": true, 00:25:01.118 "data_offset": 0, 00:25:01.118 "data_size": 65536 00:25:01.118 } 00:25:01.118 ] 00:25:01.118 }' 00:25:01.118 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.118 21:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.686 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.686 21:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:01.945 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:01.945 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:02.205 [2024-07-15 21:38:35.439359] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.205 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.464 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:02.464 "name": "Existed_Raid", 00:25:02.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.464 "strip_size_kb": 64, 00:25:02.464 "state": "configuring", 00:25:02.464 "raid_level": "concat", 00:25:02.464 "superblock": false, 00:25:02.464 "num_base_bdevs": 4, 00:25:02.464 "num_base_bdevs_discovered": 3, 00:25:02.464 "num_base_bdevs_operational": 4, 00:25:02.464 "base_bdevs_list": [ 00:25:02.464 { 00:25:02.464 "name": null, 00:25:02.464 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:25:02.464 "is_configured": false, 00:25:02.464 "data_offset": 0, 00:25:02.464 "data_size": 65536 00:25:02.464 }, 00:25:02.464 { 00:25:02.464 "name": "BaseBdev2", 00:25:02.464 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:25:02.464 "is_configured": true, 00:25:02.464 "data_offset": 0, 00:25:02.464 "data_size": 65536 00:25:02.464 }, 00:25:02.464 { 00:25:02.464 "name": "BaseBdev3", 00:25:02.464 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:25:02.464 "is_configured": true, 00:25:02.464 "data_offset": 0, 00:25:02.464 "data_size": 65536 00:25:02.464 }, 00:25:02.464 { 00:25:02.464 "name": "BaseBdev4", 00:25:02.464 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:25:02.464 "is_configured": true, 00:25:02.464 "data_offset": 0, 00:25:02.464 "data_size": 65536 00:25:02.464 } 00:25:02.464 ] 00:25:02.464 }' 00:25:02.464 21:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:02.464 21:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.032 21:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.032 21:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:03.290 21:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:03.290 21:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.290 21:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:03.548 21:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8896e3cc-805b-4702-b281-84f896205be3 00:25:03.808 [2024-07-15 21:38:37.008535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:03.808 [2024-07-15 21:38:37.008677] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:03.808 [2024-07-15 21:38:37.008700] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:03.808 [2024-07-15 21:38:37.008854] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:03.808 [2024-07-15 21:38:37.009206] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:03.808 [2024-07-15 21:38:37.009254] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:25:03.808 [2024-07-15 21:38:37.009545] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.808 NewBaseBdev 00:25:03.808 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:03.808 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:03.808 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:03.808 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:03.808 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:03.808 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:03.808 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:04.067 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:04.326 [ 00:25:04.326 { 00:25:04.326 "name": "NewBaseBdev", 00:25:04.326 "aliases": [ 00:25:04.326 "8896e3cc-805b-4702-b281-84f896205be3" 00:25:04.326 ], 00:25:04.326 "product_name": "Malloc disk", 00:25:04.326 "block_size": 512, 00:25:04.326 "num_blocks": 65536, 00:25:04.326 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:25:04.326 "assigned_rate_limits": { 00:25:04.326 "rw_ios_per_sec": 0, 00:25:04.326 "rw_mbytes_per_sec": 0, 00:25:04.326 "r_mbytes_per_sec": 0, 00:25:04.326 "w_mbytes_per_sec": 0 00:25:04.326 }, 00:25:04.326 "claimed": true, 00:25:04.326 "claim_type": "exclusive_write", 00:25:04.326 "zoned": false, 00:25:04.326 "supported_io_types": { 00:25:04.326 "read": true, 00:25:04.326 "write": true, 00:25:04.326 "unmap": true, 00:25:04.326 "flush": true, 00:25:04.326 "reset": true, 00:25:04.326 "nvme_admin": false, 00:25:04.326 "nvme_io": false, 00:25:04.326 "nvme_io_md": false, 00:25:04.326 "write_zeroes": true, 00:25:04.326 "zcopy": true, 00:25:04.326 "get_zone_info": false, 00:25:04.326 "zone_management": false, 00:25:04.326 "zone_append": false, 00:25:04.326 "compare": false, 00:25:04.326 "compare_and_write": false, 00:25:04.326 "abort": true, 00:25:04.326 "seek_hole": false, 00:25:04.326 "seek_data": false, 00:25:04.326 "copy": true, 00:25:04.326 "nvme_iov_md": false 00:25:04.326 }, 00:25:04.326 "memory_domains": [ 00:25:04.326 { 00:25:04.326 "dma_device_id": "system", 00:25:04.326 "dma_device_type": 1 00:25:04.326 }, 00:25:04.326 { 00:25:04.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.326 "dma_device_type": 2 00:25:04.326 } 00:25:04.326 ], 00:25:04.326 "driver_specific": {} 00:25:04.326 } 00:25:04.326 ] 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.326 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.584 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:04.584 "name": "Existed_Raid", 00:25:04.584 "uuid": "bd10261a-ce82-422a-a9f5-fdc638e32b88", 00:25:04.584 "strip_size_kb": 64, 00:25:04.584 "state": "online", 00:25:04.584 "raid_level": "concat", 00:25:04.584 "superblock": false, 00:25:04.584 "num_base_bdevs": 4, 00:25:04.584 "num_base_bdevs_discovered": 4, 00:25:04.584 "num_base_bdevs_operational": 4, 00:25:04.584 "base_bdevs_list": [ 00:25:04.584 { 00:25:04.584 "name": "NewBaseBdev", 00:25:04.584 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:25:04.584 "is_configured": true, 00:25:04.584 "data_offset": 0, 00:25:04.584 "data_size": 65536 00:25:04.584 }, 00:25:04.584 { 00:25:04.584 "name": "BaseBdev2", 00:25:04.584 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:25:04.584 "is_configured": true, 00:25:04.584 "data_offset": 0, 00:25:04.584 "data_size": 65536 00:25:04.584 }, 00:25:04.584 { 00:25:04.584 "name": "BaseBdev3", 00:25:04.584 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:25:04.584 "is_configured": true, 00:25:04.584 "data_offset": 0, 00:25:04.584 "data_size": 65536 00:25:04.584 }, 00:25:04.584 { 00:25:04.584 "name": "BaseBdev4", 00:25:04.584 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:25:04.584 "is_configured": true, 00:25:04.584 "data_offset": 0, 00:25:04.584 "data_size": 65536 00:25:04.584 } 00:25:04.584 ] 00:25:04.584 }' 00:25:04.584 21:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:04.584 21:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.151 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:05.151 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:05.151 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:05.151 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:05.151 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:05.151 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:05.151 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:05.151 21:38:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:05.409 [2024-07-15 21:38:38.614235] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:05.409 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:05.409 "name": "Existed_Raid", 00:25:05.409 "aliases": [ 00:25:05.409 "bd10261a-ce82-422a-a9f5-fdc638e32b88" 00:25:05.409 ], 00:25:05.409 "product_name": "Raid Volume", 00:25:05.409 "block_size": 512, 00:25:05.409 "num_blocks": 262144, 00:25:05.409 "uuid": "bd10261a-ce82-422a-a9f5-fdc638e32b88", 00:25:05.409 "assigned_rate_limits": { 00:25:05.409 "rw_ios_per_sec": 0, 00:25:05.409 "rw_mbytes_per_sec": 0, 00:25:05.409 "r_mbytes_per_sec": 0, 00:25:05.409 "w_mbytes_per_sec": 0 00:25:05.409 }, 00:25:05.409 "claimed": false, 00:25:05.409 "zoned": false, 00:25:05.409 "supported_io_types": { 00:25:05.409 "read": true, 00:25:05.409 "write": true, 00:25:05.409 "unmap": true, 00:25:05.409 "flush": true, 00:25:05.409 "reset": true, 00:25:05.409 "nvme_admin": false, 00:25:05.409 "nvme_io": false, 00:25:05.409 "nvme_io_md": false, 00:25:05.409 "write_zeroes": true, 00:25:05.409 "zcopy": false, 00:25:05.409 "get_zone_info": false, 00:25:05.409 "zone_management": false, 00:25:05.409 "zone_append": false, 00:25:05.409 "compare": false, 00:25:05.409 "compare_and_write": false, 00:25:05.409 "abort": false, 00:25:05.409 "seek_hole": false, 00:25:05.409 "seek_data": false, 00:25:05.409 "copy": false, 00:25:05.409 "nvme_iov_md": false 00:25:05.409 }, 00:25:05.409 "memory_domains": [ 00:25:05.409 { 00:25:05.409 "dma_device_id": "system", 00:25:05.409 "dma_device_type": 1 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.409 "dma_device_type": 2 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "dma_device_id": "system", 00:25:05.409 "dma_device_type": 1 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.409 "dma_device_type": 2 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "dma_device_id": "system", 00:25:05.409 "dma_device_type": 1 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.409 "dma_device_type": 2 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "dma_device_id": "system", 00:25:05.409 "dma_device_type": 1 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.409 "dma_device_type": 2 00:25:05.409 } 00:25:05.409 ], 00:25:05.409 "driver_specific": { 00:25:05.409 "raid": { 00:25:05.409 "uuid": "bd10261a-ce82-422a-a9f5-fdc638e32b88", 00:25:05.409 "strip_size_kb": 64, 00:25:05.409 "state": "online", 00:25:05.409 "raid_level": "concat", 00:25:05.409 "superblock": false, 00:25:05.409 "num_base_bdevs": 4, 00:25:05.409 "num_base_bdevs_discovered": 4, 00:25:05.409 "num_base_bdevs_operational": 4, 00:25:05.409 "base_bdevs_list": [ 00:25:05.409 { 00:25:05.409 "name": "NewBaseBdev", 00:25:05.409 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:25:05.409 "is_configured": true, 00:25:05.409 "data_offset": 0, 00:25:05.409 "data_size": 65536 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "name": "BaseBdev2", 00:25:05.409 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:25:05.409 "is_configured": true, 00:25:05.409 "data_offset": 0, 00:25:05.409 "data_size": 65536 00:25:05.409 }, 00:25:05.409 { 00:25:05.409 "name": "BaseBdev3", 00:25:05.409 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:25:05.409 "is_configured": true, 00:25:05.409 "data_offset": 0, 00:25:05.409 "data_size": 65536 00:25:05.409 
}, 00:25:05.409 { 00:25:05.409 "name": "BaseBdev4", 00:25:05.409 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:25:05.409 "is_configured": true, 00:25:05.409 "data_offset": 0, 00:25:05.409 "data_size": 65536 00:25:05.409 } 00:25:05.409 ] 00:25:05.409 } 00:25:05.409 } 00:25:05.409 }' 00:25:05.409 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:05.409 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:05.409 BaseBdev2 00:25:05.409 BaseBdev3 00:25:05.409 BaseBdev4' 00:25:05.409 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:05.409 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:05.409 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:05.667 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:05.667 "name": "NewBaseBdev", 00:25:05.667 "aliases": [ 00:25:05.667 "8896e3cc-805b-4702-b281-84f896205be3" 00:25:05.667 ], 00:25:05.667 "product_name": "Malloc disk", 00:25:05.667 "block_size": 512, 00:25:05.667 "num_blocks": 65536, 00:25:05.667 "uuid": "8896e3cc-805b-4702-b281-84f896205be3", 00:25:05.667 "assigned_rate_limits": { 00:25:05.667 "rw_ios_per_sec": 0, 00:25:05.667 "rw_mbytes_per_sec": 0, 00:25:05.667 "r_mbytes_per_sec": 0, 00:25:05.667 "w_mbytes_per_sec": 0 00:25:05.667 }, 00:25:05.667 "claimed": true, 00:25:05.667 "claim_type": "exclusive_write", 00:25:05.667 "zoned": false, 00:25:05.667 "supported_io_types": { 00:25:05.667 "read": true, 00:25:05.667 "write": true, 00:25:05.667 "unmap": true, 00:25:05.667 "flush": true, 00:25:05.667 "reset": true, 00:25:05.667 "nvme_admin": false, 00:25:05.667 "nvme_io": false, 00:25:05.667 "nvme_io_md": false, 00:25:05.667 "write_zeroes": true, 00:25:05.667 "zcopy": true, 00:25:05.667 "get_zone_info": false, 00:25:05.667 "zone_management": false, 00:25:05.667 "zone_append": false, 00:25:05.667 "compare": false, 00:25:05.667 "compare_and_write": false, 00:25:05.667 "abort": true, 00:25:05.668 "seek_hole": false, 00:25:05.668 "seek_data": false, 00:25:05.668 "copy": true, 00:25:05.668 "nvme_iov_md": false 00:25:05.668 }, 00:25:05.668 "memory_domains": [ 00:25:05.668 { 00:25:05.668 "dma_device_id": "system", 00:25:05.668 "dma_device_type": 1 00:25:05.668 }, 00:25:05.668 { 00:25:05.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.668 "dma_device_type": 2 00:25:05.668 } 00:25:05.668 ], 00:25:05.668 "driver_specific": {} 00:25:05.668 }' 00:25:05.668 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.668 21:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.668 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:05.668 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.926 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.926 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:05.926 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.926 21:38:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.926 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:05.926 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.926 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:06.184 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:06.184 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:06.184 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:06.184 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:06.184 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:06.184 "name": "BaseBdev2", 00:25:06.184 "aliases": [ 00:25:06.184 "8962235b-f0a4-4c2f-b807-caa05eea713d" 00:25:06.184 ], 00:25:06.184 "product_name": "Malloc disk", 00:25:06.184 "block_size": 512, 00:25:06.184 "num_blocks": 65536, 00:25:06.184 "uuid": "8962235b-f0a4-4c2f-b807-caa05eea713d", 00:25:06.184 "assigned_rate_limits": { 00:25:06.184 "rw_ios_per_sec": 0, 00:25:06.184 "rw_mbytes_per_sec": 0, 00:25:06.184 "r_mbytes_per_sec": 0, 00:25:06.184 "w_mbytes_per_sec": 0 00:25:06.184 }, 00:25:06.184 "claimed": true, 00:25:06.184 "claim_type": "exclusive_write", 00:25:06.184 "zoned": false, 00:25:06.184 "supported_io_types": { 00:25:06.184 "read": true, 00:25:06.184 "write": true, 00:25:06.184 "unmap": true, 00:25:06.184 "flush": true, 00:25:06.184 "reset": true, 00:25:06.184 "nvme_admin": false, 00:25:06.184 "nvme_io": false, 00:25:06.184 "nvme_io_md": false, 00:25:06.184 "write_zeroes": true, 00:25:06.184 "zcopy": true, 00:25:06.184 "get_zone_info": false, 00:25:06.184 "zone_management": false, 00:25:06.184 "zone_append": false, 00:25:06.184 "compare": false, 00:25:06.184 "compare_and_write": false, 00:25:06.184 "abort": true, 00:25:06.184 "seek_hole": false, 00:25:06.184 "seek_data": false, 00:25:06.184 "copy": true, 00:25:06.184 "nvme_iov_md": false 00:25:06.184 }, 00:25:06.184 "memory_domains": [ 00:25:06.184 { 00:25:06.184 "dma_device_id": "system", 00:25:06.184 "dma_device_type": 1 00:25:06.184 }, 00:25:06.184 { 00:25:06.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.184 "dma_device_type": 2 00:25:06.184 } 00:25:06.184 ], 00:25:06.184 "driver_specific": {} 00:25:06.184 }' 00:25:06.184 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:06.442 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:06.442 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:06.442 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:06.442 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:06.442 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:06.442 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:06.442 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:06.700 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:06.700 21:38:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:06.700 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:06.700 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:06.700 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:06.700 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:06.700 21:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:06.957 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:06.957 "name": "BaseBdev3", 00:25:06.957 "aliases": [ 00:25:06.957 "3ef62939-e71d-4c55-b450-2dea6580c749" 00:25:06.957 ], 00:25:06.957 "product_name": "Malloc disk", 00:25:06.957 "block_size": 512, 00:25:06.957 "num_blocks": 65536, 00:25:06.957 "uuid": "3ef62939-e71d-4c55-b450-2dea6580c749", 00:25:06.957 "assigned_rate_limits": { 00:25:06.957 "rw_ios_per_sec": 0, 00:25:06.957 "rw_mbytes_per_sec": 0, 00:25:06.957 "r_mbytes_per_sec": 0, 00:25:06.957 "w_mbytes_per_sec": 0 00:25:06.957 }, 00:25:06.957 "claimed": true, 00:25:06.957 "claim_type": "exclusive_write", 00:25:06.957 "zoned": false, 00:25:06.957 "supported_io_types": { 00:25:06.957 "read": true, 00:25:06.957 "write": true, 00:25:06.957 "unmap": true, 00:25:06.957 "flush": true, 00:25:06.957 "reset": true, 00:25:06.957 "nvme_admin": false, 00:25:06.957 "nvme_io": false, 00:25:06.957 "nvme_io_md": false, 00:25:06.957 "write_zeroes": true, 00:25:06.957 "zcopy": true, 00:25:06.957 "get_zone_info": false, 00:25:06.957 "zone_management": false, 00:25:06.957 "zone_append": false, 00:25:06.957 "compare": false, 00:25:06.957 "compare_and_write": false, 00:25:06.957 "abort": true, 00:25:06.957 "seek_hole": false, 00:25:06.957 "seek_data": false, 00:25:06.957 "copy": true, 00:25:06.957 "nvme_iov_md": false 00:25:06.957 }, 00:25:06.957 "memory_domains": [ 00:25:06.957 { 00:25:06.957 "dma_device_id": "system", 00:25:06.957 "dma_device_type": 1 00:25:06.957 }, 00:25:06.957 { 00:25:06.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.957 "dma_device_type": 2 00:25:06.957 } 00:25:06.957 ], 00:25:06.957 "driver_specific": {} 00:25:06.957 }' 00:25:06.957 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:06.957 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:06.957 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:06.957 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.216 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.216 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:07.216 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:07.216 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:07.216 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:07.216 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:07.216 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:07.474 21:38:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:07.474 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:07.474 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:07.474 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:07.733 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:07.733 "name": "BaseBdev4", 00:25:07.733 "aliases": [ 00:25:07.733 "e3810126-1ec6-499b-a83c-c1800faebbff" 00:25:07.733 ], 00:25:07.733 "product_name": "Malloc disk", 00:25:07.733 "block_size": 512, 00:25:07.733 "num_blocks": 65536, 00:25:07.733 "uuid": "e3810126-1ec6-499b-a83c-c1800faebbff", 00:25:07.733 "assigned_rate_limits": { 00:25:07.733 "rw_ios_per_sec": 0, 00:25:07.733 "rw_mbytes_per_sec": 0, 00:25:07.733 "r_mbytes_per_sec": 0, 00:25:07.733 "w_mbytes_per_sec": 0 00:25:07.733 }, 00:25:07.733 "claimed": true, 00:25:07.733 "claim_type": "exclusive_write", 00:25:07.733 "zoned": false, 00:25:07.733 "supported_io_types": { 00:25:07.733 "read": true, 00:25:07.733 "write": true, 00:25:07.733 "unmap": true, 00:25:07.733 "flush": true, 00:25:07.733 "reset": true, 00:25:07.733 "nvme_admin": false, 00:25:07.733 "nvme_io": false, 00:25:07.733 "nvme_io_md": false, 00:25:07.733 "write_zeroes": true, 00:25:07.733 "zcopy": true, 00:25:07.733 "get_zone_info": false, 00:25:07.733 "zone_management": false, 00:25:07.733 "zone_append": false, 00:25:07.733 "compare": false, 00:25:07.733 "compare_and_write": false, 00:25:07.733 "abort": true, 00:25:07.733 "seek_hole": false, 00:25:07.733 "seek_data": false, 00:25:07.733 "copy": true, 00:25:07.733 "nvme_iov_md": false 00:25:07.733 }, 00:25:07.733 "memory_domains": [ 00:25:07.733 { 00:25:07.733 "dma_device_id": "system", 00:25:07.733 "dma_device_type": 1 00:25:07.733 }, 00:25:07.733 { 00:25:07.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.733 "dma_device_type": 2 00:25:07.733 } 00:25:07.733 ], 00:25:07.733 "driver_specific": {} 00:25:07.733 }' 00:25:07.733 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:07.733 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:07.733 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:07.733 21:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.733 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.733 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:07.733 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:07.992 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:07.992 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:07.992 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:07.992 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:07.992 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:07.992 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:08.251 [2024-07-15 21:38:41.526920] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:08.251 [2024-07-15 21:38:41.527040] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.251 [2024-07-15 21:38:41.527158] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.251 [2024-07-15 21:38:41.527248] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:08.251 [2024-07-15 21:38:41.527300] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 138412 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 138412 ']' 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 138412 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138412 00:25:08.251 killing process with pid 138412 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138412' 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 138412 00:25:08.251 21:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 138412 00:25:08.251 [2024-07-15 21:38:41.573750] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:08.819 [2024-07-15 21:38:41.983085] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:10.194 ************************************ 00:25:10.194 END TEST raid_state_function_test 00:25:10.194 ************************************ 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:10.194 00:25:10.194 real 0m34.714s 00:25:10.194 user 1m4.501s 00:25:10.194 sys 0m4.043s 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.194 21:38:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:10.194 21:38:43 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:25:10.194 21:38:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:10.194 21:38:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.194 21:38:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:10.194 ************************************ 00:25:10.194 START TEST raid_state_function_test_sb 00:25:10.194 ************************************ 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=139569 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:10.194 Process raid pid: 139569 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 139569' 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 139569 /var/tmp/spdk-raid.sock 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 139569 ']' 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:10.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.194 21:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.194 [2024-07-15 21:38:43.452350] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:25:10.194 [2024-07-15 21:38:43.452611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.452 [2024-07-15 21:38:43.601807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.710 [2024-07-15 21:38:43.831209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.710 [2024-07-15 21:38:44.041496] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:10.967 21:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.967 21:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:25:10.967 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:11.223 [2024-07-15 21:38:44.511431] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:11.223 [2024-07-15 21:38:44.511567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:11.223 [2024-07-15 21:38:44.511605] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:11.223 [2024-07-15 21:38:44.511642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:11.223 [2024-07-15 21:38:44.511668] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:11.223 [2024-07-15 21:38:44.511694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:11.223 [2024-07-15 21:38:44.511712] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:11.223 [2024-07-15 21:38:44.511744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 
00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.223 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.479 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.479 "name": "Existed_Raid", 00:25:11.479 "uuid": "44223623-b5af-4b85-9d18-cafb2ea61f95", 00:25:11.479 "strip_size_kb": 64, 00:25:11.479 "state": "configuring", 00:25:11.479 "raid_level": "concat", 00:25:11.479 "superblock": true, 00:25:11.479 "num_base_bdevs": 4, 00:25:11.479 "num_base_bdevs_discovered": 0, 00:25:11.479 "num_base_bdevs_operational": 4, 00:25:11.479 "base_bdevs_list": [ 00:25:11.479 { 00:25:11.479 "name": "BaseBdev1", 00:25:11.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.479 "is_configured": false, 00:25:11.479 "data_offset": 0, 00:25:11.479 "data_size": 0 00:25:11.479 }, 00:25:11.479 { 00:25:11.479 "name": "BaseBdev2", 00:25:11.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.479 "is_configured": false, 00:25:11.479 "data_offset": 0, 00:25:11.479 "data_size": 0 00:25:11.479 }, 00:25:11.479 { 00:25:11.479 "name": "BaseBdev3", 00:25:11.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.479 "is_configured": false, 00:25:11.479 "data_offset": 0, 00:25:11.479 "data_size": 0 00:25:11.479 }, 00:25:11.479 { 00:25:11.479 "name": "BaseBdev4", 00:25:11.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.479 "is_configured": false, 00:25:11.479 "data_offset": 0, 00:25:11.479 "data_size": 0 00:25:11.479 } 00:25:11.479 ] 00:25:11.479 }' 00:25:11.479 21:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.479 21:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.042 21:38:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:12.300 [2024-07-15 21:38:45.589662] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:12.300 [2024-07-15 21:38:45.589759] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:12.300 21:38:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:12.559 [2024-07-15 21:38:45.785365] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:12.559 [2024-07-15 21:38:45.785468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:12.559 [2024-07-15 21:38:45.785490] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:12.559 [2024-07-15 21:38:45.785540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:12.559 [2024-07-15 21:38:45.785559] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:12.559 [2024-07-15 21:38:45.785591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:12.559 [2024-07-15 21:38:45.785606] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:12.559 [2024-07-15 21:38:45.785652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:12.559 21:38:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:12.818 [2024-07-15 21:38:45.998384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:12.818 BaseBdev1 00:25:12.818 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:12.818 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:12.818 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:12.818 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:12.818 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:12.818 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:12.818 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:13.076 [ 00:25:13.076 { 00:25:13.076 "name": "BaseBdev1", 00:25:13.076 "aliases": [ 00:25:13.076 "c338ff65-d631-45c3-8c2a-76ff12554f37" 00:25:13.076 ], 00:25:13.076 "product_name": "Malloc disk", 00:25:13.076 "block_size": 512, 00:25:13.076 "num_blocks": 65536, 00:25:13.076 "uuid": "c338ff65-d631-45c3-8c2a-76ff12554f37", 00:25:13.076 "assigned_rate_limits": { 00:25:13.076 "rw_ios_per_sec": 0, 00:25:13.076 "rw_mbytes_per_sec": 0, 00:25:13.076 "r_mbytes_per_sec": 0, 00:25:13.076 "w_mbytes_per_sec": 0 00:25:13.076 }, 00:25:13.076 "claimed": true, 00:25:13.076 "claim_type": "exclusive_write", 00:25:13.076 "zoned": false, 00:25:13.076 "supported_io_types": { 00:25:13.076 "read": true, 00:25:13.076 "write": true, 00:25:13.076 "unmap": 
true, 00:25:13.076 "flush": true, 00:25:13.076 "reset": true, 00:25:13.076 "nvme_admin": false, 00:25:13.076 "nvme_io": false, 00:25:13.076 "nvme_io_md": false, 00:25:13.076 "write_zeroes": true, 00:25:13.076 "zcopy": true, 00:25:13.076 "get_zone_info": false, 00:25:13.076 "zone_management": false, 00:25:13.076 "zone_append": false, 00:25:13.076 "compare": false, 00:25:13.076 "compare_and_write": false, 00:25:13.076 "abort": true, 00:25:13.076 "seek_hole": false, 00:25:13.076 "seek_data": false, 00:25:13.076 "copy": true, 00:25:13.076 "nvme_iov_md": false 00:25:13.076 }, 00:25:13.076 "memory_domains": [ 00:25:13.076 { 00:25:13.076 "dma_device_id": "system", 00:25:13.076 "dma_device_type": 1 00:25:13.076 }, 00:25:13.076 { 00:25:13.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.076 "dma_device_type": 2 00:25:13.076 } 00:25:13.076 ], 00:25:13.076 "driver_specific": {} 00:25:13.076 } 00:25:13.076 ] 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.076 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.339 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.339 "name": "Existed_Raid", 00:25:13.339 "uuid": "93cfd901-1137-497c-adfa-9fcde1ff5ffa", 00:25:13.339 "strip_size_kb": 64, 00:25:13.339 "state": "configuring", 00:25:13.339 "raid_level": "concat", 00:25:13.339 "superblock": true, 00:25:13.339 "num_base_bdevs": 4, 00:25:13.339 "num_base_bdevs_discovered": 1, 00:25:13.339 "num_base_bdevs_operational": 4, 00:25:13.339 "base_bdevs_list": [ 00:25:13.339 { 00:25:13.339 "name": "BaseBdev1", 00:25:13.339 "uuid": "c338ff65-d631-45c3-8c2a-76ff12554f37", 00:25:13.339 "is_configured": true, 00:25:13.339 "data_offset": 2048, 00:25:13.339 "data_size": 63488 00:25:13.339 }, 00:25:13.339 { 00:25:13.339 "name": "BaseBdev2", 00:25:13.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.339 "is_configured": false, 00:25:13.339 "data_offset": 0, 00:25:13.339 "data_size": 0 00:25:13.339 }, 00:25:13.339 { 00:25:13.339 "name": "BaseBdev3", 00:25:13.339 
"uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.339 "is_configured": false, 00:25:13.339 "data_offset": 0, 00:25:13.339 "data_size": 0 00:25:13.339 }, 00:25:13.339 { 00:25:13.339 "name": "BaseBdev4", 00:25:13.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.339 "is_configured": false, 00:25:13.339 "data_offset": 0, 00:25:13.339 "data_size": 0 00:25:13.339 } 00:25:13.339 ] 00:25:13.339 }' 00:25:13.339 21:38:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.339 21:38:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.906 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:14.165 [2024-07-15 21:38:47.459948] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:14.165 [2024-07-15 21:38:47.460072] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:25:14.165 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:14.423 [2024-07-15 21:38:47.651660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:14.423 [2024-07-15 21:38:47.653561] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:14.423 [2024-07-15 21:38:47.653650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:14.423 [2024-07-15 21:38:47.653688] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:14.423 [2024-07-15 21:38:47.653723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:14.424 [2024-07-15 21:38:47.653742] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:14.424 [2024-07-15 21:38:47.653779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.424 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.682 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.682 "name": "Existed_Raid", 00:25:14.682 "uuid": "8ec63241-335f-4386-9333-82e6f9c04e9b", 00:25:14.682 "strip_size_kb": 64, 00:25:14.682 "state": "configuring", 00:25:14.682 "raid_level": "concat", 00:25:14.682 "superblock": true, 00:25:14.682 "num_base_bdevs": 4, 00:25:14.682 "num_base_bdevs_discovered": 1, 00:25:14.682 "num_base_bdevs_operational": 4, 00:25:14.682 "base_bdevs_list": [ 00:25:14.682 { 00:25:14.682 "name": "BaseBdev1", 00:25:14.682 "uuid": "c338ff65-d631-45c3-8c2a-76ff12554f37", 00:25:14.682 "is_configured": true, 00:25:14.682 "data_offset": 2048, 00:25:14.682 "data_size": 63488 00:25:14.682 }, 00:25:14.682 { 00:25:14.682 "name": "BaseBdev2", 00:25:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.682 "is_configured": false, 00:25:14.682 "data_offset": 0, 00:25:14.682 "data_size": 0 00:25:14.682 }, 00:25:14.682 { 00:25:14.682 "name": "BaseBdev3", 00:25:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.682 "is_configured": false, 00:25:14.682 "data_offset": 0, 00:25:14.682 "data_size": 0 00:25:14.682 }, 00:25:14.682 { 00:25:14.682 "name": "BaseBdev4", 00:25:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.682 "is_configured": false, 00:25:14.682 "data_offset": 0, 00:25:14.682 "data_size": 0 00:25:14.682 } 00:25:14.682 ] 00:25:14.682 }' 00:25:14.682 21:38:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.682 21:38:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.257 21:38:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:15.521 [2024-07-15 21:38:48.819706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:15.521 BaseBdev2 00:25:15.521 21:38:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:15.521 21:38:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:15.521 21:38:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:15.521 21:38:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:15.521 21:38:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:15.521 21:38:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:15.521 21:38:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:15.779 21:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:16.038 [ 00:25:16.038 { 
00:25:16.038 "name": "BaseBdev2", 00:25:16.038 "aliases": [ 00:25:16.038 "a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e" 00:25:16.038 ], 00:25:16.038 "product_name": "Malloc disk", 00:25:16.038 "block_size": 512, 00:25:16.038 "num_blocks": 65536, 00:25:16.038 "uuid": "a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e", 00:25:16.038 "assigned_rate_limits": { 00:25:16.038 "rw_ios_per_sec": 0, 00:25:16.038 "rw_mbytes_per_sec": 0, 00:25:16.038 "r_mbytes_per_sec": 0, 00:25:16.038 "w_mbytes_per_sec": 0 00:25:16.038 }, 00:25:16.038 "claimed": true, 00:25:16.038 "claim_type": "exclusive_write", 00:25:16.038 "zoned": false, 00:25:16.038 "supported_io_types": { 00:25:16.038 "read": true, 00:25:16.038 "write": true, 00:25:16.038 "unmap": true, 00:25:16.038 "flush": true, 00:25:16.038 "reset": true, 00:25:16.038 "nvme_admin": false, 00:25:16.038 "nvme_io": false, 00:25:16.038 "nvme_io_md": false, 00:25:16.038 "write_zeroes": true, 00:25:16.038 "zcopy": true, 00:25:16.038 "get_zone_info": false, 00:25:16.038 "zone_management": false, 00:25:16.038 "zone_append": false, 00:25:16.038 "compare": false, 00:25:16.038 "compare_and_write": false, 00:25:16.038 "abort": true, 00:25:16.038 "seek_hole": false, 00:25:16.038 "seek_data": false, 00:25:16.039 "copy": true, 00:25:16.039 "nvme_iov_md": false 00:25:16.039 }, 00:25:16.039 "memory_domains": [ 00:25:16.039 { 00:25:16.039 "dma_device_id": "system", 00:25:16.039 "dma_device_type": 1 00:25:16.039 }, 00:25:16.039 { 00:25:16.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.039 "dma_device_type": 2 00:25:16.039 } 00:25:16.039 ], 00:25:16.039 "driver_specific": {} 00:25:16.039 } 00:25:16.039 ] 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.039 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.379 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:25:16.379 "name": "Existed_Raid", 00:25:16.379 "uuid": "8ec63241-335f-4386-9333-82e6f9c04e9b", 00:25:16.379 "strip_size_kb": 64, 00:25:16.379 "state": "configuring", 00:25:16.379 "raid_level": "concat", 00:25:16.379 "superblock": true, 00:25:16.379 "num_base_bdevs": 4, 00:25:16.379 "num_base_bdevs_discovered": 2, 00:25:16.379 "num_base_bdevs_operational": 4, 00:25:16.379 "base_bdevs_list": [ 00:25:16.379 { 00:25:16.379 "name": "BaseBdev1", 00:25:16.379 "uuid": "c338ff65-d631-45c3-8c2a-76ff12554f37", 00:25:16.379 "is_configured": true, 00:25:16.379 "data_offset": 2048, 00:25:16.379 "data_size": 63488 00:25:16.379 }, 00:25:16.379 { 00:25:16.379 "name": "BaseBdev2", 00:25:16.379 "uuid": "a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e", 00:25:16.380 "is_configured": true, 00:25:16.380 "data_offset": 2048, 00:25:16.380 "data_size": 63488 00:25:16.380 }, 00:25:16.380 { 00:25:16.380 "name": "BaseBdev3", 00:25:16.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.380 "is_configured": false, 00:25:16.380 "data_offset": 0, 00:25:16.380 "data_size": 0 00:25:16.380 }, 00:25:16.380 { 00:25:16.380 "name": "BaseBdev4", 00:25:16.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.380 "is_configured": false, 00:25:16.380 "data_offset": 0, 00:25:16.380 "data_size": 0 00:25:16.380 } 00:25:16.380 ] 00:25:16.380 }' 00:25:16.380 21:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:16.380 21:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.959 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:17.218 [2024-07-15 21:38:50.479176] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:17.218 BaseBdev3 00:25:17.218 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:17.218 21:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:17.218 21:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:17.218 21:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:17.218 21:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:17.218 21:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:17.218 21:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.477 21:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:17.736 [ 00:25:17.736 { 00:25:17.736 "name": "BaseBdev3", 00:25:17.736 "aliases": [ 00:25:17.736 "864c5b6e-8996-4590-8209-995c961cbd95" 00:25:17.736 ], 00:25:17.736 "product_name": "Malloc disk", 00:25:17.736 "block_size": 512, 00:25:17.736 "num_blocks": 65536, 00:25:17.736 "uuid": "864c5b6e-8996-4590-8209-995c961cbd95", 00:25:17.736 "assigned_rate_limits": { 00:25:17.736 "rw_ios_per_sec": 0, 00:25:17.736 "rw_mbytes_per_sec": 0, 00:25:17.736 "r_mbytes_per_sec": 0, 00:25:17.736 "w_mbytes_per_sec": 0 00:25:17.736 }, 00:25:17.736 "claimed": true, 00:25:17.736 
"claim_type": "exclusive_write", 00:25:17.736 "zoned": false, 00:25:17.736 "supported_io_types": { 00:25:17.736 "read": true, 00:25:17.736 "write": true, 00:25:17.736 "unmap": true, 00:25:17.736 "flush": true, 00:25:17.736 "reset": true, 00:25:17.736 "nvme_admin": false, 00:25:17.736 "nvme_io": false, 00:25:17.736 "nvme_io_md": false, 00:25:17.736 "write_zeroes": true, 00:25:17.736 "zcopy": true, 00:25:17.736 "get_zone_info": false, 00:25:17.736 "zone_management": false, 00:25:17.736 "zone_append": false, 00:25:17.736 "compare": false, 00:25:17.736 "compare_and_write": false, 00:25:17.736 "abort": true, 00:25:17.736 "seek_hole": false, 00:25:17.736 "seek_data": false, 00:25:17.736 "copy": true, 00:25:17.736 "nvme_iov_md": false 00:25:17.736 }, 00:25:17.736 "memory_domains": [ 00:25:17.736 { 00:25:17.736 "dma_device_id": "system", 00:25:17.736 "dma_device_type": 1 00:25:17.736 }, 00:25:17.736 { 00:25:17.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.736 "dma_device_type": 2 00:25:17.736 } 00:25:17.736 ], 00:25:17.736 "driver_specific": {} 00:25:17.736 } 00:25:17.736 ] 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.736 21:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.995 21:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.995 "name": "Existed_Raid", 00:25:17.995 "uuid": "8ec63241-335f-4386-9333-82e6f9c04e9b", 00:25:17.995 "strip_size_kb": 64, 00:25:17.995 "state": "configuring", 00:25:17.995 "raid_level": "concat", 00:25:17.995 "superblock": true, 00:25:17.995 "num_base_bdevs": 4, 00:25:17.995 "num_base_bdevs_discovered": 3, 00:25:17.995 "num_base_bdevs_operational": 4, 00:25:17.995 "base_bdevs_list": [ 00:25:17.995 { 00:25:17.995 "name": "BaseBdev1", 00:25:17.995 "uuid": "c338ff65-d631-45c3-8c2a-76ff12554f37", 00:25:17.995 
"is_configured": true, 00:25:17.995 "data_offset": 2048, 00:25:17.995 "data_size": 63488 00:25:17.995 }, 00:25:17.995 { 00:25:17.995 "name": "BaseBdev2", 00:25:17.995 "uuid": "a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e", 00:25:17.995 "is_configured": true, 00:25:17.995 "data_offset": 2048, 00:25:17.995 "data_size": 63488 00:25:17.995 }, 00:25:17.995 { 00:25:17.995 "name": "BaseBdev3", 00:25:17.995 "uuid": "864c5b6e-8996-4590-8209-995c961cbd95", 00:25:17.995 "is_configured": true, 00:25:17.995 "data_offset": 2048, 00:25:17.995 "data_size": 63488 00:25:17.995 }, 00:25:17.995 { 00:25:17.995 "name": "BaseBdev4", 00:25:17.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.995 "is_configured": false, 00:25:17.995 "data_offset": 0, 00:25:17.995 "data_size": 0 00:25:17.995 } 00:25:17.995 ] 00:25:17.996 }' 00:25:17.996 21:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.996 21:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.579 21:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:18.838 [2024-07-15 21:38:52.033920] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:18.838 [2024-07-15 21:38:52.034302] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:25:18.838 [2024-07-15 21:38:52.034350] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:18.838 [2024-07-15 21:38:52.034526] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:18.838 BaseBdev4 00:25:18.838 [2024-07-15 21:38:52.034940] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:25:18.838 [2024-07-15 21:38:52.034955] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:25:18.838 [2024-07-15 21:38:52.035117] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.838 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:18.838 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:18.838 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:18.838 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:18.838 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:18.838 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:18.839 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:19.097 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:19.355 [ 00:25:19.355 { 00:25:19.355 "name": "BaseBdev4", 00:25:19.355 "aliases": [ 00:25:19.355 "7cf063f9-a94a-4868-be66-a3b9aaf3ec09" 00:25:19.355 ], 00:25:19.355 "product_name": "Malloc disk", 00:25:19.355 "block_size": 512, 00:25:19.355 "num_blocks": 65536, 00:25:19.355 "uuid": 
"7cf063f9-a94a-4868-be66-a3b9aaf3ec09", 00:25:19.355 "assigned_rate_limits": { 00:25:19.355 "rw_ios_per_sec": 0, 00:25:19.356 "rw_mbytes_per_sec": 0, 00:25:19.356 "r_mbytes_per_sec": 0, 00:25:19.356 "w_mbytes_per_sec": 0 00:25:19.356 }, 00:25:19.356 "claimed": true, 00:25:19.356 "claim_type": "exclusive_write", 00:25:19.356 "zoned": false, 00:25:19.356 "supported_io_types": { 00:25:19.356 "read": true, 00:25:19.356 "write": true, 00:25:19.356 "unmap": true, 00:25:19.356 "flush": true, 00:25:19.356 "reset": true, 00:25:19.356 "nvme_admin": false, 00:25:19.356 "nvme_io": false, 00:25:19.356 "nvme_io_md": false, 00:25:19.356 "write_zeroes": true, 00:25:19.356 "zcopy": true, 00:25:19.356 "get_zone_info": false, 00:25:19.356 "zone_management": false, 00:25:19.356 "zone_append": false, 00:25:19.356 "compare": false, 00:25:19.356 "compare_and_write": false, 00:25:19.356 "abort": true, 00:25:19.356 "seek_hole": false, 00:25:19.356 "seek_data": false, 00:25:19.356 "copy": true, 00:25:19.356 "nvme_iov_md": false 00:25:19.356 }, 00:25:19.356 "memory_domains": [ 00:25:19.356 { 00:25:19.356 "dma_device_id": "system", 00:25:19.356 "dma_device_type": 1 00:25:19.356 }, 00:25:19.356 { 00:25:19.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.356 "dma_device_type": 2 00:25:19.356 } 00:25:19.356 ], 00:25:19.356 "driver_specific": {} 00:25:19.356 } 00:25:19.356 ] 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.356 "name": "Existed_Raid", 00:25:19.356 "uuid": "8ec63241-335f-4386-9333-82e6f9c04e9b", 00:25:19.356 "strip_size_kb": 64, 00:25:19.356 "state": "online", 00:25:19.356 "raid_level": "concat", 00:25:19.356 "superblock": true, 00:25:19.356 
"num_base_bdevs": 4, 00:25:19.356 "num_base_bdevs_discovered": 4, 00:25:19.356 "num_base_bdevs_operational": 4, 00:25:19.356 "base_bdevs_list": [ 00:25:19.356 { 00:25:19.356 "name": "BaseBdev1", 00:25:19.356 "uuid": "c338ff65-d631-45c3-8c2a-76ff12554f37", 00:25:19.356 "is_configured": true, 00:25:19.356 "data_offset": 2048, 00:25:19.356 "data_size": 63488 00:25:19.356 }, 00:25:19.356 { 00:25:19.356 "name": "BaseBdev2", 00:25:19.356 "uuid": "a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e", 00:25:19.356 "is_configured": true, 00:25:19.356 "data_offset": 2048, 00:25:19.356 "data_size": 63488 00:25:19.356 }, 00:25:19.356 { 00:25:19.356 "name": "BaseBdev3", 00:25:19.356 "uuid": "864c5b6e-8996-4590-8209-995c961cbd95", 00:25:19.356 "is_configured": true, 00:25:19.356 "data_offset": 2048, 00:25:19.356 "data_size": 63488 00:25:19.356 }, 00:25:19.356 { 00:25:19.356 "name": "BaseBdev4", 00:25:19.356 "uuid": "7cf063f9-a94a-4868-be66-a3b9aaf3ec09", 00:25:19.356 "is_configured": true, 00:25:19.356 "data_offset": 2048, 00:25:19.356 "data_size": 63488 00:25:19.356 } 00:25:19.356 ] 00:25:19.356 }' 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.356 21:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.291 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:20.291 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:20.291 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:20.291 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:20.291 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:20.291 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:20.291 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:20.291 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:20.549 [2024-07-15 21:38:53.679615] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:20.549 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:20.549 "name": "Existed_Raid", 00:25:20.549 "aliases": [ 00:25:20.549 "8ec63241-335f-4386-9333-82e6f9c04e9b" 00:25:20.549 ], 00:25:20.549 "product_name": "Raid Volume", 00:25:20.549 "block_size": 512, 00:25:20.549 "num_blocks": 253952, 00:25:20.549 "uuid": "8ec63241-335f-4386-9333-82e6f9c04e9b", 00:25:20.549 "assigned_rate_limits": { 00:25:20.549 "rw_ios_per_sec": 0, 00:25:20.549 "rw_mbytes_per_sec": 0, 00:25:20.549 "r_mbytes_per_sec": 0, 00:25:20.549 "w_mbytes_per_sec": 0 00:25:20.549 }, 00:25:20.549 "claimed": false, 00:25:20.549 "zoned": false, 00:25:20.549 "supported_io_types": { 00:25:20.549 "read": true, 00:25:20.549 "write": true, 00:25:20.549 "unmap": true, 00:25:20.549 "flush": true, 00:25:20.549 "reset": true, 00:25:20.549 "nvme_admin": false, 00:25:20.549 "nvme_io": false, 00:25:20.549 "nvme_io_md": false, 00:25:20.549 "write_zeroes": true, 00:25:20.549 "zcopy": false, 00:25:20.549 "get_zone_info": false, 00:25:20.549 "zone_management": false, 00:25:20.549 "zone_append": false, 00:25:20.549 "compare": false, 
00:25:20.549 "compare_and_write": false, 00:25:20.549 "abort": false, 00:25:20.549 "seek_hole": false, 00:25:20.549 "seek_data": false, 00:25:20.549 "copy": false, 00:25:20.549 "nvme_iov_md": false 00:25:20.549 }, 00:25:20.549 "memory_domains": [ 00:25:20.549 { 00:25:20.549 "dma_device_id": "system", 00:25:20.549 "dma_device_type": 1 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.549 "dma_device_type": 2 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "dma_device_id": "system", 00:25:20.549 "dma_device_type": 1 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.549 "dma_device_type": 2 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "dma_device_id": "system", 00:25:20.549 "dma_device_type": 1 00:25:20.549 }, 00:25:20.549 { 00:25:20.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.549 "dma_device_type": 2 00:25:20.549 }, 00:25:20.549 { 00:25:20.550 "dma_device_id": "system", 00:25:20.550 "dma_device_type": 1 00:25:20.550 }, 00:25:20.550 { 00:25:20.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.550 "dma_device_type": 2 00:25:20.550 } 00:25:20.550 ], 00:25:20.550 "driver_specific": { 00:25:20.550 "raid": { 00:25:20.550 "uuid": "8ec63241-335f-4386-9333-82e6f9c04e9b", 00:25:20.550 "strip_size_kb": 64, 00:25:20.550 "state": "online", 00:25:20.550 "raid_level": "concat", 00:25:20.550 "superblock": true, 00:25:20.550 "num_base_bdevs": 4, 00:25:20.550 "num_base_bdevs_discovered": 4, 00:25:20.550 "num_base_bdevs_operational": 4, 00:25:20.550 "base_bdevs_list": [ 00:25:20.550 { 00:25:20.550 "name": "BaseBdev1", 00:25:20.550 "uuid": "c338ff65-d631-45c3-8c2a-76ff12554f37", 00:25:20.550 "is_configured": true, 00:25:20.550 "data_offset": 2048, 00:25:20.550 "data_size": 63488 00:25:20.550 }, 00:25:20.550 { 00:25:20.550 "name": "BaseBdev2", 00:25:20.550 "uuid": "a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e", 00:25:20.550 "is_configured": true, 00:25:20.550 "data_offset": 2048, 00:25:20.550 "data_size": 63488 00:25:20.550 }, 00:25:20.550 { 00:25:20.550 "name": "BaseBdev3", 00:25:20.550 "uuid": "864c5b6e-8996-4590-8209-995c961cbd95", 00:25:20.550 "is_configured": true, 00:25:20.550 "data_offset": 2048, 00:25:20.550 "data_size": 63488 00:25:20.550 }, 00:25:20.550 { 00:25:20.550 "name": "BaseBdev4", 00:25:20.550 "uuid": "7cf063f9-a94a-4868-be66-a3b9aaf3ec09", 00:25:20.550 "is_configured": true, 00:25:20.550 "data_offset": 2048, 00:25:20.550 "data_size": 63488 00:25:20.550 } 00:25:20.550 ] 00:25:20.550 } 00:25:20.550 } 00:25:20.550 }' 00:25:20.550 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:20.550 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:20.550 BaseBdev2 00:25:20.550 BaseBdev3 00:25:20.550 BaseBdev4' 00:25:20.550 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:20.550 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:20.550 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:20.808 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:20.808 "name": "BaseBdev1", 00:25:20.808 "aliases": [ 00:25:20.808 "c338ff65-d631-45c3-8c2a-76ff12554f37" 00:25:20.808 ], 
00:25:20.808 "product_name": "Malloc disk", 00:25:20.808 "block_size": 512, 00:25:20.808 "num_blocks": 65536, 00:25:20.808 "uuid": "c338ff65-d631-45c3-8c2a-76ff12554f37", 00:25:20.808 "assigned_rate_limits": { 00:25:20.808 "rw_ios_per_sec": 0, 00:25:20.808 "rw_mbytes_per_sec": 0, 00:25:20.808 "r_mbytes_per_sec": 0, 00:25:20.808 "w_mbytes_per_sec": 0 00:25:20.808 }, 00:25:20.808 "claimed": true, 00:25:20.808 "claim_type": "exclusive_write", 00:25:20.808 "zoned": false, 00:25:20.808 "supported_io_types": { 00:25:20.808 "read": true, 00:25:20.808 "write": true, 00:25:20.808 "unmap": true, 00:25:20.808 "flush": true, 00:25:20.808 "reset": true, 00:25:20.808 "nvme_admin": false, 00:25:20.808 "nvme_io": false, 00:25:20.808 "nvme_io_md": false, 00:25:20.808 "write_zeroes": true, 00:25:20.808 "zcopy": true, 00:25:20.808 "get_zone_info": false, 00:25:20.808 "zone_management": false, 00:25:20.808 "zone_append": false, 00:25:20.808 "compare": false, 00:25:20.808 "compare_and_write": false, 00:25:20.808 "abort": true, 00:25:20.808 "seek_hole": false, 00:25:20.808 "seek_data": false, 00:25:20.808 "copy": true, 00:25:20.808 "nvme_iov_md": false 00:25:20.808 }, 00:25:20.808 "memory_domains": [ 00:25:20.808 { 00:25:20.808 "dma_device_id": "system", 00:25:20.808 "dma_device_type": 1 00:25:20.808 }, 00:25:20.808 { 00:25:20.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.808 "dma_device_type": 2 00:25:20.808 } 00:25:20.808 ], 00:25:20.808 "driver_specific": {} 00:25:20.808 }' 00:25:20.808 21:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:20.808 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:20.808 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:20.808 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:20.808 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:21.067 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:21.634 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:21.634 "name": "BaseBdev2", 00:25:21.634 "aliases": [ 00:25:21.634 "a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e" 00:25:21.634 ], 00:25:21.634 "product_name": "Malloc disk", 00:25:21.635 "block_size": 512, 00:25:21.635 "num_blocks": 65536, 00:25:21.635 "uuid": 
"a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e", 00:25:21.635 "assigned_rate_limits": { 00:25:21.635 "rw_ios_per_sec": 0, 00:25:21.635 "rw_mbytes_per_sec": 0, 00:25:21.635 "r_mbytes_per_sec": 0, 00:25:21.635 "w_mbytes_per_sec": 0 00:25:21.635 }, 00:25:21.635 "claimed": true, 00:25:21.635 "claim_type": "exclusive_write", 00:25:21.635 "zoned": false, 00:25:21.635 "supported_io_types": { 00:25:21.635 "read": true, 00:25:21.635 "write": true, 00:25:21.635 "unmap": true, 00:25:21.635 "flush": true, 00:25:21.635 "reset": true, 00:25:21.635 "nvme_admin": false, 00:25:21.635 "nvme_io": false, 00:25:21.635 "nvme_io_md": false, 00:25:21.635 "write_zeroes": true, 00:25:21.635 "zcopy": true, 00:25:21.635 "get_zone_info": false, 00:25:21.635 "zone_management": false, 00:25:21.635 "zone_append": false, 00:25:21.635 "compare": false, 00:25:21.635 "compare_and_write": false, 00:25:21.635 "abort": true, 00:25:21.635 "seek_hole": false, 00:25:21.635 "seek_data": false, 00:25:21.635 "copy": true, 00:25:21.635 "nvme_iov_md": false 00:25:21.635 }, 00:25:21.635 "memory_domains": [ 00:25:21.635 { 00:25:21.635 "dma_device_id": "system", 00:25:21.635 "dma_device_type": 1 00:25:21.635 }, 00:25:21.635 { 00:25:21.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.635 "dma_device_type": 2 00:25:21.635 } 00:25:21.635 ], 00:25:21.635 "driver_specific": {} 00:25:21.635 }' 00:25:21.635 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.635 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.635 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:21.635 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.635 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.635 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:21.635 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.635 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.893 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:21.893 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.893 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.893 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:21.893 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:21.893 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:21.893 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:22.151 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:22.151 "name": "BaseBdev3", 00:25:22.151 "aliases": [ 00:25:22.151 "864c5b6e-8996-4590-8209-995c961cbd95" 00:25:22.151 ], 00:25:22.151 "product_name": "Malloc disk", 00:25:22.151 "block_size": 512, 00:25:22.151 "num_blocks": 65536, 00:25:22.151 "uuid": "864c5b6e-8996-4590-8209-995c961cbd95", 00:25:22.151 "assigned_rate_limits": { 00:25:22.151 "rw_ios_per_sec": 0, 00:25:22.151 "rw_mbytes_per_sec": 0, 
00:25:22.151 "r_mbytes_per_sec": 0, 00:25:22.151 "w_mbytes_per_sec": 0 00:25:22.151 }, 00:25:22.151 "claimed": true, 00:25:22.151 "claim_type": "exclusive_write", 00:25:22.151 "zoned": false, 00:25:22.151 "supported_io_types": { 00:25:22.151 "read": true, 00:25:22.151 "write": true, 00:25:22.151 "unmap": true, 00:25:22.151 "flush": true, 00:25:22.151 "reset": true, 00:25:22.151 "nvme_admin": false, 00:25:22.151 "nvme_io": false, 00:25:22.151 "nvme_io_md": false, 00:25:22.151 "write_zeroes": true, 00:25:22.151 "zcopy": true, 00:25:22.151 "get_zone_info": false, 00:25:22.151 "zone_management": false, 00:25:22.151 "zone_append": false, 00:25:22.151 "compare": false, 00:25:22.151 "compare_and_write": false, 00:25:22.151 "abort": true, 00:25:22.151 "seek_hole": false, 00:25:22.151 "seek_data": false, 00:25:22.151 "copy": true, 00:25:22.151 "nvme_iov_md": false 00:25:22.151 }, 00:25:22.151 "memory_domains": [ 00:25:22.151 { 00:25:22.151 "dma_device_id": "system", 00:25:22.151 "dma_device_type": 1 00:25:22.151 }, 00:25:22.151 { 00:25:22.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.151 "dma_device_type": 2 00:25:22.151 } 00:25:22.151 ], 00:25:22.151 "driver_specific": {} 00:25:22.151 }' 00:25:22.151 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.151 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.151 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:22.151 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.409 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.409 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:22.409 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.409 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.409 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:22.409 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.409 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.667 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:22.667 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:22.667 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:22.667 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:22.667 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:22.667 "name": "BaseBdev4", 00:25:22.667 "aliases": [ 00:25:22.667 "7cf063f9-a94a-4868-be66-a3b9aaf3ec09" 00:25:22.667 ], 00:25:22.667 "product_name": "Malloc disk", 00:25:22.667 "block_size": 512, 00:25:22.667 "num_blocks": 65536, 00:25:22.667 "uuid": "7cf063f9-a94a-4868-be66-a3b9aaf3ec09", 00:25:22.667 "assigned_rate_limits": { 00:25:22.667 "rw_ios_per_sec": 0, 00:25:22.667 "rw_mbytes_per_sec": 0, 00:25:22.667 "r_mbytes_per_sec": 0, 00:25:22.667 "w_mbytes_per_sec": 0 00:25:22.667 }, 00:25:22.667 "claimed": true, 00:25:22.667 "claim_type": 
"exclusive_write", 00:25:22.667 "zoned": false, 00:25:22.667 "supported_io_types": { 00:25:22.667 "read": true, 00:25:22.667 "write": true, 00:25:22.667 "unmap": true, 00:25:22.667 "flush": true, 00:25:22.667 "reset": true, 00:25:22.667 "nvme_admin": false, 00:25:22.667 "nvme_io": false, 00:25:22.667 "nvme_io_md": false, 00:25:22.667 "write_zeroes": true, 00:25:22.667 "zcopy": true, 00:25:22.667 "get_zone_info": false, 00:25:22.667 "zone_management": false, 00:25:22.667 "zone_append": false, 00:25:22.667 "compare": false, 00:25:22.667 "compare_and_write": false, 00:25:22.667 "abort": true, 00:25:22.667 "seek_hole": false, 00:25:22.667 "seek_data": false, 00:25:22.667 "copy": true, 00:25:22.667 "nvme_iov_md": false 00:25:22.667 }, 00:25:22.667 "memory_domains": [ 00:25:22.667 { 00:25:22.667 "dma_device_id": "system", 00:25:22.667 "dma_device_type": 1 00:25:22.667 }, 00:25:22.667 { 00:25:22.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.667 "dma_device_type": 2 00:25:22.667 } 00:25:22.667 ], 00:25:22.667 "driver_specific": {} 00:25:22.667 }' 00:25:22.667 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:22.926 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:23.184 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:23.184 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:23.184 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:23.442 [2024-07-15 21:38:56.586873] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:23.442 [2024-07-15 21:38:56.586983] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:23.442 [2024-07-15 21:38:56.587060] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 
64 3 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:23.442 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.443 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.701 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.701 "name": "Existed_Raid", 00:25:23.701 "uuid": "8ec63241-335f-4386-9333-82e6f9c04e9b", 00:25:23.701 "strip_size_kb": 64, 00:25:23.701 "state": "offline", 00:25:23.701 "raid_level": "concat", 00:25:23.701 "superblock": true, 00:25:23.701 "num_base_bdevs": 4, 00:25:23.701 "num_base_bdevs_discovered": 3, 00:25:23.701 "num_base_bdevs_operational": 3, 00:25:23.701 "base_bdevs_list": [ 00:25:23.701 { 00:25:23.701 "name": null, 00:25:23.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.701 "is_configured": false, 00:25:23.701 "data_offset": 2048, 00:25:23.701 "data_size": 63488 00:25:23.701 }, 00:25:23.701 { 00:25:23.701 "name": "BaseBdev2", 00:25:23.701 "uuid": "a7770eaf-6c3f-47c2-8272-1c61a1ad6d8e", 00:25:23.701 "is_configured": true, 00:25:23.701 "data_offset": 2048, 00:25:23.701 "data_size": 63488 00:25:23.701 }, 00:25:23.701 { 00:25:23.701 "name": "BaseBdev3", 00:25:23.701 "uuid": "864c5b6e-8996-4590-8209-995c961cbd95", 00:25:23.701 "is_configured": true, 00:25:23.701 "data_offset": 2048, 00:25:23.701 "data_size": 63488 00:25:23.701 }, 00:25:23.701 { 00:25:23.701 "name": "BaseBdev4", 00:25:23.701 "uuid": "7cf063f9-a94a-4868-be66-a3b9aaf3ec09", 00:25:23.701 "is_configured": true, 00:25:23.701 "data_offset": 2048, 00:25:23.701 "data_size": 63488 00:25:23.701 } 00:25:23.701 ] 00:25:23.701 }' 00:25:23.701 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:23.701 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.268 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:24.268 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:24.268 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:24.268 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.526 21:38:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:24.526 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:24.526 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:24.785 [2024-07-15 21:38:57.957896] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:24.785 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:24.785 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:24.785 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.785 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:25.044 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:25.044 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:25.044 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:25.304 [2024-07-15 21:38:58.532813] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:25.304 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:25.304 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:25.304 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.304 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:25.562 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:25.562 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:25.562 21:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:25.820 [2024-07-15 21:38:59.114406] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:25.820 [2024-07-15 21:38:59.114550] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:25:26.090 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:26.090 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:26.090 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.090 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:26.374 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:26.374 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:26.374 21:38:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:26.374 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:26.374 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:26.374 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:26.633 BaseBdev2 00:25:26.633 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:26.633 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:26.633 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:26.633 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:26.633 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:26.633 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:26.633 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:26.890 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:26.890 [ 00:25:26.890 { 00:25:26.890 "name": "BaseBdev2", 00:25:26.890 "aliases": [ 00:25:26.890 "9778d967-72af-43ca-a40e-4bef9da6da5f" 00:25:26.890 ], 00:25:26.890 "product_name": "Malloc disk", 00:25:26.890 "block_size": 512, 00:25:26.890 "num_blocks": 65536, 00:25:26.890 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:26.890 "assigned_rate_limits": { 00:25:26.890 "rw_ios_per_sec": 0, 00:25:26.890 "rw_mbytes_per_sec": 0, 00:25:26.890 "r_mbytes_per_sec": 0, 00:25:26.890 "w_mbytes_per_sec": 0 00:25:26.890 }, 00:25:26.890 "claimed": false, 00:25:26.890 "zoned": false, 00:25:26.890 "supported_io_types": { 00:25:26.890 "read": true, 00:25:26.890 "write": true, 00:25:26.890 "unmap": true, 00:25:26.890 "flush": true, 00:25:26.890 "reset": true, 00:25:26.890 "nvme_admin": false, 00:25:26.890 "nvme_io": false, 00:25:26.890 "nvme_io_md": false, 00:25:26.891 "write_zeroes": true, 00:25:26.891 "zcopy": true, 00:25:26.891 "get_zone_info": false, 00:25:26.891 "zone_management": false, 00:25:26.891 "zone_append": false, 00:25:26.891 "compare": false, 00:25:26.891 "compare_and_write": false, 00:25:26.891 "abort": true, 00:25:26.891 "seek_hole": false, 00:25:26.891 "seek_data": false, 00:25:26.891 "copy": true, 00:25:26.891 "nvme_iov_md": false 00:25:26.891 }, 00:25:26.891 "memory_domains": [ 00:25:26.891 { 00:25:26.891 "dma_device_id": "system", 00:25:26.891 "dma_device_type": 1 00:25:26.891 }, 00:25:26.891 { 00:25:26.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.891 "dma_device_type": 2 00:25:26.891 } 00:25:26.891 ], 00:25:26.891 "driver_specific": {} 00:25:26.891 } 00:25:26.891 ] 00:25:27.149 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:27.149 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:27.149 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
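The trace keeps repeating one cycle per base bdev: create a malloc bdev over the RPC socket, wait for examine to finish, poll until the new bdev is visible, then re-read the raid bdev and compare its state. A rough standalone sketch of that cycle follows, using only RPC subcommands and the jq filter that appear in this trace; the rpc.py path, the /var/tmp/spdk-raid.sock socket, and the BaseBdev3/Existed_Raid names are taken from this run, while the $RPC/$SOCK variables and the final one-field state check are illustrative simplifications rather than part of the test script itself.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# Create a 32 MB malloc bdev with 512-byte blocks to serve as the next base bdev
# (same arguments the trace shows for bdev_malloc_create).
"$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev3

# Let examine callbacks finish, then wait (2000 ms timeout) for the bdev to show up,
# mirroring the waitforbdev helper in the trace.
"$RPC" -s "$SOCK" bdev_wait_for_examine
"$RPC" -s "$SOCK" bdev_get_bdevs -b BaseBdev3 -t 2000

# Re-read the raid bdev and check its state: the test expects "configuring" until
# all base bdevs have been claimed and "online" once they all are.
state=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')
[[ "$state" == configuring ]] || echo "unexpected raid state: $state"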
00:25:27.149 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:27.407 BaseBdev3 00:25:27.407 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:27.407 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:27.407 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:27.407 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:27.407 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:27.407 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:27.407 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:27.407 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:27.666 [ 00:25:27.666 { 00:25:27.666 "name": "BaseBdev3", 00:25:27.666 "aliases": [ 00:25:27.666 "73f662bc-6f54-45b8-b513-22a708fb594b" 00:25:27.666 ], 00:25:27.666 "product_name": "Malloc disk", 00:25:27.666 "block_size": 512, 00:25:27.666 "num_blocks": 65536, 00:25:27.666 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:27.666 "assigned_rate_limits": { 00:25:27.666 "rw_ios_per_sec": 0, 00:25:27.666 "rw_mbytes_per_sec": 0, 00:25:27.666 "r_mbytes_per_sec": 0, 00:25:27.666 "w_mbytes_per_sec": 0 00:25:27.666 }, 00:25:27.666 "claimed": false, 00:25:27.666 "zoned": false, 00:25:27.666 "supported_io_types": { 00:25:27.666 "read": true, 00:25:27.666 "write": true, 00:25:27.666 "unmap": true, 00:25:27.666 "flush": true, 00:25:27.666 "reset": true, 00:25:27.666 "nvme_admin": false, 00:25:27.666 "nvme_io": false, 00:25:27.666 "nvme_io_md": false, 00:25:27.666 "write_zeroes": true, 00:25:27.666 "zcopy": true, 00:25:27.666 "get_zone_info": false, 00:25:27.666 "zone_management": false, 00:25:27.666 "zone_append": false, 00:25:27.666 "compare": false, 00:25:27.666 "compare_and_write": false, 00:25:27.666 "abort": true, 00:25:27.666 "seek_hole": false, 00:25:27.666 "seek_data": false, 00:25:27.666 "copy": true, 00:25:27.666 "nvme_iov_md": false 00:25:27.666 }, 00:25:27.666 "memory_domains": [ 00:25:27.666 { 00:25:27.666 "dma_device_id": "system", 00:25:27.666 "dma_device_type": 1 00:25:27.666 }, 00:25:27.666 { 00:25:27.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.666 "dma_device_type": 2 00:25:27.666 } 00:25:27.666 ], 00:25:27.666 "driver_specific": {} 00:25:27.666 } 00:25:27.666 ] 00:25:27.666 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:27.666 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:27.666 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:27.666 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:27.925 BaseBdev4 00:25:27.925 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev4 00:25:27.925 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:27.925 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:27.925 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:27.925 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:27.925 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:27.925 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:28.183 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:28.441 [ 00:25:28.441 { 00:25:28.441 "name": "BaseBdev4", 00:25:28.441 "aliases": [ 00:25:28.441 "5a7de5c5-b345-4dc1-8858-125892cba7d1" 00:25:28.441 ], 00:25:28.441 "product_name": "Malloc disk", 00:25:28.441 "block_size": 512, 00:25:28.441 "num_blocks": 65536, 00:25:28.441 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:28.441 "assigned_rate_limits": { 00:25:28.441 "rw_ios_per_sec": 0, 00:25:28.441 "rw_mbytes_per_sec": 0, 00:25:28.441 "r_mbytes_per_sec": 0, 00:25:28.441 "w_mbytes_per_sec": 0 00:25:28.441 }, 00:25:28.441 "claimed": false, 00:25:28.441 "zoned": false, 00:25:28.441 "supported_io_types": { 00:25:28.441 "read": true, 00:25:28.441 "write": true, 00:25:28.441 "unmap": true, 00:25:28.441 "flush": true, 00:25:28.441 "reset": true, 00:25:28.441 "nvme_admin": false, 00:25:28.441 "nvme_io": false, 00:25:28.441 "nvme_io_md": false, 00:25:28.441 "write_zeroes": true, 00:25:28.441 "zcopy": true, 00:25:28.441 "get_zone_info": false, 00:25:28.441 "zone_management": false, 00:25:28.441 "zone_append": false, 00:25:28.441 "compare": false, 00:25:28.441 "compare_and_write": false, 00:25:28.441 "abort": true, 00:25:28.441 "seek_hole": false, 00:25:28.441 "seek_data": false, 00:25:28.441 "copy": true, 00:25:28.441 "nvme_iov_md": false 00:25:28.441 }, 00:25:28.441 "memory_domains": [ 00:25:28.441 { 00:25:28.441 "dma_device_id": "system", 00:25:28.441 "dma_device_type": 1 00:25:28.441 }, 00:25:28.441 { 00:25:28.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.441 "dma_device_type": 2 00:25:28.441 } 00:25:28.441 ], 00:25:28.441 "driver_specific": {} 00:25:28.441 } 00:25:28.441 ] 00:25:28.441 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:28.441 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:28.441 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:28.441 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:28.733 [2024-07-15 21:39:02.029524] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:28.733 [2024-07-15 21:39:02.029683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:28.733 [2024-07-15 21:39:02.029739] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:25:28.733 [2024-07-15 21:39:02.031667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:28.733 [2024-07-15 21:39:02.031791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:28.733 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:28.733 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:28.733 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:28.733 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:28.733 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:28.733 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:28.734 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.734 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.734 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:28.734 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.734 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.734 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.990 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.990 "name": "Existed_Raid", 00:25:28.990 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:28.990 "strip_size_kb": 64, 00:25:28.990 "state": "configuring", 00:25:28.990 "raid_level": "concat", 00:25:28.990 "superblock": true, 00:25:28.990 "num_base_bdevs": 4, 00:25:28.990 "num_base_bdevs_discovered": 3, 00:25:28.990 "num_base_bdevs_operational": 4, 00:25:28.990 "base_bdevs_list": [ 00:25:28.990 { 00:25:28.990 "name": "BaseBdev1", 00:25:28.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.990 "is_configured": false, 00:25:28.990 "data_offset": 0, 00:25:28.990 "data_size": 0 00:25:28.990 }, 00:25:28.990 { 00:25:28.990 "name": "BaseBdev2", 00:25:28.990 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:28.990 "is_configured": true, 00:25:28.990 "data_offset": 2048, 00:25:28.990 "data_size": 63488 00:25:28.990 }, 00:25:28.990 { 00:25:28.990 "name": "BaseBdev3", 00:25:28.990 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:28.990 "is_configured": true, 00:25:28.990 "data_offset": 2048, 00:25:28.990 "data_size": 63488 00:25:28.990 }, 00:25:28.990 { 00:25:28.990 "name": "BaseBdev4", 00:25:28.990 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:28.990 "is_configured": true, 00:25:28.990 "data_offset": 2048, 00:25:28.990 "data_size": 63488 00:25:28.990 } 00:25:28.990 ] 00:25:28.990 }' 00:25:28.990 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.990 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.924 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:29.924 [2024-07-15 21:39:03.131998] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.924 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.184 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.184 "name": "Existed_Raid", 00:25:30.184 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:30.184 "strip_size_kb": 64, 00:25:30.184 "state": "configuring", 00:25:30.184 "raid_level": "concat", 00:25:30.184 "superblock": true, 00:25:30.184 "num_base_bdevs": 4, 00:25:30.184 "num_base_bdevs_discovered": 2, 00:25:30.184 "num_base_bdevs_operational": 4, 00:25:30.184 "base_bdevs_list": [ 00:25:30.184 { 00:25:30.184 "name": "BaseBdev1", 00:25:30.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.184 "is_configured": false, 00:25:30.184 "data_offset": 0, 00:25:30.184 "data_size": 0 00:25:30.184 }, 00:25:30.184 { 00:25:30.184 "name": null, 00:25:30.184 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:30.184 "is_configured": false, 00:25:30.184 "data_offset": 2048, 00:25:30.184 "data_size": 63488 00:25:30.184 }, 00:25:30.184 { 00:25:30.184 "name": "BaseBdev3", 00:25:30.184 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:30.184 "is_configured": true, 00:25:30.184 "data_offset": 2048, 00:25:30.184 "data_size": 63488 00:25:30.184 }, 00:25:30.184 { 00:25:30.184 "name": "BaseBdev4", 00:25:30.184 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:30.184 "is_configured": true, 00:25:30.184 "data_offset": 2048, 00:25:30.184 "data_size": 63488 00:25:30.184 } 00:25:30.184 ] 00:25:30.184 }' 00:25:30.184 21:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.184 21:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.750 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:25:30.750 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:31.008 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:31.008 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:31.265 [2024-07-15 21:39:04.464478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.265 BaseBdev1 00:25:31.265 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:31.265 21:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:31.265 21:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:31.265 21:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:31.265 21:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:31.265 21:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:31.265 21:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:31.523 21:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:31.780 [ 00:25:31.780 { 00:25:31.780 "name": "BaseBdev1", 00:25:31.780 "aliases": [ 00:25:31.780 "16a5be39-1ce9-4891-9d14-250bfbd0a765" 00:25:31.780 ], 00:25:31.780 "product_name": "Malloc disk", 00:25:31.780 "block_size": 512, 00:25:31.780 "num_blocks": 65536, 00:25:31.780 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:31.780 "assigned_rate_limits": { 00:25:31.780 "rw_ios_per_sec": 0, 00:25:31.780 "rw_mbytes_per_sec": 0, 00:25:31.780 "r_mbytes_per_sec": 0, 00:25:31.780 "w_mbytes_per_sec": 0 00:25:31.780 }, 00:25:31.780 "claimed": true, 00:25:31.780 "claim_type": "exclusive_write", 00:25:31.780 "zoned": false, 00:25:31.780 "supported_io_types": { 00:25:31.780 "read": true, 00:25:31.780 "write": true, 00:25:31.780 "unmap": true, 00:25:31.780 "flush": true, 00:25:31.780 "reset": true, 00:25:31.780 "nvme_admin": false, 00:25:31.780 "nvme_io": false, 00:25:31.780 "nvme_io_md": false, 00:25:31.780 "write_zeroes": true, 00:25:31.780 "zcopy": true, 00:25:31.780 "get_zone_info": false, 00:25:31.780 "zone_management": false, 00:25:31.780 "zone_append": false, 00:25:31.780 "compare": false, 00:25:31.780 "compare_and_write": false, 00:25:31.780 "abort": true, 00:25:31.780 "seek_hole": false, 00:25:31.780 "seek_data": false, 00:25:31.780 "copy": true, 00:25:31.780 "nvme_iov_md": false 00:25:31.780 }, 00:25:31.780 "memory_domains": [ 00:25:31.780 { 00:25:31.780 "dma_device_id": "system", 00:25:31.780 "dma_device_type": 1 00:25:31.780 }, 00:25:31.780 { 00:25:31.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.780 "dma_device_type": 2 00:25:31.780 } 00:25:31.780 ], 00:25:31.780 "driver_specific": {} 00:25:31.780 } 00:25:31.780 ] 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.780 21:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.039 21:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.039 "name": "Existed_Raid", 00:25:32.039 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:32.039 "strip_size_kb": 64, 00:25:32.039 "state": "configuring", 00:25:32.039 "raid_level": "concat", 00:25:32.039 "superblock": true, 00:25:32.039 "num_base_bdevs": 4, 00:25:32.039 "num_base_bdevs_discovered": 3, 00:25:32.039 "num_base_bdevs_operational": 4, 00:25:32.039 "base_bdevs_list": [ 00:25:32.039 { 00:25:32.039 "name": "BaseBdev1", 00:25:32.039 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:32.039 "is_configured": true, 00:25:32.039 "data_offset": 2048, 00:25:32.039 "data_size": 63488 00:25:32.039 }, 00:25:32.039 { 00:25:32.039 "name": null, 00:25:32.039 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:32.039 "is_configured": false, 00:25:32.039 "data_offset": 2048, 00:25:32.039 "data_size": 63488 00:25:32.039 }, 00:25:32.039 { 00:25:32.039 "name": "BaseBdev3", 00:25:32.039 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:32.039 "is_configured": true, 00:25:32.039 "data_offset": 2048, 00:25:32.039 "data_size": 63488 00:25:32.039 }, 00:25:32.039 { 00:25:32.039 "name": "BaseBdev4", 00:25:32.039 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:32.039 "is_configured": true, 00:25:32.039 "data_offset": 2048, 00:25:32.039 "data_size": 63488 00:25:32.039 } 00:25:32.039 ] 00:25:32.039 }' 00:25:32.039 21:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:32.039 21:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.974 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:32.974 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.974 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:32.974 21:39:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:33.233 [2024-07-15 21:39:06.405483] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.233 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.491 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.491 "name": "Existed_Raid", 00:25:33.491 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:33.491 "strip_size_kb": 64, 00:25:33.491 "state": "configuring", 00:25:33.491 "raid_level": "concat", 00:25:33.491 "superblock": true, 00:25:33.491 "num_base_bdevs": 4, 00:25:33.491 "num_base_bdevs_discovered": 2, 00:25:33.491 "num_base_bdevs_operational": 4, 00:25:33.491 "base_bdevs_list": [ 00:25:33.491 { 00:25:33.491 "name": "BaseBdev1", 00:25:33.491 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:33.491 "is_configured": true, 00:25:33.491 "data_offset": 2048, 00:25:33.491 "data_size": 63488 00:25:33.491 }, 00:25:33.491 { 00:25:33.491 "name": null, 00:25:33.491 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:33.491 "is_configured": false, 00:25:33.491 "data_offset": 2048, 00:25:33.491 "data_size": 63488 00:25:33.491 }, 00:25:33.491 { 00:25:33.491 "name": null, 00:25:33.491 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:33.491 "is_configured": false, 00:25:33.491 "data_offset": 2048, 00:25:33.491 "data_size": 63488 00:25:33.491 }, 00:25:33.491 { 00:25:33.491 "name": "BaseBdev4", 00:25:33.491 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:33.491 "is_configured": true, 00:25:33.491 "data_offset": 2048, 00:25:33.491 "data_size": 63488 00:25:33.491 } 00:25:33.491 ] 00:25:33.491 }' 00:25:33.491 21:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.491 21:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:34.058 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.058 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:34.316 [2024-07-15 21:39:07.607469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.316 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:34.574 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:34.574 "name": "Existed_Raid", 00:25:34.574 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:34.574 "strip_size_kb": 64, 00:25:34.574 "state": "configuring", 00:25:34.574 "raid_level": "concat", 00:25:34.574 "superblock": true, 00:25:34.574 "num_base_bdevs": 4, 00:25:34.574 "num_base_bdevs_discovered": 3, 00:25:34.574 "num_base_bdevs_operational": 4, 00:25:34.574 "base_bdevs_list": [ 00:25:34.574 { 00:25:34.574 "name": "BaseBdev1", 00:25:34.574 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:34.574 "is_configured": true, 00:25:34.574 "data_offset": 2048, 00:25:34.574 "data_size": 63488 00:25:34.574 }, 00:25:34.574 { 00:25:34.574 "name": null, 00:25:34.574 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:34.574 "is_configured": false, 00:25:34.574 "data_offset": 2048, 00:25:34.574 "data_size": 63488 00:25:34.574 }, 00:25:34.574 { 00:25:34.574 "name": "BaseBdev3", 00:25:34.574 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:34.574 "is_configured": true, 00:25:34.574 "data_offset": 2048, 00:25:34.574 "data_size": 63488 00:25:34.574 }, 00:25:34.574 { 00:25:34.574 "name": "BaseBdev4", 00:25:34.574 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:34.574 "is_configured": true, 00:25:34.574 "data_offset": 2048, 
00:25:34.574 "data_size": 63488 00:25:34.574 } 00:25:34.574 ] 00:25:34.574 }' 00:25:34.574 21:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:34.574 21:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:35.143 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.143 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:35.402 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:35.402 21:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:35.662 [2024-07-15 21:39:08.965299] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.920 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.177 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.177 "name": "Existed_Raid", 00:25:36.177 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:36.177 "strip_size_kb": 64, 00:25:36.177 "state": "configuring", 00:25:36.177 "raid_level": "concat", 00:25:36.177 "superblock": true, 00:25:36.177 "num_base_bdevs": 4, 00:25:36.177 "num_base_bdevs_discovered": 2, 00:25:36.177 "num_base_bdevs_operational": 4, 00:25:36.177 "base_bdevs_list": [ 00:25:36.177 { 00:25:36.177 "name": null, 00:25:36.177 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:36.177 "is_configured": false, 00:25:36.177 "data_offset": 2048, 00:25:36.177 "data_size": 63488 00:25:36.177 }, 00:25:36.177 { 00:25:36.177 "name": null, 00:25:36.177 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:36.177 "is_configured": false, 00:25:36.177 "data_offset": 2048, 00:25:36.177 "data_size": 63488 00:25:36.177 }, 00:25:36.177 { 00:25:36.177 "name": "BaseBdev3", 00:25:36.177 "uuid": 
"73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:36.177 "is_configured": true, 00:25:36.177 "data_offset": 2048, 00:25:36.177 "data_size": 63488 00:25:36.177 }, 00:25:36.177 { 00:25:36.177 "name": "BaseBdev4", 00:25:36.177 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:36.177 "is_configured": true, 00:25:36.177 "data_offset": 2048, 00:25:36.177 "data_size": 63488 00:25:36.177 } 00:25:36.177 ] 00:25:36.177 }' 00:25:36.178 21:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.178 21:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.744 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.744 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:37.002 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:37.002 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:37.260 [2024-07-15 21:39:10.473901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.260 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.519 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.519 "name": "Existed_Raid", 00:25:37.519 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:37.519 "strip_size_kb": 64, 00:25:37.519 "state": "configuring", 00:25:37.519 "raid_level": "concat", 00:25:37.519 "superblock": true, 00:25:37.519 "num_base_bdevs": 4, 00:25:37.519 "num_base_bdevs_discovered": 3, 00:25:37.519 "num_base_bdevs_operational": 4, 00:25:37.519 "base_bdevs_list": [ 00:25:37.519 { 00:25:37.519 "name": null, 00:25:37.519 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:37.519 "is_configured": false, 
00:25:37.519 "data_offset": 2048, 00:25:37.519 "data_size": 63488 00:25:37.519 }, 00:25:37.519 { 00:25:37.519 "name": "BaseBdev2", 00:25:37.519 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:37.519 "is_configured": true, 00:25:37.519 "data_offset": 2048, 00:25:37.519 "data_size": 63488 00:25:37.519 }, 00:25:37.519 { 00:25:37.519 "name": "BaseBdev3", 00:25:37.519 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:37.519 "is_configured": true, 00:25:37.519 "data_offset": 2048, 00:25:37.519 "data_size": 63488 00:25:37.519 }, 00:25:37.519 { 00:25:37.519 "name": "BaseBdev4", 00:25:37.519 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:37.519 "is_configured": true, 00:25:37.519 "data_offset": 2048, 00:25:37.519 "data_size": 63488 00:25:37.519 } 00:25:37.519 ] 00:25:37.519 }' 00:25:37.519 21:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.519 21:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:38.090 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.090 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:38.348 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:38.348 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:38.348 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.606 21:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 16a5be39-1ce9-4891-9d14-250bfbd0a765 00:25:38.864 [2024-07-15 21:39:12.229915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:38.864 [2024-07-15 21:39:12.230313] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:38.864 [2024-07-15 21:39:12.230360] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:38.864 [2024-07-15 21:39:12.230497] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:38.864 NewBaseBdev 00:25:38.864 [2024-07-15 21:39:12.230904] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:38.864 [2024-07-15 21:39:12.230973] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:25:38.864 [2024-07-15 21:39:12.231153] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.123 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:39.123 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:39.123 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:39.123 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:39.123 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:39.123 21:39:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:39.123 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:39.123 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:39.382 [ 00:25:39.382 { 00:25:39.382 "name": "NewBaseBdev", 00:25:39.382 "aliases": [ 00:25:39.382 "16a5be39-1ce9-4891-9d14-250bfbd0a765" 00:25:39.382 ], 00:25:39.382 "product_name": "Malloc disk", 00:25:39.382 "block_size": 512, 00:25:39.382 "num_blocks": 65536, 00:25:39.382 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:39.382 "assigned_rate_limits": { 00:25:39.382 "rw_ios_per_sec": 0, 00:25:39.382 "rw_mbytes_per_sec": 0, 00:25:39.382 "r_mbytes_per_sec": 0, 00:25:39.382 "w_mbytes_per_sec": 0 00:25:39.382 }, 00:25:39.382 "claimed": true, 00:25:39.382 "claim_type": "exclusive_write", 00:25:39.382 "zoned": false, 00:25:39.382 "supported_io_types": { 00:25:39.382 "read": true, 00:25:39.382 "write": true, 00:25:39.382 "unmap": true, 00:25:39.382 "flush": true, 00:25:39.382 "reset": true, 00:25:39.382 "nvme_admin": false, 00:25:39.382 "nvme_io": false, 00:25:39.382 "nvme_io_md": false, 00:25:39.382 "write_zeroes": true, 00:25:39.382 "zcopy": true, 00:25:39.382 "get_zone_info": false, 00:25:39.382 "zone_management": false, 00:25:39.382 "zone_append": false, 00:25:39.382 "compare": false, 00:25:39.382 "compare_and_write": false, 00:25:39.382 "abort": true, 00:25:39.382 "seek_hole": false, 00:25:39.382 "seek_data": false, 00:25:39.382 "copy": true, 00:25:39.382 "nvme_iov_md": false 00:25:39.382 }, 00:25:39.382 "memory_domains": [ 00:25:39.382 { 00:25:39.382 "dma_device_id": "system", 00:25:39.382 "dma_device_type": 1 00:25:39.382 }, 00:25:39.382 { 00:25:39.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.382 "dma_device_type": 2 00:25:39.382 } 00:25:39.382 ], 00:25:39.382 "driver_specific": {} 00:25:39.382 } 00:25:39.382 ] 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.382 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.641 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.641 "name": "Existed_Raid", 00:25:39.641 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:39.641 "strip_size_kb": 64, 00:25:39.641 "state": "online", 00:25:39.641 "raid_level": "concat", 00:25:39.641 "superblock": true, 00:25:39.641 "num_base_bdevs": 4, 00:25:39.641 "num_base_bdevs_discovered": 4, 00:25:39.641 "num_base_bdevs_operational": 4, 00:25:39.641 "base_bdevs_list": [ 00:25:39.641 { 00:25:39.641 "name": "NewBaseBdev", 00:25:39.641 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:39.641 "is_configured": true, 00:25:39.641 "data_offset": 2048, 00:25:39.641 "data_size": 63488 00:25:39.641 }, 00:25:39.641 { 00:25:39.641 "name": "BaseBdev2", 00:25:39.641 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:39.641 "is_configured": true, 00:25:39.641 "data_offset": 2048, 00:25:39.641 "data_size": 63488 00:25:39.641 }, 00:25:39.641 { 00:25:39.641 "name": "BaseBdev3", 00:25:39.641 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:39.641 "is_configured": true, 00:25:39.641 "data_offset": 2048, 00:25:39.641 "data_size": 63488 00:25:39.641 }, 00:25:39.641 { 00:25:39.641 "name": "BaseBdev4", 00:25:39.641 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:39.641 "is_configured": true, 00:25:39.641 "data_offset": 2048, 00:25:39.641 "data_size": 63488 00:25:39.641 } 00:25:39.641 ] 00:25:39.641 }' 00:25:39.641 21:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.641 21:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:40.575 [2024-07-15 21:39:13.819849] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:40.575 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:40.575 "name": "Existed_Raid", 00:25:40.575 "aliases": [ 00:25:40.575 "6f74b0fb-44f5-423d-8716-39b64a3ff48a" 00:25:40.575 ], 00:25:40.575 "product_name": "Raid Volume", 00:25:40.575 "block_size": 512, 00:25:40.575 "num_blocks": 253952, 00:25:40.575 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:40.575 "assigned_rate_limits": { 00:25:40.575 "rw_ios_per_sec": 0, 00:25:40.575 "rw_mbytes_per_sec": 0, 00:25:40.575 "r_mbytes_per_sec": 0, 00:25:40.575 "w_mbytes_per_sec": 0 00:25:40.575 }, 
00:25:40.575 "claimed": false, 00:25:40.575 "zoned": false, 00:25:40.575 "supported_io_types": { 00:25:40.575 "read": true, 00:25:40.575 "write": true, 00:25:40.575 "unmap": true, 00:25:40.575 "flush": true, 00:25:40.575 "reset": true, 00:25:40.575 "nvme_admin": false, 00:25:40.575 "nvme_io": false, 00:25:40.575 "nvme_io_md": false, 00:25:40.575 "write_zeroes": true, 00:25:40.575 "zcopy": false, 00:25:40.575 "get_zone_info": false, 00:25:40.575 "zone_management": false, 00:25:40.575 "zone_append": false, 00:25:40.575 "compare": false, 00:25:40.575 "compare_and_write": false, 00:25:40.575 "abort": false, 00:25:40.575 "seek_hole": false, 00:25:40.575 "seek_data": false, 00:25:40.575 "copy": false, 00:25:40.575 "nvme_iov_md": false 00:25:40.575 }, 00:25:40.575 "memory_domains": [ 00:25:40.575 { 00:25:40.575 "dma_device_id": "system", 00:25:40.575 "dma_device_type": 1 00:25:40.575 }, 00:25:40.575 { 00:25:40.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.575 "dma_device_type": 2 00:25:40.575 }, 00:25:40.575 { 00:25:40.575 "dma_device_id": "system", 00:25:40.575 "dma_device_type": 1 00:25:40.575 }, 00:25:40.575 { 00:25:40.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.575 "dma_device_type": 2 00:25:40.575 }, 00:25:40.575 { 00:25:40.575 "dma_device_id": "system", 00:25:40.575 "dma_device_type": 1 00:25:40.575 }, 00:25:40.575 { 00:25:40.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.575 "dma_device_type": 2 00:25:40.575 }, 00:25:40.575 { 00:25:40.575 "dma_device_id": "system", 00:25:40.575 "dma_device_type": 1 00:25:40.575 }, 00:25:40.575 { 00:25:40.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.575 "dma_device_type": 2 00:25:40.576 } 00:25:40.576 ], 00:25:40.576 "driver_specific": { 00:25:40.576 "raid": { 00:25:40.576 "uuid": "6f74b0fb-44f5-423d-8716-39b64a3ff48a", 00:25:40.576 "strip_size_kb": 64, 00:25:40.576 "state": "online", 00:25:40.576 "raid_level": "concat", 00:25:40.576 "superblock": true, 00:25:40.576 "num_base_bdevs": 4, 00:25:40.576 "num_base_bdevs_discovered": 4, 00:25:40.576 "num_base_bdevs_operational": 4, 00:25:40.576 "base_bdevs_list": [ 00:25:40.576 { 00:25:40.576 "name": "NewBaseBdev", 00:25:40.576 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:40.576 "is_configured": true, 00:25:40.576 "data_offset": 2048, 00:25:40.576 "data_size": 63488 00:25:40.576 }, 00:25:40.576 { 00:25:40.576 "name": "BaseBdev2", 00:25:40.576 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:40.576 "is_configured": true, 00:25:40.576 "data_offset": 2048, 00:25:40.576 "data_size": 63488 00:25:40.576 }, 00:25:40.576 { 00:25:40.576 "name": "BaseBdev3", 00:25:40.576 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:40.576 "is_configured": true, 00:25:40.576 "data_offset": 2048, 00:25:40.576 "data_size": 63488 00:25:40.576 }, 00:25:40.576 { 00:25:40.576 "name": "BaseBdev4", 00:25:40.576 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:40.576 "is_configured": true, 00:25:40.576 "data_offset": 2048, 00:25:40.576 "data_size": 63488 00:25:40.576 } 00:25:40.576 ] 00:25:40.576 } 00:25:40.576 } 00:25:40.576 }' 00:25:40.576 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:40.576 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:40.576 BaseBdev2 00:25:40.576 BaseBdev3 00:25:40.576 BaseBdev4' 00:25:40.576 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:25:40.576 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:40.576 21:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:40.839 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:40.839 "name": "NewBaseBdev", 00:25:40.839 "aliases": [ 00:25:40.839 "16a5be39-1ce9-4891-9d14-250bfbd0a765" 00:25:40.839 ], 00:25:40.839 "product_name": "Malloc disk", 00:25:40.839 "block_size": 512, 00:25:40.839 "num_blocks": 65536, 00:25:40.839 "uuid": "16a5be39-1ce9-4891-9d14-250bfbd0a765", 00:25:40.839 "assigned_rate_limits": { 00:25:40.839 "rw_ios_per_sec": 0, 00:25:40.839 "rw_mbytes_per_sec": 0, 00:25:40.839 "r_mbytes_per_sec": 0, 00:25:40.839 "w_mbytes_per_sec": 0 00:25:40.839 }, 00:25:40.839 "claimed": true, 00:25:40.839 "claim_type": "exclusive_write", 00:25:40.839 "zoned": false, 00:25:40.839 "supported_io_types": { 00:25:40.839 "read": true, 00:25:40.839 "write": true, 00:25:40.839 "unmap": true, 00:25:40.839 "flush": true, 00:25:40.839 "reset": true, 00:25:40.839 "nvme_admin": false, 00:25:40.839 "nvme_io": false, 00:25:40.839 "nvme_io_md": false, 00:25:40.839 "write_zeroes": true, 00:25:40.839 "zcopy": true, 00:25:40.839 "get_zone_info": false, 00:25:40.839 "zone_management": false, 00:25:40.839 "zone_append": false, 00:25:40.839 "compare": false, 00:25:40.839 "compare_and_write": false, 00:25:40.839 "abort": true, 00:25:40.839 "seek_hole": false, 00:25:40.839 "seek_data": false, 00:25:40.839 "copy": true, 00:25:40.839 "nvme_iov_md": false 00:25:40.839 }, 00:25:40.839 "memory_domains": [ 00:25:40.839 { 00:25:40.839 "dma_device_id": "system", 00:25:40.839 "dma_device_type": 1 00:25:40.839 }, 00:25:40.839 { 00:25:40.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.839 "dma_device_type": 2 00:25:40.839 } 00:25:40.839 ], 00:25:40.839 "driver_specific": {} 00:25:40.839 }' 00:25:40.839 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.098 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.098 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:41.098 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.098 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.098 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:41.098 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.098 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.357 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.357 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.357 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.357 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.357 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:41.357 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:41.357 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:41.615 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:41.615 "name": "BaseBdev2", 00:25:41.615 "aliases": [ 00:25:41.615 "9778d967-72af-43ca-a40e-4bef9da6da5f" 00:25:41.615 ], 00:25:41.615 "product_name": "Malloc disk", 00:25:41.615 "block_size": 512, 00:25:41.615 "num_blocks": 65536, 00:25:41.615 "uuid": "9778d967-72af-43ca-a40e-4bef9da6da5f", 00:25:41.615 "assigned_rate_limits": { 00:25:41.615 "rw_ios_per_sec": 0, 00:25:41.615 "rw_mbytes_per_sec": 0, 00:25:41.615 "r_mbytes_per_sec": 0, 00:25:41.615 "w_mbytes_per_sec": 0 00:25:41.615 }, 00:25:41.615 "claimed": true, 00:25:41.615 "claim_type": "exclusive_write", 00:25:41.615 "zoned": false, 00:25:41.615 "supported_io_types": { 00:25:41.615 "read": true, 00:25:41.615 "write": true, 00:25:41.615 "unmap": true, 00:25:41.615 "flush": true, 00:25:41.615 "reset": true, 00:25:41.615 "nvme_admin": false, 00:25:41.615 "nvme_io": false, 00:25:41.615 "nvme_io_md": false, 00:25:41.615 "write_zeroes": true, 00:25:41.615 "zcopy": true, 00:25:41.615 "get_zone_info": false, 00:25:41.615 "zone_management": false, 00:25:41.615 "zone_append": false, 00:25:41.615 "compare": false, 00:25:41.615 "compare_and_write": false, 00:25:41.615 "abort": true, 00:25:41.615 "seek_hole": false, 00:25:41.615 "seek_data": false, 00:25:41.615 "copy": true, 00:25:41.615 "nvme_iov_md": false 00:25:41.615 }, 00:25:41.615 "memory_domains": [ 00:25:41.615 { 00:25:41.615 "dma_device_id": "system", 00:25:41.615 "dma_device_type": 1 00:25:41.615 }, 00:25:41.615 { 00:25:41.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.615 "dma_device_type": 2 00:25:41.615 } 00:25:41.615 ], 00:25:41.615 "driver_specific": {} 00:25:41.615 }' 00:25:41.616 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.616 21:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.874 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:41.874 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.874 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.874 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:41.874 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.874 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.874 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.874 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:42.133 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:42.133 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:42.133 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:42.133 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:42.133 21:39:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:42.392 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:42.392 "name": "BaseBdev3", 00:25:42.392 "aliases": [ 00:25:42.392 "73f662bc-6f54-45b8-b513-22a708fb594b" 00:25:42.392 ], 00:25:42.392 "product_name": "Malloc disk", 00:25:42.392 "block_size": 512, 00:25:42.392 "num_blocks": 65536, 00:25:42.392 "uuid": "73f662bc-6f54-45b8-b513-22a708fb594b", 00:25:42.392 "assigned_rate_limits": { 00:25:42.392 "rw_ios_per_sec": 0, 00:25:42.392 "rw_mbytes_per_sec": 0, 00:25:42.392 "r_mbytes_per_sec": 0, 00:25:42.392 "w_mbytes_per_sec": 0 00:25:42.392 }, 00:25:42.392 "claimed": true, 00:25:42.392 "claim_type": "exclusive_write", 00:25:42.392 "zoned": false, 00:25:42.392 "supported_io_types": { 00:25:42.392 "read": true, 00:25:42.392 "write": true, 00:25:42.392 "unmap": true, 00:25:42.392 "flush": true, 00:25:42.392 "reset": true, 00:25:42.392 "nvme_admin": false, 00:25:42.392 "nvme_io": false, 00:25:42.392 "nvme_io_md": false, 00:25:42.392 "write_zeroes": true, 00:25:42.392 "zcopy": true, 00:25:42.392 "get_zone_info": false, 00:25:42.392 "zone_management": false, 00:25:42.392 "zone_append": false, 00:25:42.392 "compare": false, 00:25:42.392 "compare_and_write": false, 00:25:42.392 "abort": true, 00:25:42.392 "seek_hole": false, 00:25:42.392 "seek_data": false, 00:25:42.392 "copy": true, 00:25:42.392 "nvme_iov_md": false 00:25:42.392 }, 00:25:42.392 "memory_domains": [ 00:25:42.392 { 00:25:42.392 "dma_device_id": "system", 00:25:42.392 "dma_device_type": 1 00:25:42.392 }, 00:25:42.392 { 00:25:42.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.392 "dma_device_type": 2 00:25:42.392 } 00:25:42.392 ], 00:25:42.392 "driver_specific": {} 00:25:42.392 }' 00:25:42.392 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:42.392 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:42.392 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:42.392 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:42.392 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:42.651 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:42.651 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:42.651 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:42.651 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:42.651 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:42.651 21:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:42.910 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:42.910 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:42.910 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:42.910 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:42.910 21:39:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:42.910 "name": "BaseBdev4", 00:25:42.910 "aliases": [ 00:25:42.910 "5a7de5c5-b345-4dc1-8858-125892cba7d1" 00:25:42.910 ], 00:25:42.910 "product_name": "Malloc disk", 00:25:42.910 "block_size": 512, 00:25:42.910 "num_blocks": 65536, 00:25:42.910 "uuid": "5a7de5c5-b345-4dc1-8858-125892cba7d1", 00:25:42.910 "assigned_rate_limits": { 00:25:42.910 "rw_ios_per_sec": 0, 00:25:42.910 "rw_mbytes_per_sec": 0, 00:25:42.910 "r_mbytes_per_sec": 0, 00:25:42.910 "w_mbytes_per_sec": 0 00:25:42.910 }, 00:25:42.910 "claimed": true, 00:25:42.910 "claim_type": "exclusive_write", 00:25:42.910 "zoned": false, 00:25:42.910 "supported_io_types": { 00:25:42.910 "read": true, 00:25:42.910 "write": true, 00:25:42.910 "unmap": true, 00:25:42.910 "flush": true, 00:25:42.910 "reset": true, 00:25:42.910 "nvme_admin": false, 00:25:42.910 "nvme_io": false, 00:25:42.910 "nvme_io_md": false, 00:25:42.910 "write_zeroes": true, 00:25:42.910 "zcopy": true, 00:25:42.910 "get_zone_info": false, 00:25:42.910 "zone_management": false, 00:25:42.910 "zone_append": false, 00:25:42.910 "compare": false, 00:25:42.910 "compare_and_write": false, 00:25:42.910 "abort": true, 00:25:42.910 "seek_hole": false, 00:25:42.910 "seek_data": false, 00:25:42.910 "copy": true, 00:25:42.910 "nvme_iov_md": false 00:25:42.910 }, 00:25:42.910 "memory_domains": [ 00:25:42.910 { 00:25:42.910 "dma_device_id": "system", 00:25:42.910 "dma_device_type": 1 00:25:42.910 }, 00:25:42.910 { 00:25:42.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.910 "dma_device_type": 2 00:25:42.910 } 00:25:42.910 ], 00:25:42.910 "driver_specific": {} 00:25:42.910 }' 00:25:42.910 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.169 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.169 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:43.169 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.169 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.169 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:43.169 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.429 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.429 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:43.429 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.429 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.429 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:43.429 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:43.709 [2024-07-15 21:39:16.915640] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:43.709 [2024-07-15 21:39:16.915769] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:43.709 [2024-07-15 21:39:16.915903] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:43.709 [2024-07-15 21:39:16.916013] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:43.709 [2024-07-15 21:39:16.916045] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 139569 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 139569 ']' 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 139569 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 139569 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 139569' 00:25:43.709 killing process with pid 139569 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 139569 00:25:43.709 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 139569 00:25:43.709 [2024-07-15 21:39:16.951225] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:44.277 [2024-07-15 21:39:17.393029] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:45.651 ************************************ 00:25:45.651 END TEST raid_state_function_test_sb 00:25:45.651 ************************************ 00:25:45.651 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:25:45.651 00:25:45.651 real 0m35.415s 00:25:45.651 user 1m5.962s 00:25:45.651 sys 0m3.842s 00:25:45.651 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.651 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.651 21:39:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:45.651 21:39:18 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:25:45.651 21:39:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:45.651 21:39:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.651 21:39:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:45.651 ************************************ 00:25:45.651 START TEST raid_superblock_test 00:25:45.651 ************************************ 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:45.651 21:39:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=140754 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 140754 /var/tmp/spdk-raid.sock 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 140754 ']' 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.651 21:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.651 [2024-07-15 21:39:18.927769] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
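The raid_superblock_test setup visible at this point follows the same harness pattern as the state-function test above: start a dedicated bdev_svc app on a private RPC socket with bdev_raid debug logging enabled, wait for the socket to accept RPCs, then drive every configuration step through scripts/rpc.py against that socket. A minimal sketch of that pattern, using the paths that appear in the trace; the polling loop below is a simplified stand-in for the framework's waitforlisten helper, not its actual implementation:

    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Start a bare bdev service with raid debug logging on a private RPC socket.
    "$SPDK_ROOT"/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
    svc_pid=$!

    # Poll until the Unix-domain socket answers RPCs (simplified stand-in for waitforlisten).
    until "$SPDK_ROOT"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # From here on, each step in the trace is an rpc.py call against this socket, e.g.:
    "$SPDK_ROOT"/scripts/rpc.py -s "$SOCK" bdev_malloc_create 32 512 -b malloc1

    # Tear down the service when the test is done.
    kill "$svc_pid"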
00:25:45.651 [2024-07-15 21:39:18.928062] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140754 ] 00:25:45.910 [2024-07-15 21:39:19.082915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.168 [2024-07-15 21:39:19.352311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.425 [2024-07-15 21:39:19.631688] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:46.683 21:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:46.941 malloc1 00:25:46.941 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:47.200 [2024-07-15 21:39:20.356667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:47.200 [2024-07-15 21:39:20.356879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.200 [2024-07-15 21:39:20.356931] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:47.200 [2024-07-15 21:39:20.356972] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.200 [2024-07-15 21:39:20.359127] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.200 [2024-07-15 21:39:20.359249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:47.200 pt1 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:47.200 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:47.459 malloc2 00:25:47.459 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:47.718 [2024-07-15 21:39:20.873577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:47.718 [2024-07-15 21:39:20.873794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.718 [2024-07-15 21:39:20.873856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:25:47.718 [2024-07-15 21:39:20.873897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.718 [2024-07-15 21:39:20.876066] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.718 [2024-07-15 21:39:20.876161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:47.718 pt2 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:47.718 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:47.978 malloc3 00:25:47.978 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:48.237 [2024-07-15 21:39:21.359642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:48.237 [2024-07-15 21:39:21.360243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.237 [2024-07-15 21:39:21.360401] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:25:48.237 [2024-07-15 21:39:21.360526] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.237 [2024-07-15 21:39:21.362813] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.237 [2024-07-15 21:39:21.363047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:48.237 pt3 00:25:48.237 
21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:48.237 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:48.237 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:25:48.237 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:25:48.237 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:48.237 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:48.237 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:48.237 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:48.237 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:48.237 malloc4 00:25:48.496 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:48.496 [2024-07-15 21:39:21.841179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:48.496 [2024-07-15 21:39:21.841615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.496 [2024-07-15 21:39:21.841751] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:48.496 [2024-07-15 21:39:21.841862] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.496 [2024-07-15 21:39:21.844164] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.496 [2024-07-15 21:39:21.844379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:48.496 pt4 00:25:48.496 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:48.496 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:48.496 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:48.756 [2024-07-15 21:39:22.064909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:48.756 [2024-07-15 21:39:22.066890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:48.756 [2024-07-15 21:39:22.067050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:48.756 [2024-07-15 21:39:22.067133] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:48.756 [2024-07-15 21:39:22.067401] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:25:48.756 [2024-07-15 21:39:22.067453] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:48.756 [2024-07-15 21:39:22.067643] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:48.756 [2024-07-15 21:39:22.068043] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:25:48.756 [2024-07-15 21:39:22.068091] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:25:48.756 [2024-07-15 21:39:22.068290] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.756 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.015 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.015 "name": "raid_bdev1", 00:25:49.015 "uuid": "b59d39a9-b662-4b91-bed2-d44126b0eafe", 00:25:49.015 "strip_size_kb": 64, 00:25:49.015 "state": "online", 00:25:49.015 "raid_level": "concat", 00:25:49.015 "superblock": true, 00:25:49.015 "num_base_bdevs": 4, 00:25:49.015 "num_base_bdevs_discovered": 4, 00:25:49.015 "num_base_bdevs_operational": 4, 00:25:49.015 "base_bdevs_list": [ 00:25:49.015 { 00:25:49.015 "name": "pt1", 00:25:49.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:49.015 "is_configured": true, 00:25:49.015 "data_offset": 2048, 00:25:49.015 "data_size": 63488 00:25:49.015 }, 00:25:49.015 { 00:25:49.015 "name": "pt2", 00:25:49.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:49.015 "is_configured": true, 00:25:49.015 "data_offset": 2048, 00:25:49.015 "data_size": 63488 00:25:49.015 }, 00:25:49.015 { 00:25:49.015 "name": "pt3", 00:25:49.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:49.015 "is_configured": true, 00:25:49.015 "data_offset": 2048, 00:25:49.015 "data_size": 63488 00:25:49.015 }, 00:25:49.015 { 00:25:49.015 "name": "pt4", 00:25:49.015 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:49.015 "is_configured": true, 00:25:49.015 "data_offset": 2048, 00:25:49.015 "data_size": 63488 00:25:49.015 } 00:25:49.015 ] 00:25:49.015 }' 00:25:49.015 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.015 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:49.956 [2024-07-15 21:39:23.211335] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:49.956 "name": "raid_bdev1", 00:25:49.956 "aliases": [ 00:25:49.956 "b59d39a9-b662-4b91-bed2-d44126b0eafe" 00:25:49.956 ], 00:25:49.956 "product_name": "Raid Volume", 00:25:49.956 "block_size": 512, 00:25:49.956 "num_blocks": 253952, 00:25:49.956 "uuid": "b59d39a9-b662-4b91-bed2-d44126b0eafe", 00:25:49.956 "assigned_rate_limits": { 00:25:49.956 "rw_ios_per_sec": 0, 00:25:49.956 "rw_mbytes_per_sec": 0, 00:25:49.956 "r_mbytes_per_sec": 0, 00:25:49.956 "w_mbytes_per_sec": 0 00:25:49.956 }, 00:25:49.956 "claimed": false, 00:25:49.956 "zoned": false, 00:25:49.956 "supported_io_types": { 00:25:49.956 "read": true, 00:25:49.956 "write": true, 00:25:49.956 "unmap": true, 00:25:49.956 "flush": true, 00:25:49.956 "reset": true, 00:25:49.956 "nvme_admin": false, 00:25:49.956 "nvme_io": false, 00:25:49.956 "nvme_io_md": false, 00:25:49.956 "write_zeroes": true, 00:25:49.956 "zcopy": false, 00:25:49.956 "get_zone_info": false, 00:25:49.956 "zone_management": false, 00:25:49.956 "zone_append": false, 00:25:49.956 "compare": false, 00:25:49.956 "compare_and_write": false, 00:25:49.956 "abort": false, 00:25:49.956 "seek_hole": false, 00:25:49.956 "seek_data": false, 00:25:49.956 "copy": false, 00:25:49.956 "nvme_iov_md": false 00:25:49.956 }, 00:25:49.956 "memory_domains": [ 00:25:49.956 { 00:25:49.956 "dma_device_id": "system", 00:25:49.956 "dma_device_type": 1 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.956 "dma_device_type": 2 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "dma_device_id": "system", 00:25:49.956 "dma_device_type": 1 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.956 "dma_device_type": 2 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "dma_device_id": "system", 00:25:49.956 "dma_device_type": 1 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.956 "dma_device_type": 2 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "dma_device_id": "system", 00:25:49.956 "dma_device_type": 1 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.956 "dma_device_type": 2 00:25:49.956 } 00:25:49.956 ], 00:25:49.956 "driver_specific": { 00:25:49.956 "raid": { 00:25:49.956 "uuid": "b59d39a9-b662-4b91-bed2-d44126b0eafe", 00:25:49.956 "strip_size_kb": 64, 00:25:49.956 "state": "online", 00:25:49.956 "raid_level": "concat", 00:25:49.956 "superblock": true, 00:25:49.956 "num_base_bdevs": 4, 00:25:49.956 "num_base_bdevs_discovered": 4, 00:25:49.956 "num_base_bdevs_operational": 4, 00:25:49.956 "base_bdevs_list": [ 00:25:49.956 { 00:25:49.956 "name": "pt1", 00:25:49.956 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:49.956 "is_configured": true, 00:25:49.956 "data_offset": 2048, 00:25:49.956 "data_size": 63488 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "name": "pt2", 00:25:49.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:49.956 "is_configured": true, 00:25:49.956 "data_offset": 2048, 00:25:49.956 "data_size": 63488 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "name": "pt3", 00:25:49.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:49.956 "is_configured": true, 00:25:49.956 "data_offset": 2048, 00:25:49.956 "data_size": 63488 00:25:49.956 }, 00:25:49.956 { 00:25:49.956 "name": "pt4", 00:25:49.956 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:49.956 "is_configured": true, 00:25:49.956 "data_offset": 2048, 00:25:49.956 "data_size": 63488 00:25:49.956 } 00:25:49.956 ] 00:25:49.956 } 00:25:49.956 } 00:25:49.956 }' 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:49.956 pt2 00:25:49.956 pt3 00:25:49.956 pt4' 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:49.956 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.217 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.217 "name": "pt1", 00:25:50.217 "aliases": [ 00:25:50.217 "00000000-0000-0000-0000-000000000001" 00:25:50.217 ], 00:25:50.217 "product_name": "passthru", 00:25:50.217 "block_size": 512, 00:25:50.217 "num_blocks": 65536, 00:25:50.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:50.217 "assigned_rate_limits": { 00:25:50.217 "rw_ios_per_sec": 0, 00:25:50.217 "rw_mbytes_per_sec": 0, 00:25:50.217 "r_mbytes_per_sec": 0, 00:25:50.217 "w_mbytes_per_sec": 0 00:25:50.217 }, 00:25:50.217 "claimed": true, 00:25:50.217 "claim_type": "exclusive_write", 00:25:50.217 "zoned": false, 00:25:50.217 "supported_io_types": { 00:25:50.217 "read": true, 00:25:50.217 "write": true, 00:25:50.217 "unmap": true, 00:25:50.217 "flush": true, 00:25:50.217 "reset": true, 00:25:50.217 "nvme_admin": false, 00:25:50.217 "nvme_io": false, 00:25:50.217 "nvme_io_md": false, 00:25:50.217 "write_zeroes": true, 00:25:50.217 "zcopy": true, 00:25:50.217 "get_zone_info": false, 00:25:50.217 "zone_management": false, 00:25:50.217 "zone_append": false, 00:25:50.217 "compare": false, 00:25:50.217 "compare_and_write": false, 00:25:50.217 "abort": true, 00:25:50.217 "seek_hole": false, 00:25:50.217 "seek_data": false, 00:25:50.217 "copy": true, 00:25:50.217 "nvme_iov_md": false 00:25:50.217 }, 00:25:50.217 "memory_domains": [ 00:25:50.217 { 00:25:50.217 "dma_device_id": "system", 00:25:50.217 "dma_device_type": 1 00:25:50.217 }, 00:25:50.217 { 00:25:50.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.217 "dma_device_type": 2 00:25:50.217 } 00:25:50.217 ], 00:25:50.217 "driver_specific": { 00:25:50.217 "passthru": { 00:25:50.217 "name": "pt1", 00:25:50.217 "base_bdev_name": "malloc1" 00:25:50.217 } 00:25:50.217 } 00:25:50.217 }' 00:25:50.217 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.217 21:39:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.477 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.477 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.477 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.477 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.477 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.477 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.477 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.477 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.736 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.736 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.736 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:50.736 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.736 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:50.996 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.996 "name": "pt2", 00:25:50.996 "aliases": [ 00:25:50.996 "00000000-0000-0000-0000-000000000002" 00:25:50.996 ], 00:25:50.996 "product_name": "passthru", 00:25:50.996 "block_size": 512, 00:25:50.996 "num_blocks": 65536, 00:25:50.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:50.996 "assigned_rate_limits": { 00:25:50.996 "rw_ios_per_sec": 0, 00:25:50.996 "rw_mbytes_per_sec": 0, 00:25:50.996 "r_mbytes_per_sec": 0, 00:25:50.996 "w_mbytes_per_sec": 0 00:25:50.996 }, 00:25:50.996 "claimed": true, 00:25:50.996 "claim_type": "exclusive_write", 00:25:50.996 "zoned": false, 00:25:50.996 "supported_io_types": { 00:25:50.996 "read": true, 00:25:50.996 "write": true, 00:25:50.996 "unmap": true, 00:25:50.996 "flush": true, 00:25:50.996 "reset": true, 00:25:50.996 "nvme_admin": false, 00:25:50.996 "nvme_io": false, 00:25:50.996 "nvme_io_md": false, 00:25:50.996 "write_zeroes": true, 00:25:50.996 "zcopy": true, 00:25:50.996 "get_zone_info": false, 00:25:50.996 "zone_management": false, 00:25:50.996 "zone_append": false, 00:25:50.996 "compare": false, 00:25:50.996 "compare_and_write": false, 00:25:50.996 "abort": true, 00:25:50.996 "seek_hole": false, 00:25:50.996 "seek_data": false, 00:25:50.996 "copy": true, 00:25:50.997 "nvme_iov_md": false 00:25:50.997 }, 00:25:50.997 "memory_domains": [ 00:25:50.997 { 00:25:50.997 "dma_device_id": "system", 00:25:50.997 "dma_device_type": 1 00:25:50.997 }, 00:25:50.997 { 00:25:50.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.997 "dma_device_type": 2 00:25:50.997 } 00:25:50.997 ], 00:25:50.997 "driver_specific": { 00:25:50.997 "passthru": { 00:25:50.997 "name": "pt2", 00:25:50.997 "base_bdev_name": "malloc2" 00:25:50.997 } 00:25:50.997 } 00:25:50.997 }' 00:25:50.997 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.997 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.997 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:25:50.997 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.997 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.997 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.997 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.256 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.256 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:51.256 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.256 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.256 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:51.256 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:51.256 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:51.256 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:51.516 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:51.516 "name": "pt3", 00:25:51.516 "aliases": [ 00:25:51.516 "00000000-0000-0000-0000-000000000003" 00:25:51.516 ], 00:25:51.516 "product_name": "passthru", 00:25:51.516 "block_size": 512, 00:25:51.516 "num_blocks": 65536, 00:25:51.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:51.516 "assigned_rate_limits": { 00:25:51.516 "rw_ios_per_sec": 0, 00:25:51.516 "rw_mbytes_per_sec": 0, 00:25:51.516 "r_mbytes_per_sec": 0, 00:25:51.516 "w_mbytes_per_sec": 0 00:25:51.516 }, 00:25:51.516 "claimed": true, 00:25:51.516 "claim_type": "exclusive_write", 00:25:51.516 "zoned": false, 00:25:51.516 "supported_io_types": { 00:25:51.516 "read": true, 00:25:51.516 "write": true, 00:25:51.516 "unmap": true, 00:25:51.516 "flush": true, 00:25:51.516 "reset": true, 00:25:51.516 "nvme_admin": false, 00:25:51.516 "nvme_io": false, 00:25:51.516 "nvme_io_md": false, 00:25:51.516 "write_zeroes": true, 00:25:51.516 "zcopy": true, 00:25:51.516 "get_zone_info": false, 00:25:51.516 "zone_management": false, 00:25:51.516 "zone_append": false, 00:25:51.516 "compare": false, 00:25:51.516 "compare_and_write": false, 00:25:51.516 "abort": true, 00:25:51.516 "seek_hole": false, 00:25:51.516 "seek_data": false, 00:25:51.516 "copy": true, 00:25:51.516 "nvme_iov_md": false 00:25:51.516 }, 00:25:51.516 "memory_domains": [ 00:25:51.516 { 00:25:51.516 "dma_device_id": "system", 00:25:51.516 "dma_device_type": 1 00:25:51.516 }, 00:25:51.516 { 00:25:51.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.516 "dma_device_type": 2 00:25:51.516 } 00:25:51.516 ], 00:25:51.516 "driver_specific": { 00:25:51.516 "passthru": { 00:25:51.516 "name": "pt3", 00:25:51.516 "base_bdev_name": "malloc3" 00:25:51.516 } 00:25:51.516 } 00:25:51.516 }' 00:25:51.516 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:51.516 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:51.516 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:51.516 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.777 21:39:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.777 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:51.777 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.777 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.777 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:51.777 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:52.036 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:52.036 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:52.036 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:52.036 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:52.036 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:52.296 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:52.296 "name": "pt4", 00:25:52.296 "aliases": [ 00:25:52.296 "00000000-0000-0000-0000-000000000004" 00:25:52.296 ], 00:25:52.296 "product_name": "passthru", 00:25:52.296 "block_size": 512, 00:25:52.296 "num_blocks": 65536, 00:25:52.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:52.296 "assigned_rate_limits": { 00:25:52.296 "rw_ios_per_sec": 0, 00:25:52.296 "rw_mbytes_per_sec": 0, 00:25:52.296 "r_mbytes_per_sec": 0, 00:25:52.296 "w_mbytes_per_sec": 0 00:25:52.296 }, 00:25:52.296 "claimed": true, 00:25:52.296 "claim_type": "exclusive_write", 00:25:52.296 "zoned": false, 00:25:52.296 "supported_io_types": { 00:25:52.296 "read": true, 00:25:52.296 "write": true, 00:25:52.296 "unmap": true, 00:25:52.296 "flush": true, 00:25:52.296 "reset": true, 00:25:52.296 "nvme_admin": false, 00:25:52.296 "nvme_io": false, 00:25:52.296 "nvme_io_md": false, 00:25:52.296 "write_zeroes": true, 00:25:52.296 "zcopy": true, 00:25:52.296 "get_zone_info": false, 00:25:52.296 "zone_management": false, 00:25:52.296 "zone_append": false, 00:25:52.296 "compare": false, 00:25:52.296 "compare_and_write": false, 00:25:52.296 "abort": true, 00:25:52.296 "seek_hole": false, 00:25:52.296 "seek_data": false, 00:25:52.296 "copy": true, 00:25:52.296 "nvme_iov_md": false 00:25:52.296 }, 00:25:52.296 "memory_domains": [ 00:25:52.296 { 00:25:52.296 "dma_device_id": "system", 00:25:52.296 "dma_device_type": 1 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.296 "dma_device_type": 2 00:25:52.296 } 00:25:52.296 ], 00:25:52.296 "driver_specific": { 00:25:52.296 "passthru": { 00:25:52.296 "name": "pt4", 00:25:52.296 "base_bdev_name": "malloc4" 00:25:52.296 } 00:25:52.296 } 00:25:52.296 }' 00:25:52.296 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:52.296 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:52.296 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:52.296 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:52.296 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:52.296 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:25:52.296 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:52.553 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:52.553 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:52.553 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:52.553 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:52.553 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:52.553 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:52.553 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:52.810 [2024-07-15 21:39:26.050738] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:52.810 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b59d39a9-b662-4b91-bed2-d44126b0eafe 00:25:52.810 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z b59d39a9-b662-4b91-bed2-d44126b0eafe ']' 00:25:52.810 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:53.068 [2024-07-15 21:39:26.266121] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:53.068 [2024-07-15 21:39:26.266202] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:53.068 [2024-07-15 21:39:26.266335] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:53.068 [2024-07-15 21:39:26.266427] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:53.068 [2024-07-15 21:39:26.266471] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:25:53.068 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.068 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:53.327 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:53.327 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:53.327 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:53.327 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:53.585 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:53.585 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:53.843 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:53.843 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:53.843 21:39:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:53.843 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:54.102 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:54.102 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:54.361 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:54.619 [2024-07-15 21:39:27.779555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:54.619 [2024-07-15 21:39:27.781386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:54.619 [2024-07-15 21:39:27.781476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:54.619 [2024-07-15 21:39:27.781523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:54.619 [2024-07-15 21:39:27.781598] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:54.619 [2024-07-15 21:39:27.782043] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:54.619 [2024-07-15 21:39:27.782183] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc3 00:25:54.619 [2024-07-15 21:39:27.782308] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:54.619 [2024-07-15 21:39:27.782404] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:54.619 [2024-07-15 21:39:27.782436] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:25:54.619 request: 00:25:54.619 { 00:25:54.619 "name": "raid_bdev1", 00:25:54.619 "raid_level": "concat", 00:25:54.619 "base_bdevs": [ 00:25:54.619 "malloc1", 00:25:54.619 "malloc2", 00:25:54.619 "malloc3", 00:25:54.619 "malloc4" 00:25:54.619 ], 00:25:54.619 "strip_size_kb": 64, 00:25:54.619 "superblock": false, 00:25:54.619 "method": "bdev_raid_create", 00:25:54.619 "req_id": 1 00:25:54.619 } 00:25:54.619 Got JSON-RPC error response 00:25:54.619 response: 00:25:54.619 { 00:25:54.619 "code": -17, 00:25:54.619 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:54.619 } 00:25:54.619 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:54.619 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:54.619 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:54.619 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:54.619 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.619 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:54.878 [2024-07-15 21:39:28.222826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:54.878 [2024-07-15 21:39:28.223000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.878 [2024-07-15 21:39:28.223071] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:54.878 [2024-07-15 21:39:28.223131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.878 [2024-07-15 21:39:28.225383] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.878 [2024-07-15 21:39:28.225471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:54.878 [2024-07-15 21:39:28.225635] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:54.878 [2024-07-15 21:39:28.225744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:54.878 pt1 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:54.878 21:39:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.878 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.137 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.137 "name": "raid_bdev1", 00:25:55.137 "uuid": "b59d39a9-b662-4b91-bed2-d44126b0eafe", 00:25:55.137 "strip_size_kb": 64, 00:25:55.137 "state": "configuring", 00:25:55.137 "raid_level": "concat", 00:25:55.137 "superblock": true, 00:25:55.137 "num_base_bdevs": 4, 00:25:55.137 "num_base_bdevs_discovered": 1, 00:25:55.137 "num_base_bdevs_operational": 4, 00:25:55.137 "base_bdevs_list": [ 00:25:55.137 { 00:25:55.137 "name": "pt1", 00:25:55.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:55.137 "is_configured": true, 00:25:55.137 "data_offset": 2048, 00:25:55.137 "data_size": 63488 00:25:55.137 }, 00:25:55.137 { 00:25:55.137 "name": null, 00:25:55.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:55.137 "is_configured": false, 00:25:55.137 "data_offset": 2048, 00:25:55.137 "data_size": 63488 00:25:55.137 }, 00:25:55.137 { 00:25:55.137 "name": null, 00:25:55.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:55.137 "is_configured": false, 00:25:55.137 "data_offset": 2048, 00:25:55.137 "data_size": 63488 00:25:55.137 }, 00:25:55.137 { 00:25:55.137 "name": null, 00:25:55.137 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:55.137 "is_configured": false, 00:25:55.137 "data_offset": 2048, 00:25:55.137 "data_size": 63488 00:25:55.137 } 00:25:55.137 ] 00:25:55.137 }' 00:25:55.137 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.137 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.073 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:25:56.073 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:56.073 [2024-07-15 21:39:29.320899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:56.073 [2024-07-15 21:39:29.321062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.073 [2024-07-15 21:39:29.321113] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:56.073 [2024-07-15 21:39:29.321178] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.073 [2024-07-15 21:39:29.321629] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:56.073 [2024-07-15 21:39:29.321690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:56.073 [2024-07-15 21:39:29.321821] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:56.073 [2024-07-15 21:39:29.321868] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:56.073 pt2 00:25:56.073 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:56.331 [2024-07-15 21:39:29.512606] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.331 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.590 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.590 "name": "raid_bdev1", 00:25:56.590 "uuid": "b59d39a9-b662-4b91-bed2-d44126b0eafe", 00:25:56.590 "strip_size_kb": 64, 00:25:56.590 "state": "configuring", 00:25:56.590 "raid_level": "concat", 00:25:56.590 "superblock": true, 00:25:56.590 "num_base_bdevs": 4, 00:25:56.590 "num_base_bdevs_discovered": 1, 00:25:56.590 "num_base_bdevs_operational": 4, 00:25:56.590 "base_bdevs_list": [ 00:25:56.590 { 00:25:56.590 "name": "pt1", 00:25:56.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:56.590 "is_configured": true, 00:25:56.590 "data_offset": 2048, 00:25:56.590 "data_size": 63488 00:25:56.590 }, 00:25:56.590 { 00:25:56.590 "name": null, 00:25:56.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:56.590 "is_configured": false, 00:25:56.590 "data_offset": 2048, 00:25:56.590 "data_size": 63488 00:25:56.590 }, 00:25:56.590 { 00:25:56.590 "name": null, 00:25:56.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:56.590 "is_configured": false, 00:25:56.590 "data_offset": 2048, 00:25:56.590 "data_size": 63488 00:25:56.590 }, 00:25:56.590 { 00:25:56.590 "name": null, 00:25:56.590 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:56.590 "is_configured": false, 00:25:56.590 "data_offset": 2048, 00:25:56.590 "data_size": 63488 00:25:56.590 } 00:25:56.590 ] 00:25:56.590 }' 00:25:56.590 21:39:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.590 21:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.156 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:57.156 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:57.156 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:57.156 [2024-07-15 21:39:30.486979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:57.156 [2024-07-15 21:39:30.487135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:57.156 [2024-07-15 21:39:30.487178] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:57.156 [2024-07-15 21:39:30.487229] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:57.156 [2024-07-15 21:39:30.487671] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:57.156 [2024-07-15 21:39:30.487736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:57.156 [2024-07-15 21:39:30.487852] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:57.156 [2024-07-15 21:39:30.487897] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:57.156 pt2 00:25:57.156 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:57.156 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:57.156 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:57.416 [2024-07-15 21:39:30.698658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:57.416 [2024-07-15 21:39:30.698806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:57.416 [2024-07-15 21:39:30.698843] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:57.416 [2024-07-15 21:39:30.698896] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:57.416 [2024-07-15 21:39:30.699371] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:57.416 [2024-07-15 21:39:30.699436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:57.416 [2024-07-15 21:39:30.699561] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:57.416 [2024-07-15 21:39:30.699606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:57.416 pt3 00:25:57.416 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:57.416 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:57.416 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:57.675 [2024-07-15 21:39:30.910243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:25:57.675 [2024-07-15 21:39:30.910380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:57.675 [2024-07-15 21:39:30.910418] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:57.675 [2024-07-15 21:39:30.910474] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:57.675 [2024-07-15 21:39:30.910946] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:57.675 [2024-07-15 21:39:30.911015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:57.675 [2024-07-15 21:39:30.911141] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:57.675 [2024-07-15 21:39:30.911207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:57.675 [2024-07-15 21:39:30.911360] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:25:57.675 [2024-07-15 21:39:30.911396] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:57.675 [2024-07-15 21:39:30.911535] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:57.675 [2024-07-15 21:39:30.911882] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:25:57.675 [2024-07-15 21:39:30.911928] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:25:57.675 [2024-07-15 21:39:30.912106] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.675 pt4 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.675 21:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.934 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.934 "name": "raid_bdev1", 00:25:57.934 "uuid": "b59d39a9-b662-4b91-bed2-d44126b0eafe", 00:25:57.934 "strip_size_kb": 64, 00:25:57.934 "state": "online", 00:25:57.934 
"raid_level": "concat", 00:25:57.934 "superblock": true, 00:25:57.934 "num_base_bdevs": 4, 00:25:57.934 "num_base_bdevs_discovered": 4, 00:25:57.934 "num_base_bdevs_operational": 4, 00:25:57.934 "base_bdevs_list": [ 00:25:57.934 { 00:25:57.934 "name": "pt1", 00:25:57.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:57.934 "is_configured": true, 00:25:57.934 "data_offset": 2048, 00:25:57.934 "data_size": 63488 00:25:57.934 }, 00:25:57.934 { 00:25:57.934 "name": "pt2", 00:25:57.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:57.934 "is_configured": true, 00:25:57.934 "data_offset": 2048, 00:25:57.934 "data_size": 63488 00:25:57.934 }, 00:25:57.934 { 00:25:57.934 "name": "pt3", 00:25:57.934 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:57.934 "is_configured": true, 00:25:57.934 "data_offset": 2048, 00:25:57.934 "data_size": 63488 00:25:57.934 }, 00:25:57.934 { 00:25:57.934 "name": "pt4", 00:25:57.934 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:57.934 "is_configured": true, 00:25:57.934 "data_offset": 2048, 00:25:57.934 "data_size": 63488 00:25:57.934 } 00:25:57.934 ] 00:25:57.934 }' 00:25:57.934 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.934 21:39:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.503 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:58.503 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:58.503 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:58.503 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:58.503 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:58.503 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:58.503 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:58.503 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:58.762 [2024-07-15 21:39:31.936844] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:58.762 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:58.762 "name": "raid_bdev1", 00:25:58.762 "aliases": [ 00:25:58.762 "b59d39a9-b662-4b91-bed2-d44126b0eafe" 00:25:58.762 ], 00:25:58.762 "product_name": "Raid Volume", 00:25:58.762 "block_size": 512, 00:25:58.762 "num_blocks": 253952, 00:25:58.762 "uuid": "b59d39a9-b662-4b91-bed2-d44126b0eafe", 00:25:58.762 "assigned_rate_limits": { 00:25:58.762 "rw_ios_per_sec": 0, 00:25:58.762 "rw_mbytes_per_sec": 0, 00:25:58.762 "r_mbytes_per_sec": 0, 00:25:58.762 "w_mbytes_per_sec": 0 00:25:58.762 }, 00:25:58.762 "claimed": false, 00:25:58.762 "zoned": false, 00:25:58.762 "supported_io_types": { 00:25:58.762 "read": true, 00:25:58.762 "write": true, 00:25:58.762 "unmap": true, 00:25:58.762 "flush": true, 00:25:58.762 "reset": true, 00:25:58.762 "nvme_admin": false, 00:25:58.762 "nvme_io": false, 00:25:58.762 "nvme_io_md": false, 00:25:58.762 "write_zeroes": true, 00:25:58.762 "zcopy": false, 00:25:58.762 "get_zone_info": false, 00:25:58.762 "zone_management": false, 00:25:58.762 "zone_append": false, 00:25:58.762 "compare": false, 00:25:58.762 "compare_and_write": false, 
00:25:58.762 "abort": false, 00:25:58.762 "seek_hole": false, 00:25:58.762 "seek_data": false, 00:25:58.762 "copy": false, 00:25:58.762 "nvme_iov_md": false 00:25:58.762 }, 00:25:58.762 "memory_domains": [ 00:25:58.762 { 00:25:58.762 "dma_device_id": "system", 00:25:58.762 "dma_device_type": 1 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.762 "dma_device_type": 2 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "dma_device_id": "system", 00:25:58.762 "dma_device_type": 1 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.762 "dma_device_type": 2 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "dma_device_id": "system", 00:25:58.762 "dma_device_type": 1 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.762 "dma_device_type": 2 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "dma_device_id": "system", 00:25:58.762 "dma_device_type": 1 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.762 "dma_device_type": 2 00:25:58.762 } 00:25:58.762 ], 00:25:58.762 "driver_specific": { 00:25:58.762 "raid": { 00:25:58.762 "uuid": "b59d39a9-b662-4b91-bed2-d44126b0eafe", 00:25:58.762 "strip_size_kb": 64, 00:25:58.762 "state": "online", 00:25:58.762 "raid_level": "concat", 00:25:58.762 "superblock": true, 00:25:58.762 "num_base_bdevs": 4, 00:25:58.762 "num_base_bdevs_discovered": 4, 00:25:58.762 "num_base_bdevs_operational": 4, 00:25:58.762 "base_bdevs_list": [ 00:25:58.762 { 00:25:58.762 "name": "pt1", 00:25:58.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:58.762 "is_configured": true, 00:25:58.762 "data_offset": 2048, 00:25:58.762 "data_size": 63488 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "name": "pt2", 00:25:58.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:58.762 "is_configured": true, 00:25:58.762 "data_offset": 2048, 00:25:58.762 "data_size": 63488 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "name": "pt3", 00:25:58.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:58.762 "is_configured": true, 00:25:58.762 "data_offset": 2048, 00:25:58.762 "data_size": 63488 00:25:58.762 }, 00:25:58.762 { 00:25:58.762 "name": "pt4", 00:25:58.762 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:58.762 "is_configured": true, 00:25:58.762 "data_offset": 2048, 00:25:58.762 "data_size": 63488 00:25:58.762 } 00:25:58.762 ] 00:25:58.762 } 00:25:58.762 } 00:25:58.762 }' 00:25:58.762 21:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:58.762 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:58.762 pt2 00:25:58.762 pt3 00:25:58.762 pt4' 00:25:58.762 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:58.762 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:58.762 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:59.020 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:59.020 "name": "pt1", 00:25:59.020 "aliases": [ 00:25:59.020 "00000000-0000-0000-0000-000000000001" 00:25:59.020 ], 00:25:59.020 "product_name": "passthru", 00:25:59.020 "block_size": 512, 00:25:59.020 "num_blocks": 65536, 00:25:59.020 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:59.020 "assigned_rate_limits": { 00:25:59.020 "rw_ios_per_sec": 0, 00:25:59.020 "rw_mbytes_per_sec": 0, 00:25:59.020 "r_mbytes_per_sec": 0, 00:25:59.020 "w_mbytes_per_sec": 0 00:25:59.020 }, 00:25:59.020 "claimed": true, 00:25:59.020 "claim_type": "exclusive_write", 00:25:59.020 "zoned": false, 00:25:59.020 "supported_io_types": { 00:25:59.020 "read": true, 00:25:59.020 "write": true, 00:25:59.020 "unmap": true, 00:25:59.020 "flush": true, 00:25:59.020 "reset": true, 00:25:59.020 "nvme_admin": false, 00:25:59.020 "nvme_io": false, 00:25:59.020 "nvme_io_md": false, 00:25:59.020 "write_zeroes": true, 00:25:59.020 "zcopy": true, 00:25:59.020 "get_zone_info": false, 00:25:59.020 "zone_management": false, 00:25:59.020 "zone_append": false, 00:25:59.020 "compare": false, 00:25:59.020 "compare_and_write": false, 00:25:59.020 "abort": true, 00:25:59.020 "seek_hole": false, 00:25:59.020 "seek_data": false, 00:25:59.020 "copy": true, 00:25:59.020 "nvme_iov_md": false 00:25:59.020 }, 00:25:59.020 "memory_domains": [ 00:25:59.020 { 00:25:59.020 "dma_device_id": "system", 00:25:59.020 "dma_device_type": 1 00:25:59.020 }, 00:25:59.020 { 00:25:59.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.020 "dma_device_type": 2 00:25:59.020 } 00:25:59.020 ], 00:25:59.020 "driver_specific": { 00:25:59.020 "passthru": { 00:25:59.020 "name": "pt1", 00:25:59.020 "base_bdev_name": "malloc1" 00:25:59.020 } 00:25:59.020 } 00:25:59.020 }' 00:25:59.020 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:59.020 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:59.020 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:59.020 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:59.279 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:59.279 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:59.279 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:59.279 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:59.279 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:59.279 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:59.279 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:59.538 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:59.538 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:59.538 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:59.538 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:59.798 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:59.798 "name": "pt2", 00:25:59.798 "aliases": [ 00:25:59.798 "00000000-0000-0000-0000-000000000002" 00:25:59.798 ], 00:25:59.798 "product_name": "passthru", 00:25:59.798 "block_size": 512, 00:25:59.798 "num_blocks": 65536, 00:25:59.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:59.798 "assigned_rate_limits": { 00:25:59.798 "rw_ios_per_sec": 0, 00:25:59.798 "rw_mbytes_per_sec": 0, 
00:25:59.798 "r_mbytes_per_sec": 0, 00:25:59.798 "w_mbytes_per_sec": 0 00:25:59.798 }, 00:25:59.798 "claimed": true, 00:25:59.798 "claim_type": "exclusive_write", 00:25:59.798 "zoned": false, 00:25:59.798 "supported_io_types": { 00:25:59.798 "read": true, 00:25:59.798 "write": true, 00:25:59.798 "unmap": true, 00:25:59.798 "flush": true, 00:25:59.798 "reset": true, 00:25:59.798 "nvme_admin": false, 00:25:59.798 "nvme_io": false, 00:25:59.798 "nvme_io_md": false, 00:25:59.798 "write_zeroes": true, 00:25:59.798 "zcopy": true, 00:25:59.798 "get_zone_info": false, 00:25:59.798 "zone_management": false, 00:25:59.798 "zone_append": false, 00:25:59.798 "compare": false, 00:25:59.798 "compare_and_write": false, 00:25:59.798 "abort": true, 00:25:59.798 "seek_hole": false, 00:25:59.798 "seek_data": false, 00:25:59.798 "copy": true, 00:25:59.798 "nvme_iov_md": false 00:25:59.798 }, 00:25:59.798 "memory_domains": [ 00:25:59.798 { 00:25:59.798 "dma_device_id": "system", 00:25:59.798 "dma_device_type": 1 00:25:59.798 }, 00:25:59.798 { 00:25:59.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.798 "dma_device_type": 2 00:25:59.798 } 00:25:59.798 ], 00:25:59.798 "driver_specific": { 00:25:59.798 "passthru": { 00:25:59.798 "name": "pt2", 00:25:59.798 "base_bdev_name": "malloc2" 00:25:59.798 } 00:25:59.798 } 00:25:59.798 }' 00:25:59.798 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:59.798 21:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:59.798 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:59.798 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:59.798 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:59.798 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:59.798 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.055 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.055 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:00.055 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.055 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.055 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:00.055 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:00.055 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:00.055 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:00.313 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:00.313 "name": "pt3", 00:26:00.313 "aliases": [ 00:26:00.313 "00000000-0000-0000-0000-000000000003" 00:26:00.313 ], 00:26:00.313 "product_name": "passthru", 00:26:00.313 "block_size": 512, 00:26:00.313 "num_blocks": 65536, 00:26:00.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:00.313 "assigned_rate_limits": { 00:26:00.313 "rw_ios_per_sec": 0, 00:26:00.313 "rw_mbytes_per_sec": 0, 00:26:00.313 "r_mbytes_per_sec": 0, 00:26:00.313 "w_mbytes_per_sec": 0 00:26:00.313 }, 00:26:00.313 "claimed": true, 00:26:00.313 "claim_type": 
"exclusive_write", 00:26:00.313 "zoned": false, 00:26:00.313 "supported_io_types": { 00:26:00.313 "read": true, 00:26:00.313 "write": true, 00:26:00.313 "unmap": true, 00:26:00.313 "flush": true, 00:26:00.313 "reset": true, 00:26:00.313 "nvme_admin": false, 00:26:00.313 "nvme_io": false, 00:26:00.313 "nvme_io_md": false, 00:26:00.313 "write_zeroes": true, 00:26:00.313 "zcopy": true, 00:26:00.313 "get_zone_info": false, 00:26:00.313 "zone_management": false, 00:26:00.313 "zone_append": false, 00:26:00.313 "compare": false, 00:26:00.313 "compare_and_write": false, 00:26:00.313 "abort": true, 00:26:00.313 "seek_hole": false, 00:26:00.313 "seek_data": false, 00:26:00.313 "copy": true, 00:26:00.313 "nvme_iov_md": false 00:26:00.313 }, 00:26:00.313 "memory_domains": [ 00:26:00.313 { 00:26:00.313 "dma_device_id": "system", 00:26:00.313 "dma_device_type": 1 00:26:00.313 }, 00:26:00.313 { 00:26:00.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.313 "dma_device_type": 2 00:26:00.313 } 00:26:00.313 ], 00:26:00.313 "driver_specific": { 00:26:00.313 "passthru": { 00:26:00.313 "name": "pt3", 00:26:00.313 "base_bdev_name": "malloc3" 00:26:00.313 } 00:26:00.313 } 00:26:00.313 }' 00:26:00.313 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:00.313 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:00.571 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:00.571 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:00.571 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:00.571 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:00.571 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.571 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.571 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:00.571 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.830 21:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.830 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:00.830 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:00.830 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:00.830 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:01.089 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:01.089 "name": "pt4", 00:26:01.089 "aliases": [ 00:26:01.089 "00000000-0000-0000-0000-000000000004" 00:26:01.089 ], 00:26:01.089 "product_name": "passthru", 00:26:01.089 "block_size": 512, 00:26:01.089 "num_blocks": 65536, 00:26:01.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:01.089 "assigned_rate_limits": { 00:26:01.089 "rw_ios_per_sec": 0, 00:26:01.089 "rw_mbytes_per_sec": 0, 00:26:01.089 "r_mbytes_per_sec": 0, 00:26:01.089 "w_mbytes_per_sec": 0 00:26:01.089 }, 00:26:01.089 "claimed": true, 00:26:01.089 "claim_type": "exclusive_write", 00:26:01.089 "zoned": false, 00:26:01.089 "supported_io_types": { 00:26:01.089 "read": true, 00:26:01.089 "write": true, 00:26:01.089 
"unmap": true, 00:26:01.089 "flush": true, 00:26:01.089 "reset": true, 00:26:01.089 "nvme_admin": false, 00:26:01.089 "nvme_io": false, 00:26:01.089 "nvme_io_md": false, 00:26:01.089 "write_zeroes": true, 00:26:01.089 "zcopy": true, 00:26:01.089 "get_zone_info": false, 00:26:01.089 "zone_management": false, 00:26:01.089 "zone_append": false, 00:26:01.089 "compare": false, 00:26:01.089 "compare_and_write": false, 00:26:01.089 "abort": true, 00:26:01.089 "seek_hole": false, 00:26:01.089 "seek_data": false, 00:26:01.089 "copy": true, 00:26:01.089 "nvme_iov_md": false 00:26:01.089 }, 00:26:01.089 "memory_domains": [ 00:26:01.089 { 00:26:01.089 "dma_device_id": "system", 00:26:01.089 "dma_device_type": 1 00:26:01.089 }, 00:26:01.089 { 00:26:01.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.089 "dma_device_type": 2 00:26:01.089 } 00:26:01.089 ], 00:26:01.089 "driver_specific": { 00:26:01.089 "passthru": { 00:26:01.089 "name": "pt4", 00:26:01.089 "base_bdev_name": "malloc4" 00:26:01.089 } 00:26:01.089 } 00:26:01.089 }' 00:26:01.089 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.089 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.089 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:01.089 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.089 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:01.348 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:01.607 [2024-07-15 21:39:34.907830] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' b59d39a9-b662-4b91-bed2-d44126b0eafe '!=' b59d39a9-b662-4b91-bed2-d44126b0eafe ']' 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 140754 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 140754 ']' 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 140754 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:26:01.607 21:39:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140754 00:26:01.607 killing process with pid 140754 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140754' 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 140754 00:26:01.607 21:39:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 140754 00:26:01.607 [2024-07-15 21:39:34.958446] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:01.607 [2024-07-15 21:39:34.958537] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:01.607 [2024-07-15 21:39:34.958607] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:01.607 [2024-07-15 21:39:34.958656] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:26:02.176 [2024-07-15 21:39:35.385624] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:03.558 ************************************ 00:26:03.558 END TEST raid_superblock_test 00:26:03.558 ************************************ 00:26:03.558 21:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:26:03.558 00:26:03.558 real 0m17.907s 00:26:03.558 user 0m31.984s 00:26:03.558 sys 0m2.183s 00:26:03.558 21:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:03.558 21:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.558 21:39:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:03.558 21:39:36 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:26:03.558 21:39:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:03.558 21:39:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.558 21:39:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:03.558 ************************************ 00:26:03.558 START TEST raid_read_error_test 00:26:03.558 ************************************ 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.7g1jdyMpin 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=141314 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 141314 /var/tmp/spdk-raid.sock 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 141314 ']' 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:03.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
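A note on the pattern visible in the trace above: raid_io_error_test does not run I/O in the test shell itself. It launches a dedicated bdevperf instance in wait-for-tests mode on a private RPC socket, records its pid, and then drives it entirely through rpc.py. The lines below are a minimal stand-alone sketch of that setup, not the test script; the binary path, socket and workload options are copied from this log, while the polling loop on rpc_get_methods is an assumed stand-in for the waitforlisten helper from autotest_common.sh.

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock

# Start bdevperf on its own RPC socket with the workload the test uses:
# 60 s randrw, 50 % read mix, 128k I/O size, queue depth 1; -z makes it wait
# for an explicit perform_tests RPC instead of starting I/O immediately.
"$SPDK"/build/examples/bdevperf -r "$SOCK" -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
raid_pid=$!    # the test later hands this pid to killprocess

# Block until the RPC socket answers before issuing any bdev_* RPCs
# (assumed replacement for: waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock).
until "$SPDK"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done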
00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.558 21:39:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.558 [2024-07-15 21:39:36.910275] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:26:03.558 [2024-07-15 21:39:36.910543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141314 ] 00:26:03.817 [2024-07-15 21:39:37.061278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.076 [2024-07-15 21:39:37.267064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.336 [2024-07-15 21:39:37.493543] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:04.596 21:39:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:04.596 21:39:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:04.596 21:39:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:04.596 21:39:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:04.856 BaseBdev1_malloc 00:26:04.856 21:39:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:05.114 true 00:26:05.114 21:39:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:05.373 [2024-07-15 21:39:38.498685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:05.373 [2024-07-15 21:39:38.498868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.373 [2024-07-15 21:39:38.498923] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:05.373 [2024-07-15 21:39:38.498966] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.373 [2024-07-15 21:39:38.501332] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.373 [2024-07-15 21:39:38.501440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:05.373 BaseBdev1 00:26:05.373 21:39:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:05.373 21:39:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:05.632 BaseBdev2_malloc 00:26:05.632 21:39:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:05.632 true 00:26:05.632 21:39:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:05.892 [2024-07-15 21:39:39.190663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:26:05.892 [2024-07-15 21:39:39.190877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.892 [2024-07-15 21:39:39.190953] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:05.892 [2024-07-15 21:39:39.190999] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.892 [2024-07-15 21:39:39.193243] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.892 [2024-07-15 21:39:39.193349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:05.892 BaseBdev2 00:26:05.892 21:39:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:05.892 21:39:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:06.151 BaseBdev3_malloc 00:26:06.151 21:39:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:06.410 true 00:26:06.410 21:39:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:06.678 [2024-07-15 21:39:39.873379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:06.678 [2024-07-15 21:39:39.873567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.678 [2024-07-15 21:39:39.873649] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:06.678 [2024-07-15 21:39:39.873695] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.678 [2024-07-15 21:39:39.876002] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:06.678 [2024-07-15 21:39:39.876118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:06.678 BaseBdev3 00:26:06.678 21:39:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:06.678 21:39:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:06.946 BaseBdev4_malloc 00:26:06.947 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:07.206 true 00:26:07.206 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:07.206 [2024-07-15 21:39:40.557567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:07.206 [2024-07-15 21:39:40.557759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.206 [2024-07-15 21:39:40.557829] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:07.206 [2024-07-15 21:39:40.557876] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.206 [2024-07-15 21:39:40.560215] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:26:07.206 [2024-07-15 21:39:40.560341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:07.206 BaseBdev4 00:26:07.206 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:07.465 [2024-07-15 21:39:40.777258] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:07.465 [2024-07-15 21:39:40.779304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:07.465 [2024-07-15 21:39:40.779447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:07.465 [2024-07-15 21:39:40.779546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:07.465 [2024-07-15 21:39:40.779845] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:26:07.465 [2024-07-15 21:39:40.779892] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:07.465 [2024-07-15 21:39:40.780072] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:26:07.465 [2024-07-15 21:39:40.780470] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:26:07.465 [2024-07-15 21:39:40.780514] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:26:07.465 [2024-07-15 21:39:40.780704] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.465 21:39:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.724 21:39:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.724 "name": "raid_bdev1", 00:26:07.724 "uuid": "a4c57c91-24a1-4e31-8ad2-080f66898079", 00:26:07.724 "strip_size_kb": 64, 00:26:07.724 "state": "online", 00:26:07.724 "raid_level": "concat", 00:26:07.724 "superblock": true, 00:26:07.724 "num_base_bdevs": 4, 00:26:07.724 "num_base_bdevs_discovered": 4, 00:26:07.724 
"num_base_bdevs_operational": 4, 00:26:07.724 "base_bdevs_list": [ 00:26:07.724 { 00:26:07.724 "name": "BaseBdev1", 00:26:07.724 "uuid": "7d9e137b-9f03-56db-a3fb-797463bf4c1a", 00:26:07.724 "is_configured": true, 00:26:07.724 "data_offset": 2048, 00:26:07.724 "data_size": 63488 00:26:07.724 }, 00:26:07.724 { 00:26:07.724 "name": "BaseBdev2", 00:26:07.724 "uuid": "061cf564-0024-5d57-a2b5-03462ed3fea2", 00:26:07.724 "is_configured": true, 00:26:07.724 "data_offset": 2048, 00:26:07.724 "data_size": 63488 00:26:07.724 }, 00:26:07.724 { 00:26:07.724 "name": "BaseBdev3", 00:26:07.724 "uuid": "2c0e95a4-445a-5a3e-89d3-ab35f4185e66", 00:26:07.724 "is_configured": true, 00:26:07.724 "data_offset": 2048, 00:26:07.724 "data_size": 63488 00:26:07.724 }, 00:26:07.724 { 00:26:07.724 "name": "BaseBdev4", 00:26:07.724 "uuid": "1d7f63a2-9c60-59dd-bcf9-cebec681f5b5", 00:26:07.724 "is_configured": true, 00:26:07.724 "data_offset": 2048, 00:26:07.724 "data_size": 63488 00:26:07.724 } 00:26:07.724 ] 00:26:07.724 }' 00:26:07.724 21:39:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.724 21:39:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.664 21:39:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:08.664 21:39:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:08.664 [2024-07-15 21:39:41.760718] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.601 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:09.859 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.859 "name": "raid_bdev1", 00:26:09.859 "uuid": "a4c57c91-24a1-4e31-8ad2-080f66898079", 00:26:09.859 "strip_size_kb": 64, 00:26:09.859 "state": "online", 00:26:09.859 "raid_level": "concat", 00:26:09.859 "superblock": true, 00:26:09.859 "num_base_bdevs": 4, 00:26:09.859 "num_base_bdevs_discovered": 4, 00:26:09.859 "num_base_bdevs_operational": 4, 00:26:09.859 "base_bdevs_list": [ 00:26:09.859 { 00:26:09.859 "name": "BaseBdev1", 00:26:09.859 "uuid": "7d9e137b-9f03-56db-a3fb-797463bf4c1a", 00:26:09.859 "is_configured": true, 00:26:09.859 "data_offset": 2048, 00:26:09.859 "data_size": 63488 00:26:09.859 }, 00:26:09.859 { 00:26:09.859 "name": "BaseBdev2", 00:26:09.859 "uuid": "061cf564-0024-5d57-a2b5-03462ed3fea2", 00:26:09.859 "is_configured": true, 00:26:09.859 "data_offset": 2048, 00:26:09.859 "data_size": 63488 00:26:09.859 }, 00:26:09.859 { 00:26:09.859 "name": "BaseBdev3", 00:26:09.859 "uuid": "2c0e95a4-445a-5a3e-89d3-ab35f4185e66", 00:26:09.859 "is_configured": true, 00:26:09.859 "data_offset": 2048, 00:26:09.859 "data_size": 63488 00:26:09.859 }, 00:26:09.859 { 00:26:09.860 "name": "BaseBdev4", 00:26:09.860 "uuid": "1d7f63a2-9c60-59dd-bcf9-cebec681f5b5", 00:26:09.860 "is_configured": true, 00:26:09.860 "data_offset": 2048, 00:26:09.860 "data_size": 63488 00:26:09.860 } 00:26:09.860 ] 00:26:09.860 }' 00:26:09.860 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.860 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.444 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:10.702 [2024-07-15 21:39:43.959371] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:10.702 [2024-07-15 21:39:43.959487] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:10.702 [2024-07-15 21:39:43.962169] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:10.702 [2024-07-15 21:39:43.962273] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:10.702 [2024-07-15 21:39:43.962331] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:10.702 [2024-07-15 21:39:43.962364] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:26:10.702 0 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 141314 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 141314 ']' 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 141314 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141314 00:26:10.702 killing process with pid 141314 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141314' 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 141314 00:26:10.702 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 141314 00:26:10.702 [2024-07-15 21:39:44.000594] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:10.960 [2024-07-15 21:39:44.332283] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.7g1jdyMpin 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:12.336 ************************************ 00:26:12.336 END TEST raid_read_error_test 00:26:12.336 ************************************ 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:26:12.336 00:26:12.336 real 0m8.829s 00:26:12.336 user 0m13.379s 00:26:12.336 sys 0m1.055s 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:12.336 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.595 21:39:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:12.595 21:39:45 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:26:12.595 21:39:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:12.595 21:39:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.595 21:39:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:12.595 ************************************ 00:26:12.595 START TEST raid_write_error_test 00:26:12.595 ************************************ 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:12.595 21:39:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.9GRXSLyFEA 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=141543 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 141543 /var/tmp/spdk-raid.sock 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 141543 ']' 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:12.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.595 21:39:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.595 [2024-07-15 21:39:45.808028] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
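The write-error pass starting here rebuilds exactly the stack the read pass above just tore down, so it is worth condensing the RPC sequence once. The sketch below only restates, in order, calls that appear verbatim in this log (the loop over the four base bdevs is an abbreviation, and the exact pipe order of the final grep/awk check inside bdev_raid.sh may differ); the only functional change in the write pass is that bdev_error_inject_error is given write failure instead of read failure, and the result is read from the new bdevperf log /raidtest/tmp.9GRXSLyFEA.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Each base bdev is a three-layer stack: a malloc backing bdev, an error-injection
# bdev wrapped around it (exposed as EE_<malloc name>), and a passthru bdev that
# the RAID module actually consumes.
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
    $RPC bdev_error_create BaseBdev${i}_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done

# Assemble the four passthru bdevs into a concat array with a 64 KiB strip
# size and an on-disk superblock (-s).
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

# Inject failures into the first base bdev ("write failure" in this pass,
# "read failure" in the previous one), run the bdevperf workload, and confirm
# the array is still reported online with all four base bdevs discovered.
$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

# Tear down the array (the test then kills bdevperf) and check the failure
# rate it logged: concat has no redundancy, so the injected errors must
# surface as failed I/O and the test passes only if the rate is non-zero
# (0.43/s in this particular run).
$RPC bdev_raid_delete raid_bdev1
fail_per_s=$(grep -v Job /raidtest/tmp.9GRXSLyFEA | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s != "0.00" ]]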
00:26:12.595 [2024-07-15 21:39:45.808386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141543 ] 00:26:12.854 [2024-07-15 21:39:45.980121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.854 [2024-07-15 21:39:46.178810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.113 [2024-07-15 21:39:46.379850] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:13.371 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.372 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:13.372 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:13.372 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:13.631 BaseBdev1_malloc 00:26:13.631 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:13.890 true 00:26:13.890 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:14.149 [2024-07-15 21:39:47.300281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:14.149 [2024-07-15 21:39:47.300466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.149 [2024-07-15 21:39:47.300520] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:14.149 [2024-07-15 21:39:47.300558] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.149 [2024-07-15 21:39:47.302866] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.149 [2024-07-15 21:39:47.302969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:14.149 BaseBdev1 00:26:14.149 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:14.149 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:14.406 BaseBdev2_malloc 00:26:14.406 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:14.663 true 00:26:14.663 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:14.921 [2024-07-15 21:39:48.036633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:14.921 [2024-07-15 21:39:48.036842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.921 [2024-07-15 21:39:48.036915] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:14.921 [2024-07-15 
21:39:48.036958] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.921 [2024-07-15 21:39:48.039265] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.921 [2024-07-15 21:39:48.039368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:14.921 BaseBdev2 00:26:14.921 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:14.921 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:15.179 BaseBdev3_malloc 00:26:15.179 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:15.437 true 00:26:15.437 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:15.696 [2024-07-15 21:39:48.830112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:15.696 [2024-07-15 21:39:48.830300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.696 [2024-07-15 21:39:48.830357] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:15.696 [2024-07-15 21:39:48.830406] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.696 [2024-07-15 21:39:48.832693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.696 [2024-07-15 21:39:48.832803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:15.696 BaseBdev3 00:26:15.696 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:15.696 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:15.954 BaseBdev4_malloc 00:26:15.954 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:15.954 true 00:26:15.954 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:16.519 [2024-07-15 21:39:49.600352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:16.519 [2024-07-15 21:39:49.600551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.519 [2024-07-15 21:39:49.600629] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:16.519 [2024-07-15 21:39:49.600674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.519 [2024-07-15 21:39:49.602961] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.519 [2024-07-15 21:39:49.603063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:16.519 BaseBdev4 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:16.519 [2024-07-15 21:39:49.820012] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:16.519 [2024-07-15 21:39:49.821950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:16.519 [2024-07-15 21:39:49.822084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:16.519 [2024-07-15 21:39:49.822171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:16.519 [2024-07-15 21:39:49.822433] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:26:16.519 [2024-07-15 21:39:49.822471] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:16.519 [2024-07-15 21:39:49.822648] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:26:16.519 [2024-07-15 21:39:49.823021] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:26:16.519 [2024-07-15 21:39:49.823063] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:26:16.519 [2024-07-15 21:39:49.823280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.519 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.778 21:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.778 "name": "raid_bdev1", 00:26:16.778 "uuid": "e44ca80e-ec6a-43a5-9139-68faff6e9888", 00:26:16.778 "strip_size_kb": 64, 00:26:16.778 "state": "online", 00:26:16.778 "raid_level": "concat", 00:26:16.778 "superblock": true, 00:26:16.778 "num_base_bdevs": 4, 00:26:16.778 "num_base_bdevs_discovered": 4, 00:26:16.778 "num_base_bdevs_operational": 4, 00:26:16.778 "base_bdevs_list": [ 00:26:16.778 { 00:26:16.778 "name": "BaseBdev1", 00:26:16.778 "uuid": "cd43daf9-a5f6-546f-976f-d555b0b17067", 00:26:16.778 "is_configured": true, 00:26:16.778 "data_offset": 2048, 00:26:16.778 "data_size": 63488 00:26:16.778 }, 00:26:16.778 { 
00:26:16.778 "name": "BaseBdev2", 00:26:16.778 "uuid": "30299e00-bd95-5b69-b470-299d9c53778c", 00:26:16.778 "is_configured": true, 00:26:16.778 "data_offset": 2048, 00:26:16.778 "data_size": 63488 00:26:16.778 }, 00:26:16.778 { 00:26:16.778 "name": "BaseBdev3", 00:26:16.778 "uuid": "af96a4fa-c45b-5c2b-b906-6720989115da", 00:26:16.778 "is_configured": true, 00:26:16.778 "data_offset": 2048, 00:26:16.778 "data_size": 63488 00:26:16.778 }, 00:26:16.778 { 00:26:16.778 "name": "BaseBdev4", 00:26:16.778 "uuid": "f841b028-afe3-5b5c-9916-f2df7699d37a", 00:26:16.778 "is_configured": true, 00:26:16.778 "data_offset": 2048, 00:26:16.778 "data_size": 63488 00:26:16.778 } 00:26:16.778 ] 00:26:16.778 }' 00:26:16.778 21:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.778 21:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.713 21:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:17.713 21:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:17.713 [2024-07-15 21:39:50.831583] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:18.653 21:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:18.653 21:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.653 21:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.912 21:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:18.912 "name": "raid_bdev1", 00:26:18.912 "uuid": "e44ca80e-ec6a-43a5-9139-68faff6e9888", 00:26:18.912 "strip_size_kb": 64, 00:26:18.912 "state": "online", 00:26:18.912 
"raid_level": "concat", 00:26:18.912 "superblock": true, 00:26:18.912 "num_base_bdevs": 4, 00:26:18.912 "num_base_bdevs_discovered": 4, 00:26:18.912 "num_base_bdevs_operational": 4, 00:26:18.912 "base_bdevs_list": [ 00:26:18.912 { 00:26:18.912 "name": "BaseBdev1", 00:26:18.912 "uuid": "cd43daf9-a5f6-546f-976f-d555b0b17067", 00:26:18.912 "is_configured": true, 00:26:18.912 "data_offset": 2048, 00:26:18.912 "data_size": 63488 00:26:18.912 }, 00:26:18.912 { 00:26:18.912 "name": "BaseBdev2", 00:26:18.912 "uuid": "30299e00-bd95-5b69-b470-299d9c53778c", 00:26:18.912 "is_configured": true, 00:26:18.912 "data_offset": 2048, 00:26:18.912 "data_size": 63488 00:26:18.912 }, 00:26:18.912 { 00:26:18.912 "name": "BaseBdev3", 00:26:18.912 "uuid": "af96a4fa-c45b-5c2b-b906-6720989115da", 00:26:18.912 "is_configured": true, 00:26:18.912 "data_offset": 2048, 00:26:18.912 "data_size": 63488 00:26:18.912 }, 00:26:18.912 { 00:26:18.912 "name": "BaseBdev4", 00:26:18.912 "uuid": "f841b028-afe3-5b5c-9916-f2df7699d37a", 00:26:18.912 "is_configured": true, 00:26:18.912 "data_offset": 2048, 00:26:18.912 "data_size": 63488 00:26:18.912 } 00:26:18.912 ] 00:26:18.912 }' 00:26:18.912 21:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:18.912 21:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.851 21:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:19.851 [2024-07-15 21:39:53.146797] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:19.851 [2024-07-15 21:39:53.146905] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:19.851 [2024-07-15 21:39:53.149373] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:19.851 [2024-07-15 21:39:53.149452] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:19.851 [2024-07-15 21:39:53.149502] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:19.851 [2024-07-15 21:39:53.149524] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:26:19.851 0 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 141543 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 141543 ']' 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 141543 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141543 00:26:19.851 killing process with pid 141543 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141543' 00:26:19.851 21:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 141543 00:26:19.851 21:39:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 141543 00:26:19.851 [2024-07-15 21:39:53.205155] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:20.418 [2024-07-15 21:39:53.515132] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.9GRXSLyFEA 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:21.795 ************************************ 00:26:21.795 END TEST raid_write_error_test 00:26:21.795 ************************************ 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:26:21.795 00:26:21.795 real 0m9.077s 00:26:21.795 user 0m13.970s 00:26:21.795 sys 0m1.060s 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.795 21:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.795 21:39:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:21.795 21:39:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:26:21.795 21:39:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:26:21.795 21:39:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:21.795 21:39:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.795 21:39:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:21.795 ************************************ 00:26:21.795 START TEST raid_state_function_test 00:26:21.795 ************************************ 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.795 21:39:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:26:21.795 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=141776 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 141776' 00:26:21.796 Process raid pid: 141776 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 141776 /var/tmp/spdk-raid.sock 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 141776 ']' 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:21.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.796 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.796 [2024-07-15 21:39:54.952413] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
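The state-function test drives a standalone bdev_svc app entirely over JSON-RPC. What the trace walks through next is, roughly, the sequence sketched below; the binary paths and the /var/tmp/spdk-raid.sock socket are the ones visible in the trace, the shell variables are added here only for readability, and the real harness additionally waits for the app to start listening before issuing RPCs.
  svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Start the bdev service app on the raid test socket with bdev_raid debug logging.
  $svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # Creating the raid1 volume before any base bdev exists leaves it in state
  # "configuring" with num_base_bdevs_discovered == 0; each base bdev that
  # appears later is claimed and bumps the discovered count.
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'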
00:26:21.796 [2024-07-15 21:39:54.952676] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.796 [2024-07-15 21:39:55.100523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.055 [2024-07-15 21:39:55.300775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.313 [2024-07-15 21:39:55.506062] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:22.571 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.571 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:26:22.571 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:22.830 [2024-07-15 21:39:55.972158] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:22.830 [2024-07-15 21:39:55.972303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:22.830 [2024-07-15 21:39:55.972350] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:22.830 [2024-07-15 21:39:55.972381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:22.830 [2024-07-15 21:39:55.972399] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:22.830 [2024-07-15 21:39:55.972419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:22.830 [2024-07-15 21:39:55.972443] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:22.830 [2024-07-15 21:39:55.972493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.830 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:26:22.830 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:22.830 "name": "Existed_Raid", 00:26:22.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.830 "strip_size_kb": 0, 00:26:22.830 "state": "configuring", 00:26:22.830 "raid_level": "raid1", 00:26:22.830 "superblock": false, 00:26:22.830 "num_base_bdevs": 4, 00:26:22.830 "num_base_bdevs_discovered": 0, 00:26:22.830 "num_base_bdevs_operational": 4, 00:26:22.830 "base_bdevs_list": [ 00:26:22.830 { 00:26:22.830 "name": "BaseBdev1", 00:26:22.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.830 "is_configured": false, 00:26:22.830 "data_offset": 0, 00:26:22.830 "data_size": 0 00:26:22.830 }, 00:26:22.830 { 00:26:22.830 "name": "BaseBdev2", 00:26:22.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.830 "is_configured": false, 00:26:22.830 "data_offset": 0, 00:26:22.830 "data_size": 0 00:26:22.830 }, 00:26:22.830 { 00:26:22.830 "name": "BaseBdev3", 00:26:22.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.830 "is_configured": false, 00:26:22.830 "data_offset": 0, 00:26:22.830 "data_size": 0 00:26:22.830 }, 00:26:22.830 { 00:26:22.830 "name": "BaseBdev4", 00:26:22.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.830 "is_configured": false, 00:26:22.830 "data_offset": 0, 00:26:22.830 "data_size": 0 00:26:22.830 } 00:26:22.830 ] 00:26:22.830 }' 00:26:22.830 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:22.830 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.765 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:23.765 [2024-07-15 21:39:56.958568] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:23.765 [2024-07-15 21:39:56.958652] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:26:23.765 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:24.024 [2024-07-15 21:39:57.150254] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:24.024 [2024-07-15 21:39:57.150363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:24.024 [2024-07-15 21:39:57.150389] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:24.024 [2024-07-15 21:39:57.150453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:24.024 [2024-07-15 21:39:57.150518] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:24.024 [2024-07-15 21:39:57.150556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:24.024 [2024-07-15 21:39:57.150588] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:24.024 [2024-07-15 21:39:57.150620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:24.024 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:24.024 [2024-07-15 21:39:57.392148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.024 BaseBdev1 00:26:24.282 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:24.282 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:24.282 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:24.282 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:24.282 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:24.282 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:24.282 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:24.282 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:24.540 [ 00:26:24.540 { 00:26:24.540 "name": "BaseBdev1", 00:26:24.540 "aliases": [ 00:26:24.540 "a7cca971-1046-4578-85e4-61b6cdda2e57" 00:26:24.540 ], 00:26:24.540 "product_name": "Malloc disk", 00:26:24.540 "block_size": 512, 00:26:24.540 "num_blocks": 65536, 00:26:24.540 "uuid": "a7cca971-1046-4578-85e4-61b6cdda2e57", 00:26:24.540 "assigned_rate_limits": { 00:26:24.540 "rw_ios_per_sec": 0, 00:26:24.540 "rw_mbytes_per_sec": 0, 00:26:24.540 "r_mbytes_per_sec": 0, 00:26:24.540 "w_mbytes_per_sec": 0 00:26:24.540 }, 00:26:24.540 "claimed": true, 00:26:24.540 "claim_type": "exclusive_write", 00:26:24.540 "zoned": false, 00:26:24.540 "supported_io_types": { 00:26:24.540 "read": true, 00:26:24.540 "write": true, 00:26:24.540 "unmap": true, 00:26:24.540 "flush": true, 00:26:24.540 "reset": true, 00:26:24.540 "nvme_admin": false, 00:26:24.540 "nvme_io": false, 00:26:24.540 "nvme_io_md": false, 00:26:24.540 "write_zeroes": true, 00:26:24.540 "zcopy": true, 00:26:24.540 "get_zone_info": false, 00:26:24.540 "zone_management": false, 00:26:24.540 "zone_append": false, 00:26:24.540 "compare": false, 00:26:24.540 "compare_and_write": false, 00:26:24.540 "abort": true, 00:26:24.540 "seek_hole": false, 00:26:24.540 "seek_data": false, 00:26:24.540 "copy": true, 00:26:24.540 "nvme_iov_md": false 00:26:24.540 }, 00:26:24.540 "memory_domains": [ 00:26:24.540 { 00:26:24.540 "dma_device_id": "system", 00:26:24.540 "dma_device_type": 1 00:26:24.540 }, 00:26:24.540 { 00:26:24.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.540 "dma_device_type": 2 00:26:24.540 } 00:26:24.540 ], 00:26:24.540 "driver_specific": {} 00:26:24.540 } 00:26:24.540 ] 00:26:24.540 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:24.540 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:24.540 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:24.540 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:24.540 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.540 21:39:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.540 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:24.540 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.541 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.541 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.541 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.541 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.541 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.799 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.799 "name": "Existed_Raid", 00:26:24.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.799 "strip_size_kb": 0, 00:26:24.799 "state": "configuring", 00:26:24.799 "raid_level": "raid1", 00:26:24.799 "superblock": false, 00:26:24.799 "num_base_bdevs": 4, 00:26:24.799 "num_base_bdevs_discovered": 1, 00:26:24.799 "num_base_bdevs_operational": 4, 00:26:24.799 "base_bdevs_list": [ 00:26:24.799 { 00:26:24.799 "name": "BaseBdev1", 00:26:24.799 "uuid": "a7cca971-1046-4578-85e4-61b6cdda2e57", 00:26:24.799 "is_configured": true, 00:26:24.799 "data_offset": 0, 00:26:24.799 "data_size": 65536 00:26:24.799 }, 00:26:24.799 { 00:26:24.799 "name": "BaseBdev2", 00:26:24.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.799 "is_configured": false, 00:26:24.799 "data_offset": 0, 00:26:24.799 "data_size": 0 00:26:24.799 }, 00:26:24.799 { 00:26:24.799 "name": "BaseBdev3", 00:26:24.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.799 "is_configured": false, 00:26:24.799 "data_offset": 0, 00:26:24.799 "data_size": 0 00:26:24.799 }, 00:26:24.799 { 00:26:24.799 "name": "BaseBdev4", 00:26:24.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.799 "is_configured": false, 00:26:24.799 "data_offset": 0, 00:26:24.799 "data_size": 0 00:26:24.799 } 00:26:24.799 ] 00:26:24.799 }' 00:26:24.799 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.799 21:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.364 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:25.621 [2024-07-15 21:39:58.753808] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:25.621 [2024-07-15 21:39:58.753893] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:25.621 [2024-07-15 21:39:58.945521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:25.621 [2024-07-15 21:39:58.947192] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:25.621 
[2024-07-15 21:39:58.947270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:25.621 [2024-07-15 21:39:58.947302] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:25.621 [2024-07-15 21:39:58.947349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:25.621 [2024-07-15 21:39:58.947367] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:25.621 [2024-07-15 21:39:58.947400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.621 21:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.878 21:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.878 "name": "Existed_Raid", 00:26:25.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.878 "strip_size_kb": 0, 00:26:25.878 "state": "configuring", 00:26:25.878 "raid_level": "raid1", 00:26:25.878 "superblock": false, 00:26:25.878 "num_base_bdevs": 4, 00:26:25.878 "num_base_bdevs_discovered": 1, 00:26:25.878 "num_base_bdevs_operational": 4, 00:26:25.878 "base_bdevs_list": [ 00:26:25.878 { 00:26:25.878 "name": "BaseBdev1", 00:26:25.878 "uuid": "a7cca971-1046-4578-85e4-61b6cdda2e57", 00:26:25.878 "is_configured": true, 00:26:25.878 "data_offset": 0, 00:26:25.878 "data_size": 65536 00:26:25.878 }, 00:26:25.878 { 00:26:25.878 "name": "BaseBdev2", 00:26:25.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.878 "is_configured": false, 00:26:25.878 "data_offset": 0, 00:26:25.878 "data_size": 0 00:26:25.878 }, 00:26:25.878 { 00:26:25.878 "name": "BaseBdev3", 00:26:25.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.878 "is_configured": false, 00:26:25.878 "data_offset": 0, 00:26:25.878 "data_size": 0 00:26:25.878 }, 00:26:25.878 { 00:26:25.878 "name": "BaseBdev4", 
00:26:25.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.878 "is_configured": false, 00:26:25.878 "data_offset": 0, 00:26:25.878 "data_size": 0 00:26:25.878 } 00:26:25.878 ] 00:26:25.878 }' 00:26:25.878 21:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.878 21:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.812 21:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:26.812 [2024-07-15 21:40:00.112065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:26.812 BaseBdev2 00:26:26.812 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:26.812 21:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:26.812 21:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:26.812 21:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:26.812 21:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:26.812 21:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:26.812 21:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:27.068 21:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:27.324 [ 00:26:27.324 { 00:26:27.324 "name": "BaseBdev2", 00:26:27.324 "aliases": [ 00:26:27.324 "e472ea79-e098-4d3f-bd72-c9e38e51f7d4" 00:26:27.324 ], 00:26:27.324 "product_name": "Malloc disk", 00:26:27.324 "block_size": 512, 00:26:27.324 "num_blocks": 65536, 00:26:27.324 "uuid": "e472ea79-e098-4d3f-bd72-c9e38e51f7d4", 00:26:27.324 "assigned_rate_limits": { 00:26:27.324 "rw_ios_per_sec": 0, 00:26:27.324 "rw_mbytes_per_sec": 0, 00:26:27.324 "r_mbytes_per_sec": 0, 00:26:27.324 "w_mbytes_per_sec": 0 00:26:27.324 }, 00:26:27.324 "claimed": true, 00:26:27.324 "claim_type": "exclusive_write", 00:26:27.324 "zoned": false, 00:26:27.324 "supported_io_types": { 00:26:27.324 "read": true, 00:26:27.324 "write": true, 00:26:27.324 "unmap": true, 00:26:27.324 "flush": true, 00:26:27.324 "reset": true, 00:26:27.324 "nvme_admin": false, 00:26:27.324 "nvme_io": false, 00:26:27.324 "nvme_io_md": false, 00:26:27.324 "write_zeroes": true, 00:26:27.324 "zcopy": true, 00:26:27.324 "get_zone_info": false, 00:26:27.324 "zone_management": false, 00:26:27.324 "zone_append": false, 00:26:27.324 "compare": false, 00:26:27.324 "compare_and_write": false, 00:26:27.324 "abort": true, 00:26:27.324 "seek_hole": false, 00:26:27.324 "seek_data": false, 00:26:27.324 "copy": true, 00:26:27.324 "nvme_iov_md": false 00:26:27.324 }, 00:26:27.324 "memory_domains": [ 00:26:27.324 { 00:26:27.324 "dma_device_id": "system", 00:26:27.324 "dma_device_type": 1 00:26:27.324 }, 00:26:27.324 { 00:26:27.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.324 "dma_device_type": 2 00:26:27.324 } 00:26:27.324 ], 00:26:27.324 "driver_specific": {} 00:26:27.324 } 00:26:27.324 ] 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.324 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.325 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.325 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.325 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.325 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:27.605 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:27.605 "name": "Existed_Raid", 00:26:27.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.605 "strip_size_kb": 0, 00:26:27.605 "state": "configuring", 00:26:27.605 "raid_level": "raid1", 00:26:27.605 "superblock": false, 00:26:27.605 "num_base_bdevs": 4, 00:26:27.605 "num_base_bdevs_discovered": 2, 00:26:27.605 "num_base_bdevs_operational": 4, 00:26:27.605 "base_bdevs_list": [ 00:26:27.605 { 00:26:27.605 "name": "BaseBdev1", 00:26:27.605 "uuid": "a7cca971-1046-4578-85e4-61b6cdda2e57", 00:26:27.605 "is_configured": true, 00:26:27.605 "data_offset": 0, 00:26:27.605 "data_size": 65536 00:26:27.605 }, 00:26:27.605 { 00:26:27.605 "name": "BaseBdev2", 00:26:27.605 "uuid": "e472ea79-e098-4d3f-bd72-c9e38e51f7d4", 00:26:27.605 "is_configured": true, 00:26:27.605 "data_offset": 0, 00:26:27.605 "data_size": 65536 00:26:27.605 }, 00:26:27.605 { 00:26:27.605 "name": "BaseBdev3", 00:26:27.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.605 "is_configured": false, 00:26:27.605 "data_offset": 0, 00:26:27.605 "data_size": 0 00:26:27.605 }, 00:26:27.605 { 00:26:27.605 "name": "BaseBdev4", 00:26:27.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.605 "is_configured": false, 00:26:27.605 "data_offset": 0, 00:26:27.605 "data_size": 0 00:26:27.605 } 00:26:27.605 ] 00:26:27.605 }' 00:26:27.605 21:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:27.605 21:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.168 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
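Each base device is added with the same pattern the trace shows for BaseBdev1 and BaseBdev2 and repeats next for BaseBdev3: create a 32 MiB malloc bdev with 512-byte blocks (hence the 65536 num_blocks in the JSON dumps), wait for bdev examination, confirm the bdev is visible, and re-check the raid, which stays in "configuring" until all four base bdevs are claimed. A minimal sketch; the name variable is only an illustrative stand-in for the BaseBdev1..BaseBdev4 iteration.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  name=BaseBdev3
  # Create the backing malloc bdev: 32 MiB, 512-byte blocks.
  $rpc bdev_malloc_create 32 512 -b "$name"
  # waitforbdev pattern: let examine callbacks run, then poll for the bdev (2 s timeout).
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b "$name" -t 2000
  # The raid claims the new base bdev; num_base_bdevs_discovered goes up by one.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'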
00:26:28.426 [2024-07-15 21:40:01.576386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:28.426 BaseBdev3 00:26:28.426 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:28.426 21:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:28.426 21:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:28.426 21:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:28.426 21:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:28.426 21:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:28.426 21:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:28.426 21:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:28.684 [ 00:26:28.684 { 00:26:28.684 "name": "BaseBdev3", 00:26:28.684 "aliases": [ 00:26:28.684 "3507b392-18db-4c56-a812-5164da45f887" 00:26:28.684 ], 00:26:28.684 "product_name": "Malloc disk", 00:26:28.684 "block_size": 512, 00:26:28.684 "num_blocks": 65536, 00:26:28.684 "uuid": "3507b392-18db-4c56-a812-5164da45f887", 00:26:28.684 "assigned_rate_limits": { 00:26:28.684 "rw_ios_per_sec": 0, 00:26:28.684 "rw_mbytes_per_sec": 0, 00:26:28.684 "r_mbytes_per_sec": 0, 00:26:28.684 "w_mbytes_per_sec": 0 00:26:28.684 }, 00:26:28.684 "claimed": true, 00:26:28.684 "claim_type": "exclusive_write", 00:26:28.684 "zoned": false, 00:26:28.684 "supported_io_types": { 00:26:28.684 "read": true, 00:26:28.684 "write": true, 00:26:28.684 "unmap": true, 00:26:28.684 "flush": true, 00:26:28.684 "reset": true, 00:26:28.684 "nvme_admin": false, 00:26:28.684 "nvme_io": false, 00:26:28.684 "nvme_io_md": false, 00:26:28.684 "write_zeroes": true, 00:26:28.684 "zcopy": true, 00:26:28.684 "get_zone_info": false, 00:26:28.684 "zone_management": false, 00:26:28.684 "zone_append": false, 00:26:28.684 "compare": false, 00:26:28.684 "compare_and_write": false, 00:26:28.684 "abort": true, 00:26:28.684 "seek_hole": false, 00:26:28.684 "seek_data": false, 00:26:28.684 "copy": true, 00:26:28.684 "nvme_iov_md": false 00:26:28.684 }, 00:26:28.684 "memory_domains": [ 00:26:28.684 { 00:26:28.684 "dma_device_id": "system", 00:26:28.684 "dma_device_type": 1 00:26:28.684 }, 00:26:28.684 { 00:26:28.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.684 "dma_device_type": 2 00:26:28.684 } 00:26:28.684 ], 00:26:28.684 "driver_specific": {} 00:26:28.684 } 00:26:28.684 ] 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.684 21:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.942 21:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.942 "name": "Existed_Raid", 00:26:28.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.942 "strip_size_kb": 0, 00:26:28.942 "state": "configuring", 00:26:28.942 "raid_level": "raid1", 00:26:28.942 "superblock": false, 00:26:28.942 "num_base_bdevs": 4, 00:26:28.942 "num_base_bdevs_discovered": 3, 00:26:28.942 "num_base_bdevs_operational": 4, 00:26:28.942 "base_bdevs_list": [ 00:26:28.942 { 00:26:28.942 "name": "BaseBdev1", 00:26:28.942 "uuid": "a7cca971-1046-4578-85e4-61b6cdda2e57", 00:26:28.942 "is_configured": true, 00:26:28.942 "data_offset": 0, 00:26:28.942 "data_size": 65536 00:26:28.942 }, 00:26:28.942 { 00:26:28.942 "name": "BaseBdev2", 00:26:28.942 "uuid": "e472ea79-e098-4d3f-bd72-c9e38e51f7d4", 00:26:28.942 "is_configured": true, 00:26:28.942 "data_offset": 0, 00:26:28.942 "data_size": 65536 00:26:28.942 }, 00:26:28.942 { 00:26:28.942 "name": "BaseBdev3", 00:26:28.942 "uuid": "3507b392-18db-4c56-a812-5164da45f887", 00:26:28.942 "is_configured": true, 00:26:28.942 "data_offset": 0, 00:26:28.942 "data_size": 65536 00:26:28.942 }, 00:26:28.942 { 00:26:28.942 "name": "BaseBdev4", 00:26:28.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.942 "is_configured": false, 00:26:28.942 "data_offset": 0, 00:26:28.942 "data_size": 0 00:26:28.942 } 00:26:28.942 ] 00:26:28.942 }' 00:26:28.942 21:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.942 21:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.511 21:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:29.771 [2024-07-15 21:40:03.010817] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:29.771 [2024-07-15 21:40:03.010919] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:26:29.771 [2024-07-15 21:40:03.010939] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:29.771 [2024-07-15 21:40:03.011095] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:29.771 [2024-07-15 21:40:03.011419] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000007580 00:26:29.771 [2024-07-15 21:40:03.011461] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:26:29.771 [2024-07-15 21:40:03.011703] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.771 BaseBdev4 00:26:29.771 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:29.771 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:29.771 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:29.771 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:29.771 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:29.771 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:29.771 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:30.059 [ 00:26:30.059 { 00:26:30.059 "name": "BaseBdev4", 00:26:30.059 "aliases": [ 00:26:30.059 "8bd766d6-b15b-404a-906a-bdc02f748d28" 00:26:30.059 ], 00:26:30.059 "product_name": "Malloc disk", 00:26:30.059 "block_size": 512, 00:26:30.059 "num_blocks": 65536, 00:26:30.059 "uuid": "8bd766d6-b15b-404a-906a-bdc02f748d28", 00:26:30.059 "assigned_rate_limits": { 00:26:30.059 "rw_ios_per_sec": 0, 00:26:30.059 "rw_mbytes_per_sec": 0, 00:26:30.059 "r_mbytes_per_sec": 0, 00:26:30.059 "w_mbytes_per_sec": 0 00:26:30.059 }, 00:26:30.059 "claimed": true, 00:26:30.059 "claim_type": "exclusive_write", 00:26:30.059 "zoned": false, 00:26:30.059 "supported_io_types": { 00:26:30.059 "read": true, 00:26:30.059 "write": true, 00:26:30.059 "unmap": true, 00:26:30.059 "flush": true, 00:26:30.059 "reset": true, 00:26:30.059 "nvme_admin": false, 00:26:30.059 "nvme_io": false, 00:26:30.059 "nvme_io_md": false, 00:26:30.059 "write_zeroes": true, 00:26:30.059 "zcopy": true, 00:26:30.059 "get_zone_info": false, 00:26:30.059 "zone_management": false, 00:26:30.059 "zone_append": false, 00:26:30.059 "compare": false, 00:26:30.059 "compare_and_write": false, 00:26:30.059 "abort": true, 00:26:30.059 "seek_hole": false, 00:26:30.059 "seek_data": false, 00:26:30.059 "copy": true, 00:26:30.059 "nvme_iov_md": false 00:26:30.059 }, 00:26:30.059 "memory_domains": [ 00:26:30.059 { 00:26:30.059 "dma_device_id": "system", 00:26:30.059 "dma_device_type": 1 00:26:30.059 }, 00:26:30.059 { 00:26:30.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:30.059 "dma_device_type": 2 00:26:30.059 } 00:26:30.059 ], 00:26:30.059 "driver_specific": {} 00:26:30.059 } 00:26:30.059 ] 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.059 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.318 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.318 "name": "Existed_Raid", 00:26:30.318 "uuid": "5bf8c60f-1a71-442b-a0ce-1edf1bad5a5b", 00:26:30.318 "strip_size_kb": 0, 00:26:30.318 "state": "online", 00:26:30.318 "raid_level": "raid1", 00:26:30.318 "superblock": false, 00:26:30.318 "num_base_bdevs": 4, 00:26:30.318 "num_base_bdevs_discovered": 4, 00:26:30.318 "num_base_bdevs_operational": 4, 00:26:30.318 "base_bdevs_list": [ 00:26:30.318 { 00:26:30.318 "name": "BaseBdev1", 00:26:30.318 "uuid": "a7cca971-1046-4578-85e4-61b6cdda2e57", 00:26:30.318 "is_configured": true, 00:26:30.318 "data_offset": 0, 00:26:30.318 "data_size": 65536 00:26:30.318 }, 00:26:30.318 { 00:26:30.318 "name": "BaseBdev2", 00:26:30.318 "uuid": "e472ea79-e098-4d3f-bd72-c9e38e51f7d4", 00:26:30.318 "is_configured": true, 00:26:30.318 "data_offset": 0, 00:26:30.318 "data_size": 65536 00:26:30.318 }, 00:26:30.318 { 00:26:30.318 "name": "BaseBdev3", 00:26:30.318 "uuid": "3507b392-18db-4c56-a812-5164da45f887", 00:26:30.318 "is_configured": true, 00:26:30.318 "data_offset": 0, 00:26:30.318 "data_size": 65536 00:26:30.318 }, 00:26:30.318 { 00:26:30.318 "name": "BaseBdev4", 00:26:30.318 "uuid": "8bd766d6-b15b-404a-906a-bdc02f748d28", 00:26:30.318 "is_configured": true, 00:26:30.318 "data_offset": 0, 00:26:30.318 "data_size": 65536 00:26:30.318 } 00:26:30.318 ] 00:26:30.318 }' 00:26:30.318 21:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.318 21:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.892 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:30.892 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:30.892 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:30.892 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:30.892 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:30.892 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # 
local name 00:26:30.892 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:30.892 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:31.151 [2024-07-15 21:40:04.396834] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:31.151 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:31.152 "name": "Existed_Raid", 00:26:31.152 "aliases": [ 00:26:31.152 "5bf8c60f-1a71-442b-a0ce-1edf1bad5a5b" 00:26:31.152 ], 00:26:31.152 "product_name": "Raid Volume", 00:26:31.152 "block_size": 512, 00:26:31.152 "num_blocks": 65536, 00:26:31.152 "uuid": "5bf8c60f-1a71-442b-a0ce-1edf1bad5a5b", 00:26:31.152 "assigned_rate_limits": { 00:26:31.152 "rw_ios_per_sec": 0, 00:26:31.152 "rw_mbytes_per_sec": 0, 00:26:31.152 "r_mbytes_per_sec": 0, 00:26:31.152 "w_mbytes_per_sec": 0 00:26:31.152 }, 00:26:31.152 "claimed": false, 00:26:31.152 "zoned": false, 00:26:31.152 "supported_io_types": { 00:26:31.152 "read": true, 00:26:31.152 "write": true, 00:26:31.152 "unmap": false, 00:26:31.152 "flush": false, 00:26:31.152 "reset": true, 00:26:31.152 "nvme_admin": false, 00:26:31.152 "nvme_io": false, 00:26:31.152 "nvme_io_md": false, 00:26:31.152 "write_zeroes": true, 00:26:31.152 "zcopy": false, 00:26:31.152 "get_zone_info": false, 00:26:31.152 "zone_management": false, 00:26:31.152 "zone_append": false, 00:26:31.152 "compare": false, 00:26:31.152 "compare_and_write": false, 00:26:31.152 "abort": false, 00:26:31.152 "seek_hole": false, 00:26:31.152 "seek_data": false, 00:26:31.152 "copy": false, 00:26:31.152 "nvme_iov_md": false 00:26:31.152 }, 00:26:31.152 "memory_domains": [ 00:26:31.152 { 00:26:31.152 "dma_device_id": "system", 00:26:31.152 "dma_device_type": 1 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.152 "dma_device_type": 2 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "dma_device_id": "system", 00:26:31.152 "dma_device_type": 1 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.152 "dma_device_type": 2 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "dma_device_id": "system", 00:26:31.152 "dma_device_type": 1 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.152 "dma_device_type": 2 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "dma_device_id": "system", 00:26:31.152 "dma_device_type": 1 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.152 "dma_device_type": 2 00:26:31.152 } 00:26:31.152 ], 00:26:31.152 "driver_specific": { 00:26:31.152 "raid": { 00:26:31.152 "uuid": "5bf8c60f-1a71-442b-a0ce-1edf1bad5a5b", 00:26:31.152 "strip_size_kb": 0, 00:26:31.152 "state": "online", 00:26:31.152 "raid_level": "raid1", 00:26:31.152 "superblock": false, 00:26:31.152 "num_base_bdevs": 4, 00:26:31.152 "num_base_bdevs_discovered": 4, 00:26:31.152 "num_base_bdevs_operational": 4, 00:26:31.152 "base_bdevs_list": [ 00:26:31.152 { 00:26:31.152 "name": "BaseBdev1", 00:26:31.152 "uuid": "a7cca971-1046-4578-85e4-61b6cdda2e57", 00:26:31.152 "is_configured": true, 00:26:31.152 "data_offset": 0, 00:26:31.152 "data_size": 65536 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "name": "BaseBdev2", 00:26:31.152 "uuid": "e472ea79-e098-4d3f-bd72-c9e38e51f7d4", 00:26:31.152 "is_configured": true, 00:26:31.152 "data_offset": 0, 00:26:31.152 
"data_size": 65536 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "name": "BaseBdev3", 00:26:31.152 "uuid": "3507b392-18db-4c56-a812-5164da45f887", 00:26:31.152 "is_configured": true, 00:26:31.152 "data_offset": 0, 00:26:31.152 "data_size": 65536 00:26:31.152 }, 00:26:31.152 { 00:26:31.152 "name": "BaseBdev4", 00:26:31.152 "uuid": "8bd766d6-b15b-404a-906a-bdc02f748d28", 00:26:31.152 "is_configured": true, 00:26:31.152 "data_offset": 0, 00:26:31.152 "data_size": 65536 00:26:31.152 } 00:26:31.152 ] 00:26:31.152 } 00:26:31.152 } 00:26:31.152 }' 00:26:31.152 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:31.152 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:31.152 BaseBdev2 00:26:31.152 BaseBdev3 00:26:31.152 BaseBdev4' 00:26:31.152 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:31.152 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:31.152 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:31.412 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:31.412 "name": "BaseBdev1", 00:26:31.412 "aliases": [ 00:26:31.412 "a7cca971-1046-4578-85e4-61b6cdda2e57" 00:26:31.412 ], 00:26:31.412 "product_name": "Malloc disk", 00:26:31.412 "block_size": 512, 00:26:31.412 "num_blocks": 65536, 00:26:31.412 "uuid": "a7cca971-1046-4578-85e4-61b6cdda2e57", 00:26:31.412 "assigned_rate_limits": { 00:26:31.412 "rw_ios_per_sec": 0, 00:26:31.412 "rw_mbytes_per_sec": 0, 00:26:31.412 "r_mbytes_per_sec": 0, 00:26:31.412 "w_mbytes_per_sec": 0 00:26:31.412 }, 00:26:31.412 "claimed": true, 00:26:31.412 "claim_type": "exclusive_write", 00:26:31.412 "zoned": false, 00:26:31.412 "supported_io_types": { 00:26:31.412 "read": true, 00:26:31.412 "write": true, 00:26:31.412 "unmap": true, 00:26:31.412 "flush": true, 00:26:31.412 "reset": true, 00:26:31.412 "nvme_admin": false, 00:26:31.412 "nvme_io": false, 00:26:31.412 "nvme_io_md": false, 00:26:31.412 "write_zeroes": true, 00:26:31.412 "zcopy": true, 00:26:31.412 "get_zone_info": false, 00:26:31.412 "zone_management": false, 00:26:31.412 "zone_append": false, 00:26:31.412 "compare": false, 00:26:31.412 "compare_and_write": false, 00:26:31.412 "abort": true, 00:26:31.412 "seek_hole": false, 00:26:31.412 "seek_data": false, 00:26:31.412 "copy": true, 00:26:31.412 "nvme_iov_md": false 00:26:31.412 }, 00:26:31.412 "memory_domains": [ 00:26:31.412 { 00:26:31.412 "dma_device_id": "system", 00:26:31.412 "dma_device_type": 1 00:26:31.412 }, 00:26:31.412 { 00:26:31.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.412 "dma_device_type": 2 00:26:31.412 } 00:26:31.412 ], 00:26:31.412 "driver_specific": {} 00:26:31.412 }' 00:26:31.412 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:31.413 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:31.413 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:31.413 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:31.672 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:31.672 21:40:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:31.672 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:31.672 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:31.672 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:31.672 21:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:31.931 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:31.931 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:31.931 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:31.931 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:31.931 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:31.931 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:31.931 "name": "BaseBdev2", 00:26:31.931 "aliases": [ 00:26:31.931 "e472ea79-e098-4d3f-bd72-c9e38e51f7d4" 00:26:31.931 ], 00:26:31.931 "product_name": "Malloc disk", 00:26:31.931 "block_size": 512, 00:26:31.931 "num_blocks": 65536, 00:26:31.931 "uuid": "e472ea79-e098-4d3f-bd72-c9e38e51f7d4", 00:26:31.931 "assigned_rate_limits": { 00:26:31.931 "rw_ios_per_sec": 0, 00:26:31.931 "rw_mbytes_per_sec": 0, 00:26:31.931 "r_mbytes_per_sec": 0, 00:26:31.931 "w_mbytes_per_sec": 0 00:26:31.931 }, 00:26:31.931 "claimed": true, 00:26:31.931 "claim_type": "exclusive_write", 00:26:31.931 "zoned": false, 00:26:31.931 "supported_io_types": { 00:26:31.931 "read": true, 00:26:31.931 "write": true, 00:26:31.931 "unmap": true, 00:26:31.931 "flush": true, 00:26:31.931 "reset": true, 00:26:31.931 "nvme_admin": false, 00:26:31.931 "nvme_io": false, 00:26:31.931 "nvme_io_md": false, 00:26:31.931 "write_zeroes": true, 00:26:31.931 "zcopy": true, 00:26:31.931 "get_zone_info": false, 00:26:31.931 "zone_management": false, 00:26:31.931 "zone_append": false, 00:26:31.931 "compare": false, 00:26:31.931 "compare_and_write": false, 00:26:31.931 "abort": true, 00:26:31.931 "seek_hole": false, 00:26:31.931 "seek_data": false, 00:26:31.931 "copy": true, 00:26:31.931 "nvme_iov_md": false 00:26:31.931 }, 00:26:31.931 "memory_domains": [ 00:26:31.931 { 00:26:31.931 "dma_device_id": "system", 00:26:31.931 "dma_device_type": 1 00:26:31.931 }, 00:26:31.931 { 00:26:31.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.931 "dma_device_type": 2 00:26:31.931 } 00:26:31.931 ], 00:26:31.931 "driver_specific": {} 00:26:31.931 }' 00:26:31.931 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.189 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.189 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:32.189 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.189 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.189 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:32.189 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
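What is being traced here is the property check of verify_raid_bdev_properties: the configured base bdev names are pulled out of the raid bdev's driver_specific data, and the raid bdev's block_size, md_size, md_interleave and dif_type are then compared against each base bdev. A rough sketch of that comparison; the loop structure is illustrative, while the rpc.py and jq invocations are the ones visible in the trace.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_json=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_json")
  for name in $names; do
    base_json=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # For a malloc base bdev these come out as 512 / null / null / null, matching the
    # "[[ 512 == 512 ]]" and "[[ null == null ]]" checks in the trace.
    for prop in block_size md_size md_interleave dif_type; do
      [[ "$(jq ".$prop" <<< "$raid_json")" == "$(jq ".$prop" <<< "$base_json")" ]] || exit 1
    done
  done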
00:26:32.448 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.448 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:32.448 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.448 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.448 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:32.448 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:32.448 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:32.448 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:32.708 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:32.708 "name": "BaseBdev3", 00:26:32.708 "aliases": [ 00:26:32.708 "3507b392-18db-4c56-a812-5164da45f887" 00:26:32.708 ], 00:26:32.708 "product_name": "Malloc disk", 00:26:32.708 "block_size": 512, 00:26:32.708 "num_blocks": 65536, 00:26:32.708 "uuid": "3507b392-18db-4c56-a812-5164da45f887", 00:26:32.708 "assigned_rate_limits": { 00:26:32.708 "rw_ios_per_sec": 0, 00:26:32.708 "rw_mbytes_per_sec": 0, 00:26:32.708 "r_mbytes_per_sec": 0, 00:26:32.708 "w_mbytes_per_sec": 0 00:26:32.708 }, 00:26:32.708 "claimed": true, 00:26:32.708 "claim_type": "exclusive_write", 00:26:32.708 "zoned": false, 00:26:32.708 "supported_io_types": { 00:26:32.708 "read": true, 00:26:32.708 "write": true, 00:26:32.708 "unmap": true, 00:26:32.708 "flush": true, 00:26:32.708 "reset": true, 00:26:32.708 "nvme_admin": false, 00:26:32.708 "nvme_io": false, 00:26:32.708 "nvme_io_md": false, 00:26:32.708 "write_zeroes": true, 00:26:32.708 "zcopy": true, 00:26:32.708 "get_zone_info": false, 00:26:32.708 "zone_management": false, 00:26:32.708 "zone_append": false, 00:26:32.708 "compare": false, 00:26:32.708 "compare_and_write": false, 00:26:32.708 "abort": true, 00:26:32.708 "seek_hole": false, 00:26:32.708 "seek_data": false, 00:26:32.708 "copy": true, 00:26:32.708 "nvme_iov_md": false 00:26:32.708 }, 00:26:32.708 "memory_domains": [ 00:26:32.708 { 00:26:32.708 "dma_device_id": "system", 00:26:32.708 "dma_device_type": 1 00:26:32.708 }, 00:26:32.708 { 00:26:32.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.708 "dma_device_type": 2 00:26:32.708 } 00:26:32.708 ], 00:26:32.708 "driver_specific": {} 00:26:32.708 }' 00:26:32.708 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.708 21:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.708 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:32.708 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.967 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.967 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:32.967 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.967 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.967 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:26:32.967 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.967 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.225 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:33.225 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:33.225 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:33.225 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:33.225 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.225 "name": "BaseBdev4", 00:26:33.225 "aliases": [ 00:26:33.225 "8bd766d6-b15b-404a-906a-bdc02f748d28" 00:26:33.225 ], 00:26:33.225 "product_name": "Malloc disk", 00:26:33.225 "block_size": 512, 00:26:33.225 "num_blocks": 65536, 00:26:33.225 "uuid": "8bd766d6-b15b-404a-906a-bdc02f748d28", 00:26:33.225 "assigned_rate_limits": { 00:26:33.225 "rw_ios_per_sec": 0, 00:26:33.225 "rw_mbytes_per_sec": 0, 00:26:33.225 "r_mbytes_per_sec": 0, 00:26:33.225 "w_mbytes_per_sec": 0 00:26:33.225 }, 00:26:33.225 "claimed": true, 00:26:33.225 "claim_type": "exclusive_write", 00:26:33.225 "zoned": false, 00:26:33.225 "supported_io_types": { 00:26:33.225 "read": true, 00:26:33.225 "write": true, 00:26:33.225 "unmap": true, 00:26:33.225 "flush": true, 00:26:33.225 "reset": true, 00:26:33.225 "nvme_admin": false, 00:26:33.225 "nvme_io": false, 00:26:33.225 "nvme_io_md": false, 00:26:33.226 "write_zeroes": true, 00:26:33.226 "zcopy": true, 00:26:33.226 "get_zone_info": false, 00:26:33.226 "zone_management": false, 00:26:33.226 "zone_append": false, 00:26:33.226 "compare": false, 00:26:33.226 "compare_and_write": false, 00:26:33.226 "abort": true, 00:26:33.226 "seek_hole": false, 00:26:33.226 "seek_data": false, 00:26:33.226 "copy": true, 00:26:33.226 "nvme_iov_md": false 00:26:33.226 }, 00:26:33.226 "memory_domains": [ 00:26:33.226 { 00:26:33.226 "dma_device_id": "system", 00:26:33.226 "dma_device_type": 1 00:26:33.226 }, 00:26:33.226 { 00:26:33.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.226 "dma_device_type": 2 00:26:33.226 } 00:26:33.226 ], 00:26:33.226 "driver_specific": {} 00:26:33.226 }' 00:26:33.226 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.483 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.483 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:33.483 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.483 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.483 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:33.483 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.741 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.741 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:33.741 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.741 21:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:26:33.741 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:33.741 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:33.999 [2024-07-15 21:40:07.207844] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.999 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:34.258 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.258 "name": "Existed_Raid", 00:26:34.258 "uuid": "5bf8c60f-1a71-442b-a0ce-1edf1bad5a5b", 00:26:34.258 "strip_size_kb": 0, 00:26:34.258 "state": "online", 00:26:34.258 "raid_level": "raid1", 00:26:34.258 "superblock": false, 00:26:34.258 "num_base_bdevs": 4, 00:26:34.258 "num_base_bdevs_discovered": 3, 00:26:34.258 "num_base_bdevs_operational": 3, 00:26:34.258 "base_bdevs_list": [ 00:26:34.258 { 00:26:34.258 "name": null, 00:26:34.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.258 "is_configured": false, 00:26:34.258 "data_offset": 0, 00:26:34.258 "data_size": 65536 00:26:34.258 }, 00:26:34.258 { 00:26:34.258 "name": "BaseBdev2", 00:26:34.258 "uuid": "e472ea79-e098-4d3f-bd72-c9e38e51f7d4", 00:26:34.258 "is_configured": true, 00:26:34.258 "data_offset": 0, 00:26:34.258 "data_size": 65536 00:26:34.258 }, 00:26:34.258 { 00:26:34.258 "name": "BaseBdev3", 00:26:34.258 "uuid": "3507b392-18db-4c56-a812-5164da45f887", 00:26:34.258 "is_configured": true, 00:26:34.258 "data_offset": 0, 00:26:34.258 "data_size": 65536 00:26:34.258 
}, 00:26:34.258 { 00:26:34.258 "name": "BaseBdev4", 00:26:34.258 "uuid": "8bd766d6-b15b-404a-906a-bdc02f748d28", 00:26:34.258 "is_configured": true, 00:26:34.258 "data_offset": 0, 00:26:34.258 "data_size": 65536 00:26:34.258 } 00:26:34.258 ] 00:26:34.258 }' 00:26:34.258 21:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.258 21:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.826 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:34.826 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:34.826 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.826 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:35.083 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:35.083 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:35.083 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:35.342 [2024-07-15 21:40:08.564184] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:35.342 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:35.342 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:35.342 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.342 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:35.601 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:35.601 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:35.601 21:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:35.859 [2024-07-15 21:40:09.061098] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:35.859 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:35.859 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:35.859 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.859 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:36.117 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:36.117 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:36.117 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:36.375 [2024-07-15 21:40:09.562122] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 
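The base-bdev removals traced above follow one repeating pattern: delete a single malloc base bdev over the test's RPC socket, then re-read the raid bdev and compare its state and member counts against the expected values. A minimal sketch of that pattern, reusing the rpc.py path, socket and Existed_Raid name from this run (the helper variables and the jq projection are illustrative shorthand, not taken from the test script itself):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# remove one base bdev; the raid module logs _raid_bdev_remove_base_bdev as seen above
$rpc -s $sock bdev_malloc_delete BaseBdev2
# re-read the raid bdev and report its state plus discovered/operational member counts
$rpc -s $sock bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'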
00:26:36.375 [2024-07-15 21:40:09.562304] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:36.375 [2024-07-15 21:40:09.667309] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:36.375 [2024-07-15 21:40:09.669745] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:36.375 [2024-07-15 21:40:09.669953] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:26:36.375 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:36.375 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:36.375 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.375 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:36.635 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:36.635 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:36.635 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:36.635 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:36.635 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:36.635 21:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:36.894 BaseBdev2 00:26:36.894 21:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:36.894 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:36.894 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:36.894 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:36.894 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:36.894 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:36.894 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:37.152 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:37.408 [ 00:26:37.408 { 00:26:37.408 "name": "BaseBdev2", 00:26:37.408 "aliases": [ 00:26:37.408 "dc4651ba-dda6-4f0f-baa8-592265cf39f3" 00:26:37.408 ], 00:26:37.408 "product_name": "Malloc disk", 00:26:37.408 "block_size": 512, 00:26:37.408 "num_blocks": 65536, 00:26:37.408 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:37.408 "assigned_rate_limits": { 00:26:37.408 "rw_ios_per_sec": 0, 00:26:37.408 "rw_mbytes_per_sec": 0, 00:26:37.408 "r_mbytes_per_sec": 0, 00:26:37.408 "w_mbytes_per_sec": 0 00:26:37.408 }, 00:26:37.408 "claimed": false, 00:26:37.408 "zoned": false, 00:26:37.408 "supported_io_types": { 00:26:37.408 "read": true, 00:26:37.408 "write": true, 00:26:37.408 
"unmap": true, 00:26:37.408 "flush": true, 00:26:37.408 "reset": true, 00:26:37.408 "nvme_admin": false, 00:26:37.408 "nvme_io": false, 00:26:37.408 "nvme_io_md": false, 00:26:37.408 "write_zeroes": true, 00:26:37.408 "zcopy": true, 00:26:37.408 "get_zone_info": false, 00:26:37.408 "zone_management": false, 00:26:37.408 "zone_append": false, 00:26:37.408 "compare": false, 00:26:37.408 "compare_and_write": false, 00:26:37.408 "abort": true, 00:26:37.408 "seek_hole": false, 00:26:37.408 "seek_data": false, 00:26:37.408 "copy": true, 00:26:37.408 "nvme_iov_md": false 00:26:37.408 }, 00:26:37.408 "memory_domains": [ 00:26:37.408 { 00:26:37.408 "dma_device_id": "system", 00:26:37.408 "dma_device_type": 1 00:26:37.408 }, 00:26:37.408 { 00:26:37.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.408 "dma_device_type": 2 00:26:37.408 } 00:26:37.408 ], 00:26:37.408 "driver_specific": {} 00:26:37.408 } 00:26:37.408 ] 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:37.408 BaseBdev3 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:37.408 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:37.665 21:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:37.924 [ 00:26:37.924 { 00:26:37.924 "name": "BaseBdev3", 00:26:37.924 "aliases": [ 00:26:37.924 "8a51dca9-3c25-4190-b4b3-920c49374bbb" 00:26:37.924 ], 00:26:37.924 "product_name": "Malloc disk", 00:26:37.924 "block_size": 512, 00:26:37.924 "num_blocks": 65536, 00:26:37.924 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:37.924 "assigned_rate_limits": { 00:26:37.924 "rw_ios_per_sec": 0, 00:26:37.924 "rw_mbytes_per_sec": 0, 00:26:37.924 "r_mbytes_per_sec": 0, 00:26:37.924 "w_mbytes_per_sec": 0 00:26:37.924 }, 00:26:37.924 "claimed": false, 00:26:37.924 "zoned": false, 00:26:37.924 "supported_io_types": { 00:26:37.924 "read": true, 00:26:37.924 "write": true, 00:26:37.924 "unmap": true, 00:26:37.924 "flush": true, 00:26:37.924 "reset": true, 00:26:37.924 "nvme_admin": false, 00:26:37.924 "nvme_io": false, 00:26:37.924 "nvme_io_md": false, 00:26:37.924 "write_zeroes": true, 00:26:37.924 "zcopy": true, 00:26:37.924 "get_zone_info": false, 00:26:37.924 "zone_management": false, 00:26:37.924 "zone_append": false, 
00:26:37.924 "compare": false, 00:26:37.924 "compare_and_write": false, 00:26:37.924 "abort": true, 00:26:37.924 "seek_hole": false, 00:26:37.924 "seek_data": false, 00:26:37.924 "copy": true, 00:26:37.924 "nvme_iov_md": false 00:26:37.924 }, 00:26:37.924 "memory_domains": [ 00:26:37.924 { 00:26:37.924 "dma_device_id": "system", 00:26:37.924 "dma_device_type": 1 00:26:37.924 }, 00:26:37.924 { 00:26:37.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.924 "dma_device_type": 2 00:26:37.924 } 00:26:37.924 ], 00:26:37.924 "driver_specific": {} 00:26:37.924 } 00:26:37.924 ] 00:26:37.924 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:37.924 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:37.924 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:37.924 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:38.181 BaseBdev4 00:26:38.182 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:38.182 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:38.182 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:38.182 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:38.182 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:38.182 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:38.182 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:38.441 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:38.441 [ 00:26:38.441 { 00:26:38.441 "name": "BaseBdev4", 00:26:38.441 "aliases": [ 00:26:38.441 "3faa74d5-3410-422d-9edb-ff25960ce442" 00:26:38.441 ], 00:26:38.441 "product_name": "Malloc disk", 00:26:38.441 "block_size": 512, 00:26:38.441 "num_blocks": 65536, 00:26:38.441 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:38.441 "assigned_rate_limits": { 00:26:38.441 "rw_ios_per_sec": 0, 00:26:38.441 "rw_mbytes_per_sec": 0, 00:26:38.441 "r_mbytes_per_sec": 0, 00:26:38.441 "w_mbytes_per_sec": 0 00:26:38.441 }, 00:26:38.441 "claimed": false, 00:26:38.441 "zoned": false, 00:26:38.441 "supported_io_types": { 00:26:38.441 "read": true, 00:26:38.441 "write": true, 00:26:38.441 "unmap": true, 00:26:38.441 "flush": true, 00:26:38.441 "reset": true, 00:26:38.441 "nvme_admin": false, 00:26:38.441 "nvme_io": false, 00:26:38.441 "nvme_io_md": false, 00:26:38.441 "write_zeroes": true, 00:26:38.441 "zcopy": true, 00:26:38.441 "get_zone_info": false, 00:26:38.441 "zone_management": false, 00:26:38.441 "zone_append": false, 00:26:38.441 "compare": false, 00:26:38.441 "compare_and_write": false, 00:26:38.441 "abort": true, 00:26:38.441 "seek_hole": false, 00:26:38.441 "seek_data": false, 00:26:38.441 "copy": true, 00:26:38.441 "nvme_iov_md": false 00:26:38.441 }, 00:26:38.441 "memory_domains": [ 00:26:38.441 { 00:26:38.441 "dma_device_id": "system", 00:26:38.441 
"dma_device_type": 1 00:26:38.441 }, 00:26:38.441 { 00:26:38.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:38.441 "dma_device_type": 2 00:26:38.441 } 00:26:38.441 ], 00:26:38.441 "driver_specific": {} 00:26:38.441 } 00:26:38.441 ] 00:26:38.441 21:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:38.441 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:38.441 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:38.441 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:38.701 [2024-07-15 21:40:11.959026] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:38.701 [2024-07-15 21:40:11.959169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:38.701 [2024-07-15 21:40:11.959219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:38.701 [2024-07-15 21:40:11.960979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:38.701 [2024-07-15 21:40:11.961061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.701 21:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:38.960 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.960 "name": "Existed_Raid", 00:26:38.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.960 "strip_size_kb": 0, 00:26:38.960 "state": "configuring", 00:26:38.960 "raid_level": "raid1", 00:26:38.960 "superblock": false, 00:26:38.960 "num_base_bdevs": 4, 00:26:38.960 "num_base_bdevs_discovered": 3, 00:26:38.960 "num_base_bdevs_operational": 4, 00:26:38.960 "base_bdevs_list": [ 00:26:38.960 { 00:26:38.960 "name": "BaseBdev1", 00:26:38.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.960 "is_configured": false, 
00:26:38.960 "data_offset": 0, 00:26:38.960 "data_size": 0 00:26:38.960 }, 00:26:38.960 { 00:26:38.960 "name": "BaseBdev2", 00:26:38.960 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:38.960 "is_configured": true, 00:26:38.960 "data_offset": 0, 00:26:38.960 "data_size": 65536 00:26:38.960 }, 00:26:38.960 { 00:26:38.960 "name": "BaseBdev3", 00:26:38.960 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:38.960 "is_configured": true, 00:26:38.960 "data_offset": 0, 00:26:38.960 "data_size": 65536 00:26:38.960 }, 00:26:38.960 { 00:26:38.960 "name": "BaseBdev4", 00:26:38.960 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:38.960 "is_configured": true, 00:26:38.960 "data_offset": 0, 00:26:38.960 "data_size": 65536 00:26:38.960 } 00:26:38.960 ] 00:26:38.960 }' 00:26:38.960 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.960 21:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.527 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:39.786 [2024-07-15 21:40:12.986690] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.786 21:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:40.044 21:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.044 "name": "Existed_Raid", 00:26:40.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.044 "strip_size_kb": 0, 00:26:40.044 "state": "configuring", 00:26:40.044 "raid_level": "raid1", 00:26:40.044 "superblock": false, 00:26:40.044 "num_base_bdevs": 4, 00:26:40.044 "num_base_bdevs_discovered": 2, 00:26:40.044 "num_base_bdevs_operational": 4, 00:26:40.044 "base_bdevs_list": [ 00:26:40.044 { 00:26:40.044 "name": "BaseBdev1", 00:26:40.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.044 "is_configured": false, 00:26:40.044 "data_offset": 0, 00:26:40.044 "data_size": 0 00:26:40.044 }, 00:26:40.044 { 00:26:40.044 "name": null, 00:26:40.044 "uuid": 
"dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:40.044 "is_configured": false, 00:26:40.044 "data_offset": 0, 00:26:40.044 "data_size": 65536 00:26:40.044 }, 00:26:40.044 { 00:26:40.044 "name": "BaseBdev3", 00:26:40.044 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:40.044 "is_configured": true, 00:26:40.044 "data_offset": 0, 00:26:40.044 "data_size": 65536 00:26:40.044 }, 00:26:40.044 { 00:26:40.044 "name": "BaseBdev4", 00:26:40.044 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:40.044 "is_configured": true, 00:26:40.044 "data_offset": 0, 00:26:40.044 "data_size": 65536 00:26:40.044 } 00:26:40.044 ] 00:26:40.044 }' 00:26:40.044 21:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.044 21:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.616 21:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.616 21:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:40.873 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:40.873 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:41.131 [2024-07-15 21:40:14.249627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:41.131 BaseBdev1 00:26:41.131 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:41.131 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:41.131 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:41.131 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:41.131 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:41.131 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:41.131 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:41.131 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:41.388 [ 00:26:41.388 { 00:26:41.388 "name": "BaseBdev1", 00:26:41.388 "aliases": [ 00:26:41.388 "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6" 00:26:41.388 ], 00:26:41.388 "product_name": "Malloc disk", 00:26:41.388 "block_size": 512, 00:26:41.388 "num_blocks": 65536, 00:26:41.388 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:41.388 "assigned_rate_limits": { 00:26:41.388 "rw_ios_per_sec": 0, 00:26:41.388 "rw_mbytes_per_sec": 0, 00:26:41.388 "r_mbytes_per_sec": 0, 00:26:41.388 "w_mbytes_per_sec": 0 00:26:41.388 }, 00:26:41.388 "claimed": true, 00:26:41.388 "claim_type": "exclusive_write", 00:26:41.388 "zoned": false, 00:26:41.388 "supported_io_types": { 00:26:41.388 "read": true, 00:26:41.388 "write": true, 00:26:41.388 "unmap": true, 00:26:41.388 "flush": true, 00:26:41.388 "reset": true, 00:26:41.388 "nvme_admin": false, 00:26:41.388 "nvme_io": false, 00:26:41.388 
"nvme_io_md": false, 00:26:41.388 "write_zeroes": true, 00:26:41.388 "zcopy": true, 00:26:41.388 "get_zone_info": false, 00:26:41.388 "zone_management": false, 00:26:41.388 "zone_append": false, 00:26:41.388 "compare": false, 00:26:41.388 "compare_and_write": false, 00:26:41.388 "abort": true, 00:26:41.388 "seek_hole": false, 00:26:41.388 "seek_data": false, 00:26:41.388 "copy": true, 00:26:41.388 "nvme_iov_md": false 00:26:41.388 }, 00:26:41.388 "memory_domains": [ 00:26:41.388 { 00:26:41.388 "dma_device_id": "system", 00:26:41.388 "dma_device_type": 1 00:26:41.388 }, 00:26:41.388 { 00:26:41.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.388 "dma_device_type": 2 00:26:41.388 } 00:26:41.388 ], 00:26:41.388 "driver_specific": {} 00:26:41.388 } 00:26:41.388 ] 00:26:41.388 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:41.388 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.389 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.647 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:41.647 "name": "Existed_Raid", 00:26:41.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.647 "strip_size_kb": 0, 00:26:41.647 "state": "configuring", 00:26:41.647 "raid_level": "raid1", 00:26:41.647 "superblock": false, 00:26:41.647 "num_base_bdevs": 4, 00:26:41.647 "num_base_bdevs_discovered": 3, 00:26:41.647 "num_base_bdevs_operational": 4, 00:26:41.647 "base_bdevs_list": [ 00:26:41.647 { 00:26:41.647 "name": "BaseBdev1", 00:26:41.647 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:41.647 "is_configured": true, 00:26:41.647 "data_offset": 0, 00:26:41.647 "data_size": 65536 00:26:41.647 }, 00:26:41.647 { 00:26:41.647 "name": null, 00:26:41.647 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:41.647 "is_configured": false, 00:26:41.647 "data_offset": 0, 00:26:41.647 "data_size": 65536 00:26:41.647 }, 00:26:41.647 { 00:26:41.647 "name": "BaseBdev3", 00:26:41.647 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:41.647 "is_configured": true, 00:26:41.647 "data_offset": 0, 00:26:41.647 "data_size": 65536 00:26:41.647 }, 00:26:41.647 { 00:26:41.647 
"name": "BaseBdev4", 00:26:41.647 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:41.647 "is_configured": true, 00:26:41.647 "data_offset": 0, 00:26:41.647 "data_size": 65536 00:26:41.647 } 00:26:41.647 ] 00:26:41.647 }' 00:26:41.647 21:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:41.647 21:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.634 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.634 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:42.634 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:42.634 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:42.892 [2024-07-15 21:40:16.006650] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:42.892 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.893 "name": "Existed_Raid", 00:26:42.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.893 "strip_size_kb": 0, 00:26:42.893 "state": "configuring", 00:26:42.893 "raid_level": "raid1", 00:26:42.893 "superblock": false, 00:26:42.893 "num_base_bdevs": 4, 00:26:42.893 "num_base_bdevs_discovered": 2, 00:26:42.893 "num_base_bdevs_operational": 4, 00:26:42.893 "base_bdevs_list": [ 00:26:42.893 { 00:26:42.893 "name": "BaseBdev1", 00:26:42.893 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:42.893 "is_configured": true, 00:26:42.893 "data_offset": 0, 00:26:42.893 "data_size": 65536 00:26:42.893 }, 00:26:42.893 { 00:26:42.893 "name": null, 00:26:42.893 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:42.893 "is_configured": false, 00:26:42.893 "data_offset": 0, 00:26:42.893 "data_size": 65536 
00:26:42.893 }, 00:26:42.893 { 00:26:42.893 "name": null, 00:26:42.893 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:42.893 "is_configured": false, 00:26:42.893 "data_offset": 0, 00:26:42.893 "data_size": 65536 00:26:42.893 }, 00:26:42.893 { 00:26:42.893 "name": "BaseBdev4", 00:26:42.893 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:42.893 "is_configured": true, 00:26:42.893 "data_offset": 0, 00:26:42.893 "data_size": 65536 00:26:42.893 } 00:26:42.893 ] 00:26:42.893 }' 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.893 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.828 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.828 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:43.828 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:43.828 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:44.086 [2024-07-15 21:40:17.296557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.086 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:44.345 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.345 "name": "Existed_Raid", 00:26:44.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.345 "strip_size_kb": 0, 00:26:44.345 "state": "configuring", 00:26:44.345 "raid_level": "raid1", 00:26:44.345 "superblock": false, 00:26:44.345 "num_base_bdevs": 4, 00:26:44.345 "num_base_bdevs_discovered": 3, 00:26:44.345 "num_base_bdevs_operational": 4, 00:26:44.345 "base_bdevs_list": [ 00:26:44.345 { 00:26:44.345 "name": "BaseBdev1", 00:26:44.345 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:44.345 
"is_configured": true, 00:26:44.345 "data_offset": 0, 00:26:44.345 "data_size": 65536 00:26:44.345 }, 00:26:44.345 { 00:26:44.345 "name": null, 00:26:44.345 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:44.345 "is_configured": false, 00:26:44.345 "data_offset": 0, 00:26:44.345 "data_size": 65536 00:26:44.345 }, 00:26:44.345 { 00:26:44.345 "name": "BaseBdev3", 00:26:44.345 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:44.345 "is_configured": true, 00:26:44.345 "data_offset": 0, 00:26:44.345 "data_size": 65536 00:26:44.345 }, 00:26:44.345 { 00:26:44.345 "name": "BaseBdev4", 00:26:44.345 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:44.345 "is_configured": true, 00:26:44.345 "data_offset": 0, 00:26:44.345 "data_size": 65536 00:26:44.345 } 00:26:44.345 ] 00:26:44.345 }' 00:26:44.345 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.345 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.912 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.912 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:45.172 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:45.172 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:45.172 [2024-07-15 21:40:18.530509] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.431 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:45.691 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:45.691 "name": "Existed_Raid", 00:26:45.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.691 "strip_size_kb": 0, 00:26:45.691 "state": "configuring", 00:26:45.691 "raid_level": "raid1", 00:26:45.691 "superblock": false, 00:26:45.691 
"num_base_bdevs": 4, 00:26:45.691 "num_base_bdevs_discovered": 2, 00:26:45.691 "num_base_bdevs_operational": 4, 00:26:45.691 "base_bdevs_list": [ 00:26:45.691 { 00:26:45.691 "name": null, 00:26:45.691 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:45.691 "is_configured": false, 00:26:45.691 "data_offset": 0, 00:26:45.691 "data_size": 65536 00:26:45.691 }, 00:26:45.691 { 00:26:45.691 "name": null, 00:26:45.691 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:45.691 "is_configured": false, 00:26:45.691 "data_offset": 0, 00:26:45.691 "data_size": 65536 00:26:45.691 }, 00:26:45.691 { 00:26:45.691 "name": "BaseBdev3", 00:26:45.691 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:45.691 "is_configured": true, 00:26:45.691 "data_offset": 0, 00:26:45.691 "data_size": 65536 00:26:45.691 }, 00:26:45.691 { 00:26:45.691 "name": "BaseBdev4", 00:26:45.691 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:45.691 "is_configured": true, 00:26:45.691 "data_offset": 0, 00:26:45.691 "data_size": 65536 00:26:45.691 } 00:26:45.691 ] 00:26:45.691 }' 00:26:45.691 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:45.691 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.259 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.259 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:46.518 [2024-07-15 21:40:19.867630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.518 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:46.778 21:40:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.778 "name": "Existed_Raid", 00:26:46.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.778 "strip_size_kb": 0, 00:26:46.778 "state": "configuring", 00:26:46.778 "raid_level": "raid1", 00:26:46.778 "superblock": false, 00:26:46.778 "num_base_bdevs": 4, 00:26:46.778 "num_base_bdevs_discovered": 3, 00:26:46.778 "num_base_bdevs_operational": 4, 00:26:46.778 "base_bdevs_list": [ 00:26:46.778 { 00:26:46.778 "name": null, 00:26:46.778 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:46.778 "is_configured": false, 00:26:46.778 "data_offset": 0, 00:26:46.778 "data_size": 65536 00:26:46.778 }, 00:26:46.778 { 00:26:46.778 "name": "BaseBdev2", 00:26:46.778 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:46.778 "is_configured": true, 00:26:46.778 "data_offset": 0, 00:26:46.778 "data_size": 65536 00:26:46.778 }, 00:26:46.778 { 00:26:46.778 "name": "BaseBdev3", 00:26:46.778 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:46.778 "is_configured": true, 00:26:46.778 "data_offset": 0, 00:26:46.778 "data_size": 65536 00:26:46.778 }, 00:26:46.778 { 00:26:46.778 "name": "BaseBdev4", 00:26:46.778 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:46.778 "is_configured": true, 00:26:46.778 "data_offset": 0, 00:26:46.778 "data_size": 65536 00:26:46.778 } 00:26:46.778 ] 00:26:46.778 }' 00:26:46.778 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.778 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.714 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.714 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:47.714 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:47.714 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.714 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:47.996 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d38bf723-0a1e-4a2f-8fb6-2c259165e4f6 00:26:47.996 [2024-07-15 21:40:21.335861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:47.996 [2024-07-15 21:40:21.335976] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:26:47.996 [2024-07-15 21:40:21.335995] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:47.996 [2024-07-15 21:40:21.336130] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:47.996 [2024-07-15 21:40:21.336435] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:26:47.996 [2024-07-15 21:40:21.336479] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:26:47.996 [2024-07-15 21:40:21.336720] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.996 NewBaseBdev 00:26:47.996 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:26:47.996 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:47.996 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:47.996 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:47.996 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:47.996 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:47.996 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:48.255 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:48.513 [ 00:26:48.513 { 00:26:48.513 "name": "NewBaseBdev", 00:26:48.513 "aliases": [ 00:26:48.513 "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6" 00:26:48.513 ], 00:26:48.513 "product_name": "Malloc disk", 00:26:48.513 "block_size": 512, 00:26:48.513 "num_blocks": 65536, 00:26:48.513 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:48.513 "assigned_rate_limits": { 00:26:48.513 "rw_ios_per_sec": 0, 00:26:48.513 "rw_mbytes_per_sec": 0, 00:26:48.513 "r_mbytes_per_sec": 0, 00:26:48.513 "w_mbytes_per_sec": 0 00:26:48.513 }, 00:26:48.513 "claimed": true, 00:26:48.513 "claim_type": "exclusive_write", 00:26:48.513 "zoned": false, 00:26:48.514 "supported_io_types": { 00:26:48.514 "read": true, 00:26:48.514 "write": true, 00:26:48.514 "unmap": true, 00:26:48.514 "flush": true, 00:26:48.514 "reset": true, 00:26:48.514 "nvme_admin": false, 00:26:48.514 "nvme_io": false, 00:26:48.514 "nvme_io_md": false, 00:26:48.514 "write_zeroes": true, 00:26:48.514 "zcopy": true, 00:26:48.514 "get_zone_info": false, 00:26:48.514 "zone_management": false, 00:26:48.514 "zone_append": false, 00:26:48.514 "compare": false, 00:26:48.514 "compare_and_write": false, 00:26:48.514 "abort": true, 00:26:48.514 "seek_hole": false, 00:26:48.514 "seek_data": false, 00:26:48.514 "copy": true, 00:26:48.514 "nvme_iov_md": false 00:26:48.514 }, 00:26:48.514 "memory_domains": [ 00:26:48.514 { 00:26:48.514 "dma_device_id": "system", 00:26:48.514 "dma_device_type": 1 00:26:48.514 }, 00:26:48.514 { 00:26:48.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.514 "dma_device_type": 2 00:26:48.514 } 00:26:48.514 ], 00:26:48.514 "driver_specific": {} 00:26:48.514 } 00:26:48.514 ] 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.514 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.773 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.773 "name": "Existed_Raid", 00:26:48.773 "uuid": "255071d3-c5f7-48b1-81c5-95bcfcc51f40", 00:26:48.773 "strip_size_kb": 0, 00:26:48.773 "state": "online", 00:26:48.773 "raid_level": "raid1", 00:26:48.773 "superblock": false, 00:26:48.773 "num_base_bdevs": 4, 00:26:48.773 "num_base_bdevs_discovered": 4, 00:26:48.773 "num_base_bdevs_operational": 4, 00:26:48.773 "base_bdevs_list": [ 00:26:48.773 { 00:26:48.773 "name": "NewBaseBdev", 00:26:48.773 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:48.773 "is_configured": true, 00:26:48.773 "data_offset": 0, 00:26:48.773 "data_size": 65536 00:26:48.773 }, 00:26:48.773 { 00:26:48.773 "name": "BaseBdev2", 00:26:48.773 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:48.773 "is_configured": true, 00:26:48.773 "data_offset": 0, 00:26:48.773 "data_size": 65536 00:26:48.773 }, 00:26:48.773 { 00:26:48.773 "name": "BaseBdev3", 00:26:48.773 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:48.773 "is_configured": true, 00:26:48.773 "data_offset": 0, 00:26:48.773 "data_size": 65536 00:26:48.773 }, 00:26:48.773 { 00:26:48.773 "name": "BaseBdev4", 00:26:48.773 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:48.773 "is_configured": true, 00:26:48.773 "data_offset": 0, 00:26:48.773 "data_size": 65536 00:26:48.773 } 00:26:48.773 ] 00:26:48.773 }' 00:26:48.773 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.773 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.341 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:49.341 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:49.341 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:49.341 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:49.341 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:49.341 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:49.341 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:49.341 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:49.600 [2024-07-15 21:40:22.737742] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:49.600 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:49.600 "name": "Existed_Raid", 00:26:49.600 "aliases": [ 00:26:49.600 
"255071d3-c5f7-48b1-81c5-95bcfcc51f40" 00:26:49.600 ], 00:26:49.600 "product_name": "Raid Volume", 00:26:49.600 "block_size": 512, 00:26:49.600 "num_blocks": 65536, 00:26:49.600 "uuid": "255071d3-c5f7-48b1-81c5-95bcfcc51f40", 00:26:49.600 "assigned_rate_limits": { 00:26:49.600 "rw_ios_per_sec": 0, 00:26:49.600 "rw_mbytes_per_sec": 0, 00:26:49.600 "r_mbytes_per_sec": 0, 00:26:49.600 "w_mbytes_per_sec": 0 00:26:49.600 }, 00:26:49.600 "claimed": false, 00:26:49.600 "zoned": false, 00:26:49.600 "supported_io_types": { 00:26:49.600 "read": true, 00:26:49.600 "write": true, 00:26:49.600 "unmap": false, 00:26:49.600 "flush": false, 00:26:49.600 "reset": true, 00:26:49.600 "nvme_admin": false, 00:26:49.600 "nvme_io": false, 00:26:49.600 "nvme_io_md": false, 00:26:49.600 "write_zeroes": true, 00:26:49.600 "zcopy": false, 00:26:49.600 "get_zone_info": false, 00:26:49.600 "zone_management": false, 00:26:49.600 "zone_append": false, 00:26:49.600 "compare": false, 00:26:49.600 "compare_and_write": false, 00:26:49.600 "abort": false, 00:26:49.600 "seek_hole": false, 00:26:49.600 "seek_data": false, 00:26:49.600 "copy": false, 00:26:49.600 "nvme_iov_md": false 00:26:49.600 }, 00:26:49.600 "memory_domains": [ 00:26:49.600 { 00:26:49.600 "dma_device_id": "system", 00:26:49.600 "dma_device_type": 1 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.600 "dma_device_type": 2 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "dma_device_id": "system", 00:26:49.600 "dma_device_type": 1 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.600 "dma_device_type": 2 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "dma_device_id": "system", 00:26:49.600 "dma_device_type": 1 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.600 "dma_device_type": 2 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "dma_device_id": "system", 00:26:49.600 "dma_device_type": 1 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.600 "dma_device_type": 2 00:26:49.600 } 00:26:49.600 ], 00:26:49.600 "driver_specific": { 00:26:49.600 "raid": { 00:26:49.600 "uuid": "255071d3-c5f7-48b1-81c5-95bcfcc51f40", 00:26:49.600 "strip_size_kb": 0, 00:26:49.600 "state": "online", 00:26:49.600 "raid_level": "raid1", 00:26:49.600 "superblock": false, 00:26:49.600 "num_base_bdevs": 4, 00:26:49.600 "num_base_bdevs_discovered": 4, 00:26:49.600 "num_base_bdevs_operational": 4, 00:26:49.600 "base_bdevs_list": [ 00:26:49.600 { 00:26:49.600 "name": "NewBaseBdev", 00:26:49.600 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:49.600 "is_configured": true, 00:26:49.600 "data_offset": 0, 00:26:49.600 "data_size": 65536 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "name": "BaseBdev2", 00:26:49.600 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:49.600 "is_configured": true, 00:26:49.600 "data_offset": 0, 00:26:49.600 "data_size": 65536 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "name": "BaseBdev3", 00:26:49.600 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:49.600 "is_configured": true, 00:26:49.600 "data_offset": 0, 00:26:49.600 "data_size": 65536 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "name": "BaseBdev4", 00:26:49.600 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:49.600 "is_configured": true, 00:26:49.600 "data_offset": 0, 00:26:49.600 "data_size": 65536 00:26:49.600 } 00:26:49.600 ] 00:26:49.600 } 00:26:49.600 } 00:26:49.600 }' 00:26:49.600 21:40:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:49.600 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:49.600 BaseBdev2 00:26:49.600 BaseBdev3 00:26:49.600 BaseBdev4' 00:26:49.600 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:49.600 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:49.600 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:49.859 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:49.859 "name": "NewBaseBdev", 00:26:49.859 "aliases": [ 00:26:49.859 "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6" 00:26:49.859 ], 00:26:49.859 "product_name": "Malloc disk", 00:26:49.859 "block_size": 512, 00:26:49.859 "num_blocks": 65536, 00:26:49.859 "uuid": "d38bf723-0a1e-4a2f-8fb6-2c259165e4f6", 00:26:49.859 "assigned_rate_limits": { 00:26:49.859 "rw_ios_per_sec": 0, 00:26:49.859 "rw_mbytes_per_sec": 0, 00:26:49.859 "r_mbytes_per_sec": 0, 00:26:49.859 "w_mbytes_per_sec": 0 00:26:49.859 }, 00:26:49.859 "claimed": true, 00:26:49.859 "claim_type": "exclusive_write", 00:26:49.859 "zoned": false, 00:26:49.859 "supported_io_types": { 00:26:49.859 "read": true, 00:26:49.859 "write": true, 00:26:49.859 "unmap": true, 00:26:49.859 "flush": true, 00:26:49.859 "reset": true, 00:26:49.859 "nvme_admin": false, 00:26:49.859 "nvme_io": false, 00:26:49.859 "nvme_io_md": false, 00:26:49.859 "write_zeroes": true, 00:26:49.859 "zcopy": true, 00:26:49.859 "get_zone_info": false, 00:26:49.859 "zone_management": false, 00:26:49.859 "zone_append": false, 00:26:49.859 "compare": false, 00:26:49.859 "compare_and_write": false, 00:26:49.859 "abort": true, 00:26:49.859 "seek_hole": false, 00:26:49.859 "seek_data": false, 00:26:49.859 "copy": true, 00:26:49.859 "nvme_iov_md": false 00:26:49.859 }, 00:26:49.859 "memory_domains": [ 00:26:49.859 { 00:26:49.859 "dma_device_id": "system", 00:26:49.859 "dma_device_type": 1 00:26:49.859 }, 00:26:49.859 { 00:26:49.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.859 "dma_device_type": 2 00:26:49.859 } 00:26:49.859 ], 00:26:49.859 "driver_specific": {} 00:26:49.859 }' 00:26:49.859 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:49.859 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:49.859 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:49.859 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:49.859 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:49.859 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:49.859 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.117 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.117 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:50.117 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.117 21:40:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.117 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:50.117 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:50.117 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:50.117 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:50.375 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:50.375 "name": "BaseBdev2", 00:26:50.375 "aliases": [ 00:26:50.375 "dc4651ba-dda6-4f0f-baa8-592265cf39f3" 00:26:50.375 ], 00:26:50.375 "product_name": "Malloc disk", 00:26:50.375 "block_size": 512, 00:26:50.375 "num_blocks": 65536, 00:26:50.375 "uuid": "dc4651ba-dda6-4f0f-baa8-592265cf39f3", 00:26:50.376 "assigned_rate_limits": { 00:26:50.376 "rw_ios_per_sec": 0, 00:26:50.376 "rw_mbytes_per_sec": 0, 00:26:50.376 "r_mbytes_per_sec": 0, 00:26:50.376 "w_mbytes_per_sec": 0 00:26:50.376 }, 00:26:50.376 "claimed": true, 00:26:50.376 "claim_type": "exclusive_write", 00:26:50.376 "zoned": false, 00:26:50.376 "supported_io_types": { 00:26:50.376 "read": true, 00:26:50.376 "write": true, 00:26:50.376 "unmap": true, 00:26:50.376 "flush": true, 00:26:50.376 "reset": true, 00:26:50.376 "nvme_admin": false, 00:26:50.376 "nvme_io": false, 00:26:50.376 "nvme_io_md": false, 00:26:50.376 "write_zeroes": true, 00:26:50.376 "zcopy": true, 00:26:50.376 "get_zone_info": false, 00:26:50.376 "zone_management": false, 00:26:50.376 "zone_append": false, 00:26:50.376 "compare": false, 00:26:50.376 "compare_and_write": false, 00:26:50.376 "abort": true, 00:26:50.376 "seek_hole": false, 00:26:50.376 "seek_data": false, 00:26:50.376 "copy": true, 00:26:50.376 "nvme_iov_md": false 00:26:50.376 }, 00:26:50.376 "memory_domains": [ 00:26:50.376 { 00:26:50.376 "dma_device_id": "system", 00:26:50.376 "dma_device_type": 1 00:26:50.376 }, 00:26:50.376 { 00:26:50.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.376 "dma_device_type": 2 00:26:50.376 } 00:26:50.376 ], 00:26:50.376 "driver_specific": {} 00:26:50.376 }' 00:26:50.376 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:50.376 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:50.633 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:50.633 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:50.633 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:50.633 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:50.633 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.633 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:50.633 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:50.633 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.891 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:50.891 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:50.891 21:40:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:50.891 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:50.891 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:51.149 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:51.149 "name": "BaseBdev3", 00:26:51.149 "aliases": [ 00:26:51.149 "8a51dca9-3c25-4190-b4b3-920c49374bbb" 00:26:51.149 ], 00:26:51.149 "product_name": "Malloc disk", 00:26:51.149 "block_size": 512, 00:26:51.149 "num_blocks": 65536, 00:26:51.149 "uuid": "8a51dca9-3c25-4190-b4b3-920c49374bbb", 00:26:51.149 "assigned_rate_limits": { 00:26:51.149 "rw_ios_per_sec": 0, 00:26:51.149 "rw_mbytes_per_sec": 0, 00:26:51.149 "r_mbytes_per_sec": 0, 00:26:51.149 "w_mbytes_per_sec": 0 00:26:51.149 }, 00:26:51.149 "claimed": true, 00:26:51.149 "claim_type": "exclusive_write", 00:26:51.149 "zoned": false, 00:26:51.149 "supported_io_types": { 00:26:51.149 "read": true, 00:26:51.149 "write": true, 00:26:51.149 "unmap": true, 00:26:51.149 "flush": true, 00:26:51.149 "reset": true, 00:26:51.149 "nvme_admin": false, 00:26:51.149 "nvme_io": false, 00:26:51.149 "nvme_io_md": false, 00:26:51.149 "write_zeroes": true, 00:26:51.149 "zcopy": true, 00:26:51.149 "get_zone_info": false, 00:26:51.149 "zone_management": false, 00:26:51.149 "zone_append": false, 00:26:51.149 "compare": false, 00:26:51.149 "compare_and_write": false, 00:26:51.149 "abort": true, 00:26:51.149 "seek_hole": false, 00:26:51.149 "seek_data": false, 00:26:51.149 "copy": true, 00:26:51.149 "nvme_iov_md": false 00:26:51.149 }, 00:26:51.149 "memory_domains": [ 00:26:51.149 { 00:26:51.149 "dma_device_id": "system", 00:26:51.150 "dma_device_type": 1 00:26:51.150 }, 00:26:51.150 { 00:26:51.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.150 "dma_device_type": 2 00:26:51.150 } 00:26:51.150 ], 00:26:51.150 "driver_specific": {} 00:26:51.150 }' 00:26:51.150 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:51.150 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:51.150 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:51.150 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:51.150 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:51.407 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:51.665 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:51.665 "name": "BaseBdev4", 00:26:51.665 "aliases": [ 00:26:51.665 "3faa74d5-3410-422d-9edb-ff25960ce442" 00:26:51.665 ], 00:26:51.665 "product_name": "Malloc disk", 00:26:51.665 "block_size": 512, 00:26:51.665 "num_blocks": 65536, 00:26:51.665 "uuid": "3faa74d5-3410-422d-9edb-ff25960ce442", 00:26:51.665 "assigned_rate_limits": { 00:26:51.665 "rw_ios_per_sec": 0, 00:26:51.665 "rw_mbytes_per_sec": 0, 00:26:51.665 "r_mbytes_per_sec": 0, 00:26:51.665 "w_mbytes_per_sec": 0 00:26:51.665 }, 00:26:51.665 "claimed": true, 00:26:51.665 "claim_type": "exclusive_write", 00:26:51.665 "zoned": false, 00:26:51.665 "supported_io_types": { 00:26:51.665 "read": true, 00:26:51.665 "write": true, 00:26:51.665 "unmap": true, 00:26:51.665 "flush": true, 00:26:51.665 "reset": true, 00:26:51.665 "nvme_admin": false, 00:26:51.665 "nvme_io": false, 00:26:51.665 "nvme_io_md": false, 00:26:51.665 "write_zeroes": true, 00:26:51.665 "zcopy": true, 00:26:51.665 "get_zone_info": false, 00:26:51.665 "zone_management": false, 00:26:51.665 "zone_append": false, 00:26:51.665 "compare": false, 00:26:51.665 "compare_and_write": false, 00:26:51.665 "abort": true, 00:26:51.665 "seek_hole": false, 00:26:51.665 "seek_data": false, 00:26:51.665 "copy": true, 00:26:51.665 "nvme_iov_md": false 00:26:51.665 }, 00:26:51.665 "memory_domains": [ 00:26:51.665 { 00:26:51.665 "dma_device_id": "system", 00:26:51.665 "dma_device_type": 1 00:26:51.665 }, 00:26:51.665 { 00:26:51.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.665 "dma_device_type": 2 00:26:51.665 } 00:26:51.665 ], 00:26:51.665 "driver_specific": {} 00:26:51.665 }' 00:26:51.665 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:51.665 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:51.923 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:51.923 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:51.923 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:51.923 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:51.923 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:51.923 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:51.923 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:51.923 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:52.181 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:52.182 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:52.182 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:52.438 [2024-07-15 21:40:25.616610] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:52.438 [2024-07-15 21:40:25.616713] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:26:52.438 [2024-07-15 21:40:25.616804] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:52.438 [2024-07-15 21:40:25.617101] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:52.438 [2024-07-15 21:40:25.617136] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 141776 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 141776 ']' 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 141776 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141776 00:26:52.438 killing process with pid 141776 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141776' 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 141776 00:26:52.438 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 141776 00:26:52.438 [2024-07-15 21:40:25.660020] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:52.696 [2024-07-15 21:40:26.051844] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:26:54.070 00:26:54.070 real 0m32.457s 00:26:54.070 user 1m0.128s 00:26:54.070 sys 0m3.986s 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.070 ************************************ 00:26:54.070 END TEST raid_state_function_test 00:26:54.070 ************************************ 00:26:54.070 21:40:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:54.070 21:40:27 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:26:54.070 21:40:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:54.070 21:40:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.070 21:40:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:54.070 ************************************ 00:26:54.070 START TEST raid_state_function_test_sb 00:26:54.070 ************************************ 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:54.070 21:40:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=142905 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 142905' 00:26:54.070 Process raid pid: 142905 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 142905 /var/tmp/spdk-raid.sock 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 142905 ']' 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.070 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:54.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:54.071 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.071 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.329 [2024-07-15 21:40:27.470610] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:26:54.329 [2024-07-15 21:40:27.470811] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.329 [2024-07-15 21:40:27.612192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.586 [2024-07-15 21:40:27.823536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.844 [2024-07-15 21:40:28.022212] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:55.102 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.102 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:26:55.102 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:55.102 [2024-07-15 21:40:28.472151] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:55.102 [2024-07-15 21:40:28.472295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:55.102 [2024-07-15 21:40:28.472327] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:55.102 [2024-07-15 21:40:28.472363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:55.102 [2024-07-15 21:40:28.472380] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:55.102 [2024-07-15 21:40:28.472401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:55.102 [2024-07-15 21:40:28.472415] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:55.102 [2024-07-15 21:40:28.472462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.360 "name": "Existed_Raid", 00:26:55.360 "uuid": "cf0696dc-3df8-4a5b-9ed8-1226748544a2", 00:26:55.360 "strip_size_kb": 0, 00:26:55.360 "state": "configuring", 00:26:55.360 "raid_level": "raid1", 00:26:55.360 "superblock": true, 00:26:55.360 "num_base_bdevs": 4, 00:26:55.360 "num_base_bdevs_discovered": 0, 00:26:55.360 "num_base_bdevs_operational": 4, 00:26:55.360 "base_bdevs_list": [ 00:26:55.360 { 00:26:55.360 "name": "BaseBdev1", 00:26:55.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.360 "is_configured": false, 00:26:55.360 "data_offset": 0, 00:26:55.360 "data_size": 0 00:26:55.360 }, 00:26:55.360 { 00:26:55.360 "name": "BaseBdev2", 00:26:55.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.360 "is_configured": false, 00:26:55.360 "data_offset": 0, 00:26:55.360 "data_size": 0 00:26:55.360 }, 00:26:55.360 { 00:26:55.360 "name": "BaseBdev3", 00:26:55.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.360 "is_configured": false, 00:26:55.360 "data_offset": 0, 00:26:55.360 "data_size": 0 00:26:55.360 }, 00:26:55.360 { 00:26:55.360 "name": "BaseBdev4", 00:26:55.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.360 "is_configured": false, 00:26:55.360 "data_offset": 0, 00:26:55.360 "data_size": 0 00:26:55.360 } 00:26:55.360 ] 00:26:55.360 }' 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.360 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.925 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:56.237 [2024-07-15 21:40:29.474287] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:56.237 [2024-07-15 21:40:29.474382] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:26:56.237 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:56.496 [2024-07-15 21:40:29.657993] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:56.496 [2024-07-15 
21:40:29.658095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:56.496 [2024-07-15 21:40:29.658120] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:56.496 [2024-07-15 21:40:29.658169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:56.496 [2024-07-15 21:40:29.658213] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:56.496 [2024-07-15 21:40:29.658277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:56.496 [2024-07-15 21:40:29.658317] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:56.496 [2024-07-15 21:40:29.658348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:56.496 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:56.755 [2024-07-15 21:40:29.879891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:56.755 BaseBdev1 00:26:56.755 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:56.755 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:56.755 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:56.755 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:56.755 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:56.755 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:56.755 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:56.755 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:57.013 [ 00:26:57.013 { 00:26:57.014 "name": "BaseBdev1", 00:26:57.014 "aliases": [ 00:26:57.014 "ffa08315-f23f-4d98-aab4-bc02a867bf13" 00:26:57.014 ], 00:26:57.014 "product_name": "Malloc disk", 00:26:57.014 "block_size": 512, 00:26:57.014 "num_blocks": 65536, 00:26:57.014 "uuid": "ffa08315-f23f-4d98-aab4-bc02a867bf13", 00:26:57.014 "assigned_rate_limits": { 00:26:57.014 "rw_ios_per_sec": 0, 00:26:57.014 "rw_mbytes_per_sec": 0, 00:26:57.014 "r_mbytes_per_sec": 0, 00:26:57.014 "w_mbytes_per_sec": 0 00:26:57.014 }, 00:26:57.014 "claimed": true, 00:26:57.014 "claim_type": "exclusive_write", 00:26:57.014 "zoned": false, 00:26:57.014 "supported_io_types": { 00:26:57.014 "read": true, 00:26:57.014 "write": true, 00:26:57.014 "unmap": true, 00:26:57.014 "flush": true, 00:26:57.014 "reset": true, 00:26:57.014 "nvme_admin": false, 00:26:57.014 "nvme_io": false, 00:26:57.014 "nvme_io_md": false, 00:26:57.014 "write_zeroes": true, 00:26:57.014 "zcopy": true, 00:26:57.014 "get_zone_info": false, 00:26:57.014 "zone_management": false, 00:26:57.014 "zone_append": false, 00:26:57.014 "compare": false, 00:26:57.014 "compare_and_write": false, 00:26:57.014 "abort": true, 00:26:57.014 "seek_hole": false, 00:26:57.014 
"seek_data": false, 00:26:57.014 "copy": true, 00:26:57.014 "nvme_iov_md": false 00:26:57.014 }, 00:26:57.014 "memory_domains": [ 00:26:57.014 { 00:26:57.014 "dma_device_id": "system", 00:26:57.014 "dma_device_type": 1 00:26:57.014 }, 00:26:57.014 { 00:26:57.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.014 "dma_device_type": 2 00:26:57.014 } 00:26:57.014 ], 00:26:57.014 "driver_specific": {} 00:26:57.014 } 00:26:57.014 ] 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.014 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:57.273 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.273 "name": "Existed_Raid", 00:26:57.273 "uuid": "c86bee15-8288-488e-a560-514ea897b2e0", 00:26:57.273 "strip_size_kb": 0, 00:26:57.273 "state": "configuring", 00:26:57.273 "raid_level": "raid1", 00:26:57.273 "superblock": true, 00:26:57.273 "num_base_bdevs": 4, 00:26:57.273 "num_base_bdevs_discovered": 1, 00:26:57.273 "num_base_bdevs_operational": 4, 00:26:57.273 "base_bdevs_list": [ 00:26:57.273 { 00:26:57.273 "name": "BaseBdev1", 00:26:57.273 "uuid": "ffa08315-f23f-4d98-aab4-bc02a867bf13", 00:26:57.273 "is_configured": true, 00:26:57.273 "data_offset": 2048, 00:26:57.273 "data_size": 63488 00:26:57.273 }, 00:26:57.273 { 00:26:57.273 "name": "BaseBdev2", 00:26:57.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.273 "is_configured": false, 00:26:57.273 "data_offset": 0, 00:26:57.273 "data_size": 0 00:26:57.273 }, 00:26:57.273 { 00:26:57.273 "name": "BaseBdev3", 00:26:57.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.273 "is_configured": false, 00:26:57.273 "data_offset": 0, 00:26:57.273 "data_size": 0 00:26:57.273 }, 00:26:57.273 { 00:26:57.273 "name": "BaseBdev4", 00:26:57.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.273 "is_configured": false, 00:26:57.273 "data_offset": 0, 00:26:57.273 "data_size": 0 00:26:57.273 } 00:26:57.273 ] 00:26:57.273 }' 00:26:57.273 21:40:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.273 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.840 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:58.098 [2024-07-15 21:40:31.249576] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:58.098 [2024-07-15 21:40:31.249695] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:26:58.098 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:58.098 [2024-07-15 21:40:31.457264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:58.098 [2024-07-15 21:40:31.459106] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:58.098 [2024-07-15 21:40:31.459528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:58.098 [2024-07-15 21:40:31.459583] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:58.098 [2024-07-15 21:40:31.459710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:58.098 [2024-07-15 21:40:31.459754] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:58.098 [2024-07-15 21:40:31.459862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:58.357 21:40:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:58.357 "name": "Existed_Raid", 00:26:58.357 "uuid": "51b312e7-8b13-43ff-b8a6-4debefd21963", 00:26:58.357 "strip_size_kb": 0, 00:26:58.357 "state": "configuring", 00:26:58.357 "raid_level": "raid1", 00:26:58.357 "superblock": true, 00:26:58.357 "num_base_bdevs": 4, 00:26:58.357 "num_base_bdevs_discovered": 1, 00:26:58.357 "num_base_bdevs_operational": 4, 00:26:58.357 "base_bdevs_list": [ 00:26:58.357 { 00:26:58.357 "name": "BaseBdev1", 00:26:58.357 "uuid": "ffa08315-f23f-4d98-aab4-bc02a867bf13", 00:26:58.357 "is_configured": true, 00:26:58.357 "data_offset": 2048, 00:26:58.357 "data_size": 63488 00:26:58.357 }, 00:26:58.357 { 00:26:58.357 "name": "BaseBdev2", 00:26:58.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.357 "is_configured": false, 00:26:58.357 "data_offset": 0, 00:26:58.357 "data_size": 0 00:26:58.357 }, 00:26:58.357 { 00:26:58.357 "name": "BaseBdev3", 00:26:58.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.357 "is_configured": false, 00:26:58.357 "data_offset": 0, 00:26:58.357 "data_size": 0 00:26:58.357 }, 00:26:58.357 { 00:26:58.357 "name": "BaseBdev4", 00:26:58.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.357 "is_configured": false, 00:26:58.357 "data_offset": 0, 00:26:58.357 "data_size": 0 00:26:58.357 } 00:26:58.357 ] 00:26:58.357 }' 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:58.357 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.294 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:59.294 [2024-07-15 21:40:32.554994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:59.294 BaseBdev2 00:26:59.294 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:59.294 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:59.294 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:59.294 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:59.294 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:59.294 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:59.294 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:59.553 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:59.813 [ 00:26:59.813 { 00:26:59.813 "name": "BaseBdev2", 00:26:59.813 "aliases": [ 00:26:59.813 "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701" 00:26:59.813 ], 00:26:59.813 "product_name": "Malloc disk", 00:26:59.813 "block_size": 512, 00:26:59.813 "num_blocks": 65536, 00:26:59.813 "uuid": "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701", 00:26:59.813 "assigned_rate_limits": { 00:26:59.813 "rw_ios_per_sec": 0, 00:26:59.813 "rw_mbytes_per_sec": 0, 00:26:59.813 "r_mbytes_per_sec": 0, 00:26:59.813 "w_mbytes_per_sec": 0 
00:26:59.813 }, 00:26:59.813 "claimed": true, 00:26:59.813 "claim_type": "exclusive_write", 00:26:59.813 "zoned": false, 00:26:59.813 "supported_io_types": { 00:26:59.813 "read": true, 00:26:59.813 "write": true, 00:26:59.813 "unmap": true, 00:26:59.813 "flush": true, 00:26:59.813 "reset": true, 00:26:59.813 "nvme_admin": false, 00:26:59.813 "nvme_io": false, 00:26:59.813 "nvme_io_md": false, 00:26:59.813 "write_zeroes": true, 00:26:59.813 "zcopy": true, 00:26:59.813 "get_zone_info": false, 00:26:59.813 "zone_management": false, 00:26:59.813 "zone_append": false, 00:26:59.813 "compare": false, 00:26:59.813 "compare_and_write": false, 00:26:59.813 "abort": true, 00:26:59.813 "seek_hole": false, 00:26:59.813 "seek_data": false, 00:26:59.813 "copy": true, 00:26:59.813 "nvme_iov_md": false 00:26:59.813 }, 00:26:59.813 "memory_domains": [ 00:26:59.813 { 00:26:59.813 "dma_device_id": "system", 00:26:59.813 "dma_device_type": 1 00:26:59.813 }, 00:26:59.813 { 00:26:59.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.813 "dma_device_type": 2 00:26:59.813 } 00:26:59.813 ], 00:26:59.813 "driver_specific": {} 00:26:59.813 } 00:26:59.813 ] 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.813 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.813 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.813 "name": "Existed_Raid", 00:26:59.813 "uuid": "51b312e7-8b13-43ff-b8a6-4debefd21963", 00:26:59.813 "strip_size_kb": 0, 00:26:59.813 "state": "configuring", 00:26:59.813 "raid_level": "raid1", 00:26:59.813 "superblock": true, 00:26:59.813 "num_base_bdevs": 4, 00:26:59.813 "num_base_bdevs_discovered": 2, 00:26:59.813 "num_base_bdevs_operational": 4, 00:26:59.813 "base_bdevs_list": [ 00:26:59.813 { 00:26:59.813 "name": "BaseBdev1", 00:26:59.813 "uuid": 
"ffa08315-f23f-4d98-aab4-bc02a867bf13", 00:26:59.813 "is_configured": true, 00:26:59.813 "data_offset": 2048, 00:26:59.813 "data_size": 63488 00:26:59.813 }, 00:26:59.813 { 00:26:59.813 "name": "BaseBdev2", 00:26:59.813 "uuid": "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701", 00:26:59.813 "is_configured": true, 00:26:59.813 "data_offset": 2048, 00:26:59.813 "data_size": 63488 00:26:59.813 }, 00:26:59.813 { 00:26:59.813 "name": "BaseBdev3", 00:26:59.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.813 "is_configured": false, 00:26:59.813 "data_offset": 0, 00:26:59.813 "data_size": 0 00:26:59.813 }, 00:26:59.813 { 00:26:59.813 "name": "BaseBdev4", 00:26:59.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.813 "is_configured": false, 00:26:59.813 "data_offset": 0, 00:26:59.813 "data_size": 0 00:26:59.813 } 00:26:59.813 ] 00:26:59.813 }' 00:26:59.813 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.813 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.385 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:00.643 [2024-07-15 21:40:33.943817] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:00.643 BaseBdev3 00:27:00.643 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:27:00.643 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:00.643 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:00.643 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:00.643 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:00.643 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:00.643 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:00.901 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:01.159 [ 00:27:01.159 { 00:27:01.159 "name": "BaseBdev3", 00:27:01.159 "aliases": [ 00:27:01.159 "0fa3b736-9c03-4a8a-9fe0-57fad0368e28" 00:27:01.159 ], 00:27:01.159 "product_name": "Malloc disk", 00:27:01.159 "block_size": 512, 00:27:01.159 "num_blocks": 65536, 00:27:01.159 "uuid": "0fa3b736-9c03-4a8a-9fe0-57fad0368e28", 00:27:01.159 "assigned_rate_limits": { 00:27:01.159 "rw_ios_per_sec": 0, 00:27:01.159 "rw_mbytes_per_sec": 0, 00:27:01.159 "r_mbytes_per_sec": 0, 00:27:01.159 "w_mbytes_per_sec": 0 00:27:01.159 }, 00:27:01.159 "claimed": true, 00:27:01.159 "claim_type": "exclusive_write", 00:27:01.159 "zoned": false, 00:27:01.159 "supported_io_types": { 00:27:01.159 "read": true, 00:27:01.159 "write": true, 00:27:01.159 "unmap": true, 00:27:01.159 "flush": true, 00:27:01.159 "reset": true, 00:27:01.159 "nvme_admin": false, 00:27:01.159 "nvme_io": false, 00:27:01.159 "nvme_io_md": false, 00:27:01.159 "write_zeroes": true, 00:27:01.159 "zcopy": true, 00:27:01.159 "get_zone_info": false, 00:27:01.159 "zone_management": false, 
00:27:01.159 "zone_append": false, 00:27:01.159 "compare": false, 00:27:01.159 "compare_and_write": false, 00:27:01.159 "abort": true, 00:27:01.159 "seek_hole": false, 00:27:01.159 "seek_data": false, 00:27:01.159 "copy": true, 00:27:01.159 "nvme_iov_md": false 00:27:01.159 }, 00:27:01.159 "memory_domains": [ 00:27:01.159 { 00:27:01.159 "dma_device_id": "system", 00:27:01.159 "dma_device_type": 1 00:27:01.159 }, 00:27:01.159 { 00:27:01.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:01.159 "dma_device_type": 2 00:27:01.159 } 00:27:01.159 ], 00:27:01.159 "driver_specific": {} 00:27:01.159 } 00:27:01.159 ] 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.159 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.160 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.160 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.160 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:01.160 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.417 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.417 "name": "Existed_Raid", 00:27:01.417 "uuid": "51b312e7-8b13-43ff-b8a6-4debefd21963", 00:27:01.417 "strip_size_kb": 0, 00:27:01.417 "state": "configuring", 00:27:01.417 "raid_level": "raid1", 00:27:01.417 "superblock": true, 00:27:01.417 "num_base_bdevs": 4, 00:27:01.417 "num_base_bdevs_discovered": 3, 00:27:01.417 "num_base_bdevs_operational": 4, 00:27:01.417 "base_bdevs_list": [ 00:27:01.417 { 00:27:01.417 "name": "BaseBdev1", 00:27:01.417 "uuid": "ffa08315-f23f-4d98-aab4-bc02a867bf13", 00:27:01.417 "is_configured": true, 00:27:01.417 "data_offset": 2048, 00:27:01.417 "data_size": 63488 00:27:01.417 }, 00:27:01.417 { 00:27:01.417 "name": "BaseBdev2", 00:27:01.417 "uuid": "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701", 00:27:01.417 "is_configured": true, 00:27:01.417 "data_offset": 2048, 00:27:01.417 "data_size": 63488 00:27:01.417 }, 00:27:01.417 { 00:27:01.417 "name": "BaseBdev3", 00:27:01.417 "uuid": "0fa3b736-9c03-4a8a-9fe0-57fad0368e28", 00:27:01.417 "is_configured": true, 
00:27:01.417 "data_offset": 2048, 00:27:01.417 "data_size": 63488 00:27:01.417 }, 00:27:01.417 { 00:27:01.417 "name": "BaseBdev4", 00:27:01.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.417 "is_configured": false, 00:27:01.417 "data_offset": 0, 00:27:01.417 "data_size": 0 00:27:01.417 } 00:27:01.417 ] 00:27:01.417 }' 00:27:01.417 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.417 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.030 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:02.287 [2024-07-15 21:40:35.402885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:02.287 [2024-07-15 21:40:35.403327] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:27:02.287 [2024-07-15 21:40:35.403385] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:02.287 BaseBdev4 00:27:02.287 [2024-07-15 21:40:35.403584] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:27:02.287 [2024-07-15 21:40:35.403953] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:27:02.287 [2024-07-15 21:40:35.404001] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:27:02.287 [2024-07-15 21:40:35.404181] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.287 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:27:02.287 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:27:02.287 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:02.287 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:02.287 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:02.287 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:02.287 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:02.287 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:02.545 [ 00:27:02.545 { 00:27:02.545 "name": "BaseBdev4", 00:27:02.545 "aliases": [ 00:27:02.545 "23138913-7a21-4819-bf6b-340addf45c1e" 00:27:02.545 ], 00:27:02.545 "product_name": "Malloc disk", 00:27:02.545 "block_size": 512, 00:27:02.545 "num_blocks": 65536, 00:27:02.545 "uuid": "23138913-7a21-4819-bf6b-340addf45c1e", 00:27:02.545 "assigned_rate_limits": { 00:27:02.545 "rw_ios_per_sec": 0, 00:27:02.545 "rw_mbytes_per_sec": 0, 00:27:02.545 "r_mbytes_per_sec": 0, 00:27:02.545 "w_mbytes_per_sec": 0 00:27:02.545 }, 00:27:02.545 "claimed": true, 00:27:02.545 "claim_type": "exclusive_write", 00:27:02.545 "zoned": false, 00:27:02.545 "supported_io_types": { 00:27:02.545 "read": true, 00:27:02.545 "write": true, 00:27:02.545 "unmap": true, 00:27:02.545 "flush": true, 00:27:02.545 "reset": 
true, 00:27:02.545 "nvme_admin": false, 00:27:02.545 "nvme_io": false, 00:27:02.545 "nvme_io_md": false, 00:27:02.545 "write_zeroes": true, 00:27:02.545 "zcopy": true, 00:27:02.545 "get_zone_info": false, 00:27:02.545 "zone_management": false, 00:27:02.545 "zone_append": false, 00:27:02.545 "compare": false, 00:27:02.545 "compare_and_write": false, 00:27:02.545 "abort": true, 00:27:02.545 "seek_hole": false, 00:27:02.545 "seek_data": false, 00:27:02.545 "copy": true, 00:27:02.545 "nvme_iov_md": false 00:27:02.545 }, 00:27:02.545 "memory_domains": [ 00:27:02.545 { 00:27:02.545 "dma_device_id": "system", 00:27:02.545 "dma_device_type": 1 00:27:02.545 }, 00:27:02.545 { 00:27:02.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.546 "dma_device_type": 2 00:27:02.546 } 00:27:02.546 ], 00:27:02.546 "driver_specific": {} 00:27:02.546 } 00:27:02.546 ] 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.546 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.804 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.804 "name": "Existed_Raid", 00:27:02.804 "uuid": "51b312e7-8b13-43ff-b8a6-4debefd21963", 00:27:02.804 "strip_size_kb": 0, 00:27:02.804 "state": "online", 00:27:02.804 "raid_level": "raid1", 00:27:02.804 "superblock": true, 00:27:02.804 "num_base_bdevs": 4, 00:27:02.804 "num_base_bdevs_discovered": 4, 00:27:02.804 "num_base_bdevs_operational": 4, 00:27:02.804 "base_bdevs_list": [ 00:27:02.804 { 00:27:02.804 "name": "BaseBdev1", 00:27:02.804 "uuid": "ffa08315-f23f-4d98-aab4-bc02a867bf13", 00:27:02.804 "is_configured": true, 00:27:02.804 "data_offset": 2048, 00:27:02.804 "data_size": 63488 00:27:02.804 }, 00:27:02.804 { 00:27:02.804 "name": "BaseBdev2", 00:27:02.804 "uuid": "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701", 00:27:02.804 "is_configured": true, 
00:27:02.804 "data_offset": 2048, 00:27:02.804 "data_size": 63488 00:27:02.804 }, 00:27:02.804 { 00:27:02.804 "name": "BaseBdev3", 00:27:02.804 "uuid": "0fa3b736-9c03-4a8a-9fe0-57fad0368e28", 00:27:02.804 "is_configured": true, 00:27:02.804 "data_offset": 2048, 00:27:02.804 "data_size": 63488 00:27:02.804 }, 00:27:02.804 { 00:27:02.804 "name": "BaseBdev4", 00:27:02.804 "uuid": "23138913-7a21-4819-bf6b-340addf45c1e", 00:27:02.804 "is_configured": true, 00:27:02.804 "data_offset": 2048, 00:27:02.804 "data_size": 63488 00:27:02.804 } 00:27:02.804 ] 00:27:02.804 }' 00:27:02.804 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.804 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.370 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:03.370 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:03.370 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:03.370 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:03.370 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:03.370 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:27:03.370 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:03.370 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:03.628 [2024-07-15 21:40:36.745032] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.628 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:03.628 "name": "Existed_Raid", 00:27:03.628 "aliases": [ 00:27:03.628 "51b312e7-8b13-43ff-b8a6-4debefd21963" 00:27:03.628 ], 00:27:03.628 "product_name": "Raid Volume", 00:27:03.628 "block_size": 512, 00:27:03.628 "num_blocks": 63488, 00:27:03.628 "uuid": "51b312e7-8b13-43ff-b8a6-4debefd21963", 00:27:03.628 "assigned_rate_limits": { 00:27:03.628 "rw_ios_per_sec": 0, 00:27:03.628 "rw_mbytes_per_sec": 0, 00:27:03.628 "r_mbytes_per_sec": 0, 00:27:03.628 "w_mbytes_per_sec": 0 00:27:03.628 }, 00:27:03.628 "claimed": false, 00:27:03.628 "zoned": false, 00:27:03.628 "supported_io_types": { 00:27:03.628 "read": true, 00:27:03.628 "write": true, 00:27:03.628 "unmap": false, 00:27:03.628 "flush": false, 00:27:03.628 "reset": true, 00:27:03.628 "nvme_admin": false, 00:27:03.628 "nvme_io": false, 00:27:03.628 "nvme_io_md": false, 00:27:03.628 "write_zeroes": true, 00:27:03.628 "zcopy": false, 00:27:03.628 "get_zone_info": false, 00:27:03.628 "zone_management": false, 00:27:03.628 "zone_append": false, 00:27:03.628 "compare": false, 00:27:03.628 "compare_and_write": false, 00:27:03.628 "abort": false, 00:27:03.628 "seek_hole": false, 00:27:03.628 "seek_data": false, 00:27:03.628 "copy": false, 00:27:03.628 "nvme_iov_md": false 00:27:03.628 }, 00:27:03.628 "memory_domains": [ 00:27:03.628 { 00:27:03.628 "dma_device_id": "system", 00:27:03.628 "dma_device_type": 1 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.628 "dma_device_type": 2 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "dma_device_id": 
"system", 00:27:03.628 "dma_device_type": 1 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.628 "dma_device_type": 2 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "dma_device_id": "system", 00:27:03.628 "dma_device_type": 1 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.628 "dma_device_type": 2 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "dma_device_id": "system", 00:27:03.628 "dma_device_type": 1 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.628 "dma_device_type": 2 00:27:03.628 } 00:27:03.628 ], 00:27:03.628 "driver_specific": { 00:27:03.628 "raid": { 00:27:03.628 "uuid": "51b312e7-8b13-43ff-b8a6-4debefd21963", 00:27:03.628 "strip_size_kb": 0, 00:27:03.628 "state": "online", 00:27:03.628 "raid_level": "raid1", 00:27:03.628 "superblock": true, 00:27:03.628 "num_base_bdevs": 4, 00:27:03.628 "num_base_bdevs_discovered": 4, 00:27:03.628 "num_base_bdevs_operational": 4, 00:27:03.628 "base_bdevs_list": [ 00:27:03.628 { 00:27:03.628 "name": "BaseBdev1", 00:27:03.628 "uuid": "ffa08315-f23f-4d98-aab4-bc02a867bf13", 00:27:03.628 "is_configured": true, 00:27:03.628 "data_offset": 2048, 00:27:03.628 "data_size": 63488 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "name": "BaseBdev2", 00:27:03.628 "uuid": "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701", 00:27:03.628 "is_configured": true, 00:27:03.628 "data_offset": 2048, 00:27:03.628 "data_size": 63488 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "name": "BaseBdev3", 00:27:03.628 "uuid": "0fa3b736-9c03-4a8a-9fe0-57fad0368e28", 00:27:03.628 "is_configured": true, 00:27:03.628 "data_offset": 2048, 00:27:03.628 "data_size": 63488 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "name": "BaseBdev4", 00:27:03.628 "uuid": "23138913-7a21-4819-bf6b-340addf45c1e", 00:27:03.628 "is_configured": true, 00:27:03.628 "data_offset": 2048, 00:27:03.628 "data_size": 63488 00:27:03.628 } 00:27:03.628 ] 00:27:03.628 } 00:27:03.628 } 00:27:03.628 }' 00:27:03.628 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:03.628 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:03.628 BaseBdev2 00:27:03.628 BaseBdev3 00:27:03.628 BaseBdev4' 00:27:03.628 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:03.628 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:03.628 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:03.628 21:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:03.628 "name": "BaseBdev1", 00:27:03.628 "aliases": [ 00:27:03.628 "ffa08315-f23f-4d98-aab4-bc02a867bf13" 00:27:03.628 ], 00:27:03.628 "product_name": "Malloc disk", 00:27:03.628 "block_size": 512, 00:27:03.628 "num_blocks": 65536, 00:27:03.628 "uuid": "ffa08315-f23f-4d98-aab4-bc02a867bf13", 00:27:03.628 "assigned_rate_limits": { 00:27:03.628 "rw_ios_per_sec": 0, 00:27:03.628 "rw_mbytes_per_sec": 0, 00:27:03.628 "r_mbytes_per_sec": 0, 00:27:03.628 "w_mbytes_per_sec": 0 00:27:03.628 }, 00:27:03.628 "claimed": true, 00:27:03.628 "claim_type": "exclusive_write", 00:27:03.628 "zoned": false, 00:27:03.628 "supported_io_types": { 
00:27:03.628 "read": true, 00:27:03.628 "write": true, 00:27:03.628 "unmap": true, 00:27:03.628 "flush": true, 00:27:03.628 "reset": true, 00:27:03.628 "nvme_admin": false, 00:27:03.628 "nvme_io": false, 00:27:03.628 "nvme_io_md": false, 00:27:03.628 "write_zeroes": true, 00:27:03.628 "zcopy": true, 00:27:03.628 "get_zone_info": false, 00:27:03.628 "zone_management": false, 00:27:03.628 "zone_append": false, 00:27:03.628 "compare": false, 00:27:03.628 "compare_and_write": false, 00:27:03.628 "abort": true, 00:27:03.628 "seek_hole": false, 00:27:03.628 "seek_data": false, 00:27:03.628 "copy": true, 00:27:03.628 "nvme_iov_md": false 00:27:03.628 }, 00:27:03.628 "memory_domains": [ 00:27:03.628 { 00:27:03.628 "dma_device_id": "system", 00:27:03.628 "dma_device_type": 1 00:27:03.628 }, 00:27:03.628 { 00:27:03.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.628 "dma_device_type": 2 00:27:03.628 } 00:27:03.628 ], 00:27:03.628 "driver_specific": {} 00:27:03.628 }' 00:27:03.886 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.886 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.886 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:03.886 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.886 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.886 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:03.886 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.886 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.144 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:04.145 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.145 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.145 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:04.145 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:04.145 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:04.145 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:04.402 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:04.402 "name": "BaseBdev2", 00:27:04.402 "aliases": [ 00:27:04.402 "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701" 00:27:04.402 ], 00:27:04.402 "product_name": "Malloc disk", 00:27:04.402 "block_size": 512, 00:27:04.402 "num_blocks": 65536, 00:27:04.402 "uuid": "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701", 00:27:04.402 "assigned_rate_limits": { 00:27:04.402 "rw_ios_per_sec": 0, 00:27:04.402 "rw_mbytes_per_sec": 0, 00:27:04.402 "r_mbytes_per_sec": 0, 00:27:04.402 "w_mbytes_per_sec": 0 00:27:04.402 }, 00:27:04.402 "claimed": true, 00:27:04.402 "claim_type": "exclusive_write", 00:27:04.402 "zoned": false, 00:27:04.402 "supported_io_types": { 00:27:04.402 "read": true, 00:27:04.402 "write": true, 00:27:04.402 "unmap": true, 00:27:04.402 "flush": true, 00:27:04.402 "reset": true, 00:27:04.402 
"nvme_admin": false, 00:27:04.402 "nvme_io": false, 00:27:04.402 "nvme_io_md": false, 00:27:04.402 "write_zeroes": true, 00:27:04.403 "zcopy": true, 00:27:04.403 "get_zone_info": false, 00:27:04.403 "zone_management": false, 00:27:04.403 "zone_append": false, 00:27:04.403 "compare": false, 00:27:04.403 "compare_and_write": false, 00:27:04.403 "abort": true, 00:27:04.403 "seek_hole": false, 00:27:04.403 "seek_data": false, 00:27:04.403 "copy": true, 00:27:04.403 "nvme_iov_md": false 00:27:04.403 }, 00:27:04.403 "memory_domains": [ 00:27:04.403 { 00:27:04.403 "dma_device_id": "system", 00:27:04.403 "dma_device_type": 1 00:27:04.403 }, 00:27:04.403 { 00:27:04.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.403 "dma_device_type": 2 00:27:04.403 } 00:27:04.403 ], 00:27:04.403 "driver_specific": {} 00:27:04.403 }' 00:27:04.403 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.403 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.403 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:04.403 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.403 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.660 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:04.660 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.660 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.660 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:04.660 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.660 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.919 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:04.919 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:04.919 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:04.919 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:04.919 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:04.919 "name": "BaseBdev3", 00:27:04.919 "aliases": [ 00:27:04.919 "0fa3b736-9c03-4a8a-9fe0-57fad0368e28" 00:27:04.919 ], 00:27:04.919 "product_name": "Malloc disk", 00:27:04.919 "block_size": 512, 00:27:04.919 "num_blocks": 65536, 00:27:04.919 "uuid": "0fa3b736-9c03-4a8a-9fe0-57fad0368e28", 00:27:04.919 "assigned_rate_limits": { 00:27:04.919 "rw_ios_per_sec": 0, 00:27:04.919 "rw_mbytes_per_sec": 0, 00:27:04.919 "r_mbytes_per_sec": 0, 00:27:04.919 "w_mbytes_per_sec": 0 00:27:04.919 }, 00:27:04.919 "claimed": true, 00:27:04.919 "claim_type": "exclusive_write", 00:27:04.919 "zoned": false, 00:27:04.919 "supported_io_types": { 00:27:04.919 "read": true, 00:27:04.919 "write": true, 00:27:04.919 "unmap": true, 00:27:04.919 "flush": true, 00:27:04.919 "reset": true, 00:27:04.919 "nvme_admin": false, 00:27:04.919 "nvme_io": false, 00:27:04.919 "nvme_io_md": false, 00:27:04.919 "write_zeroes": true, 00:27:04.919 "zcopy": true, 
00:27:04.919 "get_zone_info": false, 00:27:04.919 "zone_management": false, 00:27:04.919 "zone_append": false, 00:27:04.919 "compare": false, 00:27:04.919 "compare_and_write": false, 00:27:04.919 "abort": true, 00:27:04.919 "seek_hole": false, 00:27:04.919 "seek_data": false, 00:27:04.919 "copy": true, 00:27:04.919 "nvme_iov_md": false 00:27:04.919 }, 00:27:04.919 "memory_domains": [ 00:27:04.919 { 00:27:04.919 "dma_device_id": "system", 00:27:04.919 "dma_device_type": 1 00:27:04.919 }, 00:27:04.919 { 00:27:04.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.919 "dma_device_type": 2 00:27:04.919 } 00:27:04.919 ], 00:27:04.919 "driver_specific": {} 00:27:04.919 }' 00:27:04.919 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:05.177 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:05.177 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:05.177 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:05.177 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:05.177 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:05.177 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:05.177 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:05.436 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:05.436 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.436 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.436 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:05.436 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:05.436 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:05.436 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:05.694 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:05.694 "name": "BaseBdev4", 00:27:05.694 "aliases": [ 00:27:05.694 "23138913-7a21-4819-bf6b-340addf45c1e" 00:27:05.694 ], 00:27:05.694 "product_name": "Malloc disk", 00:27:05.694 "block_size": 512, 00:27:05.694 "num_blocks": 65536, 00:27:05.694 "uuid": "23138913-7a21-4819-bf6b-340addf45c1e", 00:27:05.694 "assigned_rate_limits": { 00:27:05.694 "rw_ios_per_sec": 0, 00:27:05.694 "rw_mbytes_per_sec": 0, 00:27:05.694 "r_mbytes_per_sec": 0, 00:27:05.694 "w_mbytes_per_sec": 0 00:27:05.694 }, 00:27:05.694 "claimed": true, 00:27:05.694 "claim_type": "exclusive_write", 00:27:05.694 "zoned": false, 00:27:05.694 "supported_io_types": { 00:27:05.694 "read": true, 00:27:05.694 "write": true, 00:27:05.694 "unmap": true, 00:27:05.694 "flush": true, 00:27:05.694 "reset": true, 00:27:05.694 "nvme_admin": false, 00:27:05.694 "nvme_io": false, 00:27:05.694 "nvme_io_md": false, 00:27:05.694 "write_zeroes": true, 00:27:05.694 "zcopy": true, 00:27:05.694 "get_zone_info": false, 00:27:05.694 "zone_management": false, 00:27:05.694 "zone_append": false, 00:27:05.694 "compare": false, 
00:27:05.694 "compare_and_write": false, 00:27:05.694 "abort": true, 00:27:05.694 "seek_hole": false, 00:27:05.694 "seek_data": false, 00:27:05.694 "copy": true, 00:27:05.694 "nvme_iov_md": false 00:27:05.694 }, 00:27:05.694 "memory_domains": [ 00:27:05.694 { 00:27:05.694 "dma_device_id": "system", 00:27:05.694 "dma_device_type": 1 00:27:05.694 }, 00:27:05.694 { 00:27:05.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.694 "dma_device_type": 2 00:27:05.694 } 00:27:05.694 ], 00:27:05.694 "driver_specific": {} 00:27:05.694 }' 00:27:05.694 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:05.694 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:05.694 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:05.694 21:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:05.694 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:05.953 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:05.953 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:05.953 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:05.953 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:05.953 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.953 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.212 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:06.212 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:06.212 [2024-07-15 21:40:39.504159] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.473 "name": "Existed_Raid", 00:27:06.473 "uuid": "51b312e7-8b13-43ff-b8a6-4debefd21963", 00:27:06.473 "strip_size_kb": 0, 00:27:06.473 "state": "online", 00:27:06.473 "raid_level": "raid1", 00:27:06.473 "superblock": true, 00:27:06.473 "num_base_bdevs": 4, 00:27:06.473 "num_base_bdevs_discovered": 3, 00:27:06.473 "num_base_bdevs_operational": 3, 00:27:06.473 "base_bdevs_list": [ 00:27:06.473 { 00:27:06.473 "name": null, 00:27:06.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.473 "is_configured": false, 00:27:06.473 "data_offset": 2048, 00:27:06.473 "data_size": 63488 00:27:06.473 }, 00:27:06.473 { 00:27:06.473 "name": "BaseBdev2", 00:27:06.473 "uuid": "8ca042a8-41f0-4dfc-8dda-0f4bc1f6f701", 00:27:06.473 "is_configured": true, 00:27:06.473 "data_offset": 2048, 00:27:06.473 "data_size": 63488 00:27:06.473 }, 00:27:06.473 { 00:27:06.473 "name": "BaseBdev3", 00:27:06.473 "uuid": "0fa3b736-9c03-4a8a-9fe0-57fad0368e28", 00:27:06.473 "is_configured": true, 00:27:06.473 "data_offset": 2048, 00:27:06.473 "data_size": 63488 00:27:06.473 }, 00:27:06.473 { 00:27:06.473 "name": "BaseBdev4", 00:27:06.473 "uuid": "23138913-7a21-4819-bf6b-340addf45c1e", 00:27:06.473 "is_configured": true, 00:27:06.473 "data_offset": 2048, 00:27:06.473 "data_size": 63488 00:27:06.473 } 00:27:06.473 ] 00:27:06.473 }' 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.473 21:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.411 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:07.411 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:07.411 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.411 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:07.411 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:07.411 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:07.411 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:07.669 [2024-07-15 21:40:40.852639] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:07.669 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:07.669 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:07.669 21:40:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.669 21:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:07.928 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:07.928 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:07.928 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:08.187 [2024-07-15 21:40:41.329042] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:08.187 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:08.187 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:08.187 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.187 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:08.446 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:08.446 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:08.446 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:08.705 [2024-07-15 21:40:41.828567] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:08.705 [2024-07-15 21:40:41.828759] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:08.705 [2024-07-15 21:40:41.920922] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:08.705 [2024-07-15 21:40:41.921034] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:08.705 [2024-07-15 21:40:41.921056] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:27:08.705 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:08.705 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:08.705 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.705 21:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:08.965 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:08.965 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:08.965 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:27:08.965 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:27:08.965 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:08.965 21:40:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:09.222 BaseBdev2 00:27:09.222 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:27:09.222 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:09.222 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:09.222 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:09.222 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:09.222 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:09.222 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:09.223 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:09.481 [ 00:27:09.481 { 00:27:09.481 "name": "BaseBdev2", 00:27:09.481 "aliases": [ 00:27:09.481 "f2268f72-797a-495c-a2a6-6effaf17372c" 00:27:09.481 ], 00:27:09.481 "product_name": "Malloc disk", 00:27:09.481 "block_size": 512, 00:27:09.481 "num_blocks": 65536, 00:27:09.481 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:09.481 "assigned_rate_limits": { 00:27:09.481 "rw_ios_per_sec": 0, 00:27:09.481 "rw_mbytes_per_sec": 0, 00:27:09.481 "r_mbytes_per_sec": 0, 00:27:09.481 "w_mbytes_per_sec": 0 00:27:09.481 }, 00:27:09.481 "claimed": false, 00:27:09.481 "zoned": false, 00:27:09.481 "supported_io_types": { 00:27:09.481 "read": true, 00:27:09.481 "write": true, 00:27:09.481 "unmap": true, 00:27:09.481 "flush": true, 00:27:09.481 "reset": true, 00:27:09.481 "nvme_admin": false, 00:27:09.481 "nvme_io": false, 00:27:09.481 "nvme_io_md": false, 00:27:09.481 "write_zeroes": true, 00:27:09.481 "zcopy": true, 00:27:09.481 "get_zone_info": false, 00:27:09.481 "zone_management": false, 00:27:09.481 "zone_append": false, 00:27:09.481 "compare": false, 00:27:09.481 "compare_and_write": false, 00:27:09.481 "abort": true, 00:27:09.481 "seek_hole": false, 00:27:09.481 "seek_data": false, 00:27:09.481 "copy": true, 00:27:09.481 "nvme_iov_md": false 00:27:09.481 }, 00:27:09.481 "memory_domains": [ 00:27:09.481 { 00:27:09.481 "dma_device_id": "system", 00:27:09.481 "dma_device_type": 1 00:27:09.481 }, 00:27:09.481 { 00:27:09.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.481 "dma_device_type": 2 00:27:09.481 } 00:27:09.481 ], 00:27:09.481 "driver_specific": {} 00:27:09.481 } 00:27:09.481 ] 00:27:09.481 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:09.481 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:09.481 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:09.481 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:09.739 BaseBdev3 00:27:09.739 21:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:27:09.739 21:40:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:09.739 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:09.739 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:09.739 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:09.739 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:09.739 21:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:09.998 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:09.998 [ 00:27:09.998 { 00:27:09.998 "name": "BaseBdev3", 00:27:09.998 "aliases": [ 00:27:09.998 "3a411fb0-a918-4b5e-a1c7-eed5f81e9383" 00:27:09.998 ], 00:27:09.998 "product_name": "Malloc disk", 00:27:09.998 "block_size": 512, 00:27:09.998 "num_blocks": 65536, 00:27:09.998 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:09.998 "assigned_rate_limits": { 00:27:09.998 "rw_ios_per_sec": 0, 00:27:09.998 "rw_mbytes_per_sec": 0, 00:27:09.998 "r_mbytes_per_sec": 0, 00:27:09.998 "w_mbytes_per_sec": 0 00:27:09.998 }, 00:27:09.998 "claimed": false, 00:27:09.998 "zoned": false, 00:27:09.998 "supported_io_types": { 00:27:09.998 "read": true, 00:27:09.998 "write": true, 00:27:09.998 "unmap": true, 00:27:09.998 "flush": true, 00:27:09.998 "reset": true, 00:27:09.998 "nvme_admin": false, 00:27:09.998 "nvme_io": false, 00:27:09.998 "nvme_io_md": false, 00:27:09.998 "write_zeroes": true, 00:27:09.998 "zcopy": true, 00:27:09.998 "get_zone_info": false, 00:27:09.998 "zone_management": false, 00:27:09.998 "zone_append": false, 00:27:09.998 "compare": false, 00:27:09.998 "compare_and_write": false, 00:27:09.998 "abort": true, 00:27:09.998 "seek_hole": false, 00:27:09.998 "seek_data": false, 00:27:09.998 "copy": true, 00:27:09.998 "nvme_iov_md": false 00:27:09.998 }, 00:27:09.998 "memory_domains": [ 00:27:09.998 { 00:27:09.998 "dma_device_id": "system", 00:27:09.998 "dma_device_type": 1 00:27:09.998 }, 00:27:09.998 { 00:27:09.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.998 "dma_device_type": 2 00:27:09.998 } 00:27:09.998 ], 00:27:09.998 "driver_specific": {} 00:27:09.998 } 00:27:09.998 ] 00:27:09.998 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:09.998 21:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:09.998 21:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:09.999 21:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:10.258 BaseBdev4 00:27:10.258 21:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:27:10.258 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:27:10.258 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:10.258 21:40:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:27:10.258 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:10.258 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:10.258 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:10.515 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:10.808 [ 00:27:10.808 { 00:27:10.808 "name": "BaseBdev4", 00:27:10.808 "aliases": [ 00:27:10.808 "4fb70897-661f-43fa-877a-293bbd5c3a41" 00:27:10.808 ], 00:27:10.808 "product_name": "Malloc disk", 00:27:10.808 "block_size": 512, 00:27:10.808 "num_blocks": 65536, 00:27:10.808 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:10.808 "assigned_rate_limits": { 00:27:10.808 "rw_ios_per_sec": 0, 00:27:10.808 "rw_mbytes_per_sec": 0, 00:27:10.808 "r_mbytes_per_sec": 0, 00:27:10.808 "w_mbytes_per_sec": 0 00:27:10.808 }, 00:27:10.808 "claimed": false, 00:27:10.808 "zoned": false, 00:27:10.808 "supported_io_types": { 00:27:10.808 "read": true, 00:27:10.808 "write": true, 00:27:10.808 "unmap": true, 00:27:10.808 "flush": true, 00:27:10.808 "reset": true, 00:27:10.808 "nvme_admin": false, 00:27:10.808 "nvme_io": false, 00:27:10.808 "nvme_io_md": false, 00:27:10.808 "write_zeroes": true, 00:27:10.808 "zcopy": true, 00:27:10.808 "get_zone_info": false, 00:27:10.808 "zone_management": false, 00:27:10.808 "zone_append": false, 00:27:10.808 "compare": false, 00:27:10.808 "compare_and_write": false, 00:27:10.808 "abort": true, 00:27:10.808 "seek_hole": false, 00:27:10.808 "seek_data": false, 00:27:10.808 "copy": true, 00:27:10.808 "nvme_iov_md": false 00:27:10.808 }, 00:27:10.808 "memory_domains": [ 00:27:10.808 { 00:27:10.808 "dma_device_id": "system", 00:27:10.808 "dma_device_type": 1 00:27:10.808 }, 00:27:10.808 { 00:27:10.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.808 "dma_device_type": 2 00:27:10.808 } 00:27:10.808 ], 00:27:10.808 "driver_specific": {} 00:27:10.808 } 00:27:10.808 ] 00:27:10.808 21:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:10.808 21:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:10.808 21:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:10.808 21:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:10.808 [2024-07-15 21:40:44.138102] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:10.808 [2024-07-15 21:40:44.138213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:10.808 [2024-07-15 21:40:44.138259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:10.808 [2024-07-15 21:40:44.139857] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:10.808 [2024-07-15 21:40:44.139939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.808 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.809 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.809 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.809 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:11.067 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.067 "name": "Existed_Raid", 00:27:11.067 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:11.067 "strip_size_kb": 0, 00:27:11.067 "state": "configuring", 00:27:11.067 "raid_level": "raid1", 00:27:11.067 "superblock": true, 00:27:11.067 "num_base_bdevs": 4, 00:27:11.067 "num_base_bdevs_discovered": 3, 00:27:11.067 "num_base_bdevs_operational": 4, 00:27:11.067 "base_bdevs_list": [ 00:27:11.067 { 00:27:11.067 "name": "BaseBdev1", 00:27:11.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.067 "is_configured": false, 00:27:11.067 "data_offset": 0, 00:27:11.067 "data_size": 0 00:27:11.067 }, 00:27:11.067 { 00:27:11.067 "name": "BaseBdev2", 00:27:11.067 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:11.067 "is_configured": true, 00:27:11.067 "data_offset": 2048, 00:27:11.067 "data_size": 63488 00:27:11.067 }, 00:27:11.067 { 00:27:11.067 "name": "BaseBdev3", 00:27:11.067 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:11.067 "is_configured": true, 00:27:11.067 "data_offset": 2048, 00:27:11.067 "data_size": 63488 00:27:11.067 }, 00:27:11.067 { 00:27:11.067 "name": "BaseBdev4", 00:27:11.067 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:11.067 "is_configured": true, 00:27:11.067 "data_offset": 2048, 00:27:11.067 "data_size": 63488 00:27:11.067 } 00:27:11.067 ] 00:27:11.067 }' 00:27:11.067 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.067 21:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.634 21:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:11.893 [2024-07-15 21:40:45.132369] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
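The exchange above is the core of this phase of the test: bdev_raid_create -s -r raid1 is issued while BaseBdev1 does not exist yet, so Existed_Raid is created in the "configuring" state with only three of its four base bdevs discovered, and bdev_raid_remove_base_bdev BaseBdev2 then drops that count to two without the array ever going online. A minimal sketch of the same RPC flow, assuming the rpc.py/socket setup shown in this log and using only commands that appear here (the trailing jq projection is an illustrative addition, not part of bdev_raid.sh):

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # 32 MB malloc bdevs with 512-byte blocks (65536 blocks each), as reported above.
  $rpc_py bdev_malloc_create 32 512 -b BaseBdev2
  $rpc_py bdev_malloc_create 32 512 -b BaseBdev3
  $rpc_py bdev_malloc_create 32 512 -b BaseBdev4

  # Request a 4-disk raid1 with an on-disk superblock (-s). BaseBdev1 is still missing,
  # so the raid bdev is created but stays in the "configuring" state.
  $rpc_py bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # Removing a base bdev from a raid that is still configuring should leave the state
  # unchanged and only lower the number of discovered base bdevs.
  $rpc_py bdev_raid_remove_base_bdev BaseBdev2

  # Illustrative state check (verify_raid_bdev_state does the equivalent via jq):
  $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'

If this sketch matches the log, the final command prints "configuring 2", which is what the raid_bdev_info dump that follows reports.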
00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.893 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:12.151 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.151 "name": "Existed_Raid", 00:27:12.151 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:12.151 "strip_size_kb": 0, 00:27:12.151 "state": "configuring", 00:27:12.151 "raid_level": "raid1", 00:27:12.151 "superblock": true, 00:27:12.151 "num_base_bdevs": 4, 00:27:12.151 "num_base_bdevs_discovered": 2, 00:27:12.151 "num_base_bdevs_operational": 4, 00:27:12.151 "base_bdevs_list": [ 00:27:12.151 { 00:27:12.151 "name": "BaseBdev1", 00:27:12.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.151 "is_configured": false, 00:27:12.151 "data_offset": 0, 00:27:12.151 "data_size": 0 00:27:12.151 }, 00:27:12.151 { 00:27:12.151 "name": null, 00:27:12.151 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:12.151 "is_configured": false, 00:27:12.151 "data_offset": 2048, 00:27:12.151 "data_size": 63488 00:27:12.151 }, 00:27:12.151 { 00:27:12.152 "name": "BaseBdev3", 00:27:12.152 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:12.152 "is_configured": true, 00:27:12.152 "data_offset": 2048, 00:27:12.152 "data_size": 63488 00:27:12.152 }, 00:27:12.152 { 00:27:12.152 "name": "BaseBdev4", 00:27:12.152 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:12.152 "is_configured": true, 00:27:12.152 "data_offset": 2048, 00:27:12.152 "data_size": 63488 00:27:12.152 } 00:27:12.152 ] 00:27:12.152 }' 00:27:12.152 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.152 21:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.718 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.718 21:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:12.977 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:27:12.977 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:13.236 [2024-07-15 21:40:46.398029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:13.236 BaseBdev1 00:27:13.236 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:27:13.236 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:13.236 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:13.236 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:13.236 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:13.236 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:13.236 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:13.495 [ 00:27:13.495 { 00:27:13.495 "name": "BaseBdev1", 00:27:13.495 "aliases": [ 00:27:13.495 "f37416c1-6f8d-4da5-a559-1e5b0662d81e" 00:27:13.495 ], 00:27:13.495 "product_name": "Malloc disk", 00:27:13.495 "block_size": 512, 00:27:13.495 "num_blocks": 65536, 00:27:13.495 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:13.495 "assigned_rate_limits": { 00:27:13.495 "rw_ios_per_sec": 0, 00:27:13.495 "rw_mbytes_per_sec": 0, 00:27:13.495 "r_mbytes_per_sec": 0, 00:27:13.495 "w_mbytes_per_sec": 0 00:27:13.495 }, 00:27:13.495 "claimed": true, 00:27:13.495 "claim_type": "exclusive_write", 00:27:13.495 "zoned": false, 00:27:13.495 "supported_io_types": { 00:27:13.495 "read": true, 00:27:13.495 "write": true, 00:27:13.495 "unmap": true, 00:27:13.495 "flush": true, 00:27:13.495 "reset": true, 00:27:13.495 "nvme_admin": false, 00:27:13.495 "nvme_io": false, 00:27:13.495 "nvme_io_md": false, 00:27:13.495 "write_zeroes": true, 00:27:13.495 "zcopy": true, 00:27:13.495 "get_zone_info": false, 00:27:13.495 "zone_management": false, 00:27:13.495 "zone_append": false, 00:27:13.495 "compare": false, 00:27:13.495 "compare_and_write": false, 00:27:13.495 "abort": true, 00:27:13.495 "seek_hole": false, 00:27:13.495 "seek_data": false, 00:27:13.495 "copy": true, 00:27:13.495 "nvme_iov_md": false 00:27:13.495 }, 00:27:13.495 "memory_domains": [ 00:27:13.495 { 00:27:13.495 "dma_device_id": "system", 00:27:13.495 "dma_device_type": 1 00:27:13.495 }, 00:27:13.495 { 00:27:13.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.495 "dma_device_type": 2 00:27:13.495 } 00:27:13.495 ], 00:27:13.495 "driver_specific": {} 00:27:13.495 } 00:27:13.495 ] 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:13.495 21:40:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.495 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:13.754 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.754 "name": "Existed_Raid", 00:27:13.754 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:13.754 "strip_size_kb": 0, 00:27:13.754 "state": "configuring", 00:27:13.754 "raid_level": "raid1", 00:27:13.754 "superblock": true, 00:27:13.754 "num_base_bdevs": 4, 00:27:13.754 "num_base_bdevs_discovered": 3, 00:27:13.754 "num_base_bdevs_operational": 4, 00:27:13.754 "base_bdevs_list": [ 00:27:13.754 { 00:27:13.754 "name": "BaseBdev1", 00:27:13.754 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:13.754 "is_configured": true, 00:27:13.754 "data_offset": 2048, 00:27:13.754 "data_size": 63488 00:27:13.754 }, 00:27:13.754 { 00:27:13.754 "name": null, 00:27:13.754 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:13.754 "is_configured": false, 00:27:13.754 "data_offset": 2048, 00:27:13.754 "data_size": 63488 00:27:13.754 }, 00:27:13.754 { 00:27:13.754 "name": "BaseBdev3", 00:27:13.754 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:13.754 "is_configured": true, 00:27:13.754 "data_offset": 2048, 00:27:13.754 "data_size": 63488 00:27:13.754 }, 00:27:13.754 { 00:27:13.754 "name": "BaseBdev4", 00:27:13.754 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:13.754 "is_configured": true, 00:27:13.754 "data_offset": 2048, 00:27:13.754 "data_size": 63488 00:27:13.754 } 00:27:13.754 ] 00:27:13.754 }' 00:27:13.754 21:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.754 21:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.322 21:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.322 21:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:14.581 21:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:27:14.581 21:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:27:14.841 [2024-07-15 21:40:48.067406] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.841 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.100 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.100 "name": "Existed_Raid", 00:27:15.100 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:15.100 "strip_size_kb": 0, 00:27:15.100 "state": "configuring", 00:27:15.100 "raid_level": "raid1", 00:27:15.100 "superblock": true, 00:27:15.100 "num_base_bdevs": 4, 00:27:15.100 "num_base_bdevs_discovered": 2, 00:27:15.100 "num_base_bdevs_operational": 4, 00:27:15.100 "base_bdevs_list": [ 00:27:15.100 { 00:27:15.100 "name": "BaseBdev1", 00:27:15.100 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:15.100 "is_configured": true, 00:27:15.100 "data_offset": 2048, 00:27:15.100 "data_size": 63488 00:27:15.100 }, 00:27:15.100 { 00:27:15.100 "name": null, 00:27:15.100 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:15.100 "is_configured": false, 00:27:15.100 "data_offset": 2048, 00:27:15.100 "data_size": 63488 00:27:15.100 }, 00:27:15.100 { 00:27:15.100 "name": null, 00:27:15.100 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:15.100 "is_configured": false, 00:27:15.100 "data_offset": 2048, 00:27:15.100 "data_size": 63488 00:27:15.100 }, 00:27:15.100 { 00:27:15.100 "name": "BaseBdev4", 00:27:15.100 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:15.100 "is_configured": true, 00:27:15.100 "data_offset": 2048, 00:27:15.100 "data_size": 63488 00:27:15.100 } 00:27:15.100 ] 00:27:15.100 }' 00:27:15.100 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.100 21:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:15.668 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.668 21:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:15.927 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:27:15.927 21:40:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:15.927 [2024-07-15 21:40:49.289372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.187 "name": "Existed_Raid", 00:27:16.187 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:16.187 "strip_size_kb": 0, 00:27:16.187 "state": "configuring", 00:27:16.187 "raid_level": "raid1", 00:27:16.187 "superblock": true, 00:27:16.187 "num_base_bdevs": 4, 00:27:16.187 "num_base_bdevs_discovered": 3, 00:27:16.187 "num_base_bdevs_operational": 4, 00:27:16.187 "base_bdevs_list": [ 00:27:16.187 { 00:27:16.187 "name": "BaseBdev1", 00:27:16.187 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:16.187 "is_configured": true, 00:27:16.187 "data_offset": 2048, 00:27:16.187 "data_size": 63488 00:27:16.187 }, 00:27:16.187 { 00:27:16.187 "name": null, 00:27:16.187 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:16.187 "is_configured": false, 00:27:16.187 "data_offset": 2048, 00:27:16.187 "data_size": 63488 00:27:16.187 }, 00:27:16.187 { 00:27:16.187 "name": "BaseBdev3", 00:27:16.187 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:16.187 "is_configured": true, 00:27:16.187 "data_offset": 2048, 00:27:16.187 "data_size": 63488 00:27:16.187 }, 00:27:16.187 { 00:27:16.187 "name": "BaseBdev4", 00:27:16.187 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:16.187 "is_configured": true, 00:27:16.187 "data_offset": 2048, 00:27:16.187 "data_size": 63488 00:27:16.187 } 00:27:16.187 ] 00:27:16.187 }' 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.187 21:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:16.754 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.754 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:17.012 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:27:17.013 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:17.272 [2024-07-15 21:40:50.475389] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.272 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:17.532 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:17.532 "name": "Existed_Raid", 00:27:17.532 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:17.532 "strip_size_kb": 0, 00:27:17.532 "state": "configuring", 00:27:17.532 "raid_level": "raid1", 00:27:17.532 "superblock": true, 00:27:17.532 "num_base_bdevs": 4, 00:27:17.532 "num_base_bdevs_discovered": 2, 00:27:17.532 "num_base_bdevs_operational": 4, 00:27:17.532 "base_bdevs_list": [ 00:27:17.532 { 00:27:17.532 "name": null, 00:27:17.532 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:17.532 "is_configured": false, 00:27:17.532 "data_offset": 2048, 00:27:17.532 "data_size": 63488 00:27:17.532 }, 00:27:17.532 { 00:27:17.532 "name": null, 00:27:17.532 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:17.532 "is_configured": false, 00:27:17.532 "data_offset": 2048, 00:27:17.532 "data_size": 63488 00:27:17.532 }, 00:27:17.532 { 00:27:17.532 "name": "BaseBdev3", 00:27:17.532 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:17.532 "is_configured": true, 00:27:17.532 "data_offset": 2048, 00:27:17.532 "data_size": 63488 00:27:17.532 }, 00:27:17.532 { 00:27:17.532 "name": "BaseBdev4", 00:27:17.532 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:17.532 "is_configured": true, 00:27:17.532 "data_offset": 2048, 00:27:17.532 "data_size": 63488 00:27:17.532 } 
00:27:17.532 ] 00:27:17.532 }' 00:27:17.532 21:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:17.532 21:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:18.099 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:18.099 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.358 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:27:18.358 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:18.617 [2024-07-15 21:40:51.819915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.617 21:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:18.876 21:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.876 "name": "Existed_Raid", 00:27:18.876 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:18.876 "strip_size_kb": 0, 00:27:18.876 "state": "configuring", 00:27:18.876 "raid_level": "raid1", 00:27:18.876 "superblock": true, 00:27:18.876 "num_base_bdevs": 4, 00:27:18.876 "num_base_bdevs_discovered": 3, 00:27:18.876 "num_base_bdevs_operational": 4, 00:27:18.876 "base_bdevs_list": [ 00:27:18.876 { 00:27:18.876 "name": null, 00:27:18.876 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:18.876 "is_configured": false, 00:27:18.876 "data_offset": 2048, 00:27:18.876 "data_size": 63488 00:27:18.876 }, 00:27:18.876 { 00:27:18.876 "name": "BaseBdev2", 00:27:18.876 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:18.876 "is_configured": true, 00:27:18.876 "data_offset": 2048, 00:27:18.876 "data_size": 63488 00:27:18.876 }, 00:27:18.876 { 00:27:18.876 "name": "BaseBdev3", 00:27:18.876 "uuid": 
"3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:18.876 "is_configured": true, 00:27:18.876 "data_offset": 2048, 00:27:18.876 "data_size": 63488 00:27:18.876 }, 00:27:18.876 { 00:27:18.876 "name": "BaseBdev4", 00:27:18.876 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:18.876 "is_configured": true, 00:27:18.876 "data_offset": 2048, 00:27:18.876 "data_size": 63488 00:27:18.876 } 00:27:18.876 ] 00:27:18.876 }' 00:27:18.876 21:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.876 21:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:19.445 21:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.445 21:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:19.704 21:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:27:19.704 21:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:19.704 21:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.963 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f37416c1-6f8d-4da5-a559-1e5b0662d81e 00:27:19.963 [2024-07-15 21:40:53.321921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:19.963 [2024-07-15 21:40:53.322241] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:27:19.963 [2024-07-15 21:40:53.322289] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:19.963 [2024-07-15 21:40:53.322428] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:19.963 NewBaseBdev 00:27:19.963 [2024-07-15 21:40:53.322735] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:27:19.963 [2024-07-15 21:40:53.322782] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:27:19.963 [2024-07-15 21:40:53.322928] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.221 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:27:20.221 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:27:20.221 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:20.221 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:20.221 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:20.221 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:20.221 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:20.221 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:20.479 [ 00:27:20.479 { 00:27:20.479 "name": "NewBaseBdev", 00:27:20.479 "aliases": [ 00:27:20.479 "f37416c1-6f8d-4da5-a559-1e5b0662d81e" 00:27:20.479 ], 00:27:20.479 "product_name": "Malloc disk", 00:27:20.479 "block_size": 512, 00:27:20.479 "num_blocks": 65536, 00:27:20.479 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:20.479 "assigned_rate_limits": { 00:27:20.479 "rw_ios_per_sec": 0, 00:27:20.479 "rw_mbytes_per_sec": 0, 00:27:20.479 "r_mbytes_per_sec": 0, 00:27:20.479 "w_mbytes_per_sec": 0 00:27:20.479 }, 00:27:20.479 "claimed": true, 00:27:20.479 "claim_type": "exclusive_write", 00:27:20.479 "zoned": false, 00:27:20.479 "supported_io_types": { 00:27:20.479 "read": true, 00:27:20.479 "write": true, 00:27:20.479 "unmap": true, 00:27:20.479 "flush": true, 00:27:20.479 "reset": true, 00:27:20.479 "nvme_admin": false, 00:27:20.479 "nvme_io": false, 00:27:20.479 "nvme_io_md": false, 00:27:20.479 "write_zeroes": true, 00:27:20.479 "zcopy": true, 00:27:20.479 "get_zone_info": false, 00:27:20.479 "zone_management": false, 00:27:20.479 "zone_append": false, 00:27:20.479 "compare": false, 00:27:20.479 "compare_and_write": false, 00:27:20.479 "abort": true, 00:27:20.479 "seek_hole": false, 00:27:20.479 "seek_data": false, 00:27:20.479 "copy": true, 00:27:20.479 "nvme_iov_md": false 00:27:20.479 }, 00:27:20.479 "memory_domains": [ 00:27:20.479 { 00:27:20.479 "dma_device_id": "system", 00:27:20.479 "dma_device_type": 1 00:27:20.479 }, 00:27:20.479 { 00:27:20.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.480 "dma_device_type": 2 00:27:20.480 } 00:27:20.480 ], 00:27:20.480 "driver_specific": {} 00:27:20.480 } 00:27:20.480 ] 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.480 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:20.737 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:20.737 "name": "Existed_Raid", 00:27:20.737 "uuid": 
"5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:20.737 "strip_size_kb": 0, 00:27:20.737 "state": "online", 00:27:20.737 "raid_level": "raid1", 00:27:20.737 "superblock": true, 00:27:20.737 "num_base_bdevs": 4, 00:27:20.737 "num_base_bdevs_discovered": 4, 00:27:20.737 "num_base_bdevs_operational": 4, 00:27:20.737 "base_bdevs_list": [ 00:27:20.737 { 00:27:20.737 "name": "NewBaseBdev", 00:27:20.737 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:20.737 "is_configured": true, 00:27:20.737 "data_offset": 2048, 00:27:20.737 "data_size": 63488 00:27:20.737 }, 00:27:20.737 { 00:27:20.737 "name": "BaseBdev2", 00:27:20.737 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:20.737 "is_configured": true, 00:27:20.737 "data_offset": 2048, 00:27:20.737 "data_size": 63488 00:27:20.737 }, 00:27:20.737 { 00:27:20.737 "name": "BaseBdev3", 00:27:20.737 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:20.737 "is_configured": true, 00:27:20.737 "data_offset": 2048, 00:27:20.737 "data_size": 63488 00:27:20.737 }, 00:27:20.737 { 00:27:20.737 "name": "BaseBdev4", 00:27:20.737 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:20.737 "is_configured": true, 00:27:20.737 "data_offset": 2048, 00:27:20.737 "data_size": 63488 00:27:20.737 } 00:27:20.737 ] 00:27:20.737 }' 00:27:20.737 21:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.737 21:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.303 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:27:21.303 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:21.303 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:21.303 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:21.303 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:21.303 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:27:21.303 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:21.303 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:21.562 [2024-07-15 21:40:54.756017] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:21.562 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:21.562 "name": "Existed_Raid", 00:27:21.562 "aliases": [ 00:27:21.562 "5c82be01-39ac-4807-9fa2-2817335727ae" 00:27:21.562 ], 00:27:21.562 "product_name": "Raid Volume", 00:27:21.562 "block_size": 512, 00:27:21.562 "num_blocks": 63488, 00:27:21.562 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:21.562 "assigned_rate_limits": { 00:27:21.562 "rw_ios_per_sec": 0, 00:27:21.562 "rw_mbytes_per_sec": 0, 00:27:21.562 "r_mbytes_per_sec": 0, 00:27:21.562 "w_mbytes_per_sec": 0 00:27:21.562 }, 00:27:21.562 "claimed": false, 00:27:21.562 "zoned": false, 00:27:21.562 "supported_io_types": { 00:27:21.562 "read": true, 00:27:21.562 "write": true, 00:27:21.562 "unmap": false, 00:27:21.562 "flush": false, 00:27:21.562 "reset": true, 00:27:21.562 "nvme_admin": false, 00:27:21.562 "nvme_io": false, 00:27:21.562 "nvme_io_md": false, 00:27:21.562 
"write_zeroes": true, 00:27:21.562 "zcopy": false, 00:27:21.562 "get_zone_info": false, 00:27:21.562 "zone_management": false, 00:27:21.562 "zone_append": false, 00:27:21.562 "compare": false, 00:27:21.562 "compare_and_write": false, 00:27:21.562 "abort": false, 00:27:21.562 "seek_hole": false, 00:27:21.562 "seek_data": false, 00:27:21.562 "copy": false, 00:27:21.562 "nvme_iov_md": false 00:27:21.562 }, 00:27:21.562 "memory_domains": [ 00:27:21.562 { 00:27:21.562 "dma_device_id": "system", 00:27:21.562 "dma_device_type": 1 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.563 "dma_device_type": 2 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "dma_device_id": "system", 00:27:21.563 "dma_device_type": 1 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.563 "dma_device_type": 2 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "dma_device_id": "system", 00:27:21.563 "dma_device_type": 1 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.563 "dma_device_type": 2 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "dma_device_id": "system", 00:27:21.563 "dma_device_type": 1 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.563 "dma_device_type": 2 00:27:21.563 } 00:27:21.563 ], 00:27:21.563 "driver_specific": { 00:27:21.563 "raid": { 00:27:21.563 "uuid": "5c82be01-39ac-4807-9fa2-2817335727ae", 00:27:21.563 "strip_size_kb": 0, 00:27:21.563 "state": "online", 00:27:21.563 "raid_level": "raid1", 00:27:21.563 "superblock": true, 00:27:21.563 "num_base_bdevs": 4, 00:27:21.563 "num_base_bdevs_discovered": 4, 00:27:21.563 "num_base_bdevs_operational": 4, 00:27:21.563 "base_bdevs_list": [ 00:27:21.563 { 00:27:21.563 "name": "NewBaseBdev", 00:27:21.563 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:21.563 "is_configured": true, 00:27:21.563 "data_offset": 2048, 00:27:21.563 "data_size": 63488 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "name": "BaseBdev2", 00:27:21.563 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:21.563 "is_configured": true, 00:27:21.563 "data_offset": 2048, 00:27:21.563 "data_size": 63488 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "name": "BaseBdev3", 00:27:21.563 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:21.563 "is_configured": true, 00:27:21.563 "data_offset": 2048, 00:27:21.563 "data_size": 63488 00:27:21.563 }, 00:27:21.563 { 00:27:21.563 "name": "BaseBdev4", 00:27:21.563 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:21.563 "is_configured": true, 00:27:21.563 "data_offset": 2048, 00:27:21.563 "data_size": 63488 00:27:21.563 } 00:27:21.563 ] 00:27:21.563 } 00:27:21.563 } 00:27:21.563 }' 00:27:21.563 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:21.563 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:27:21.563 BaseBdev2 00:27:21.563 BaseBdev3 00:27:21.563 BaseBdev4' 00:27:21.563 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:21.563 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:27:21.563 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:21.822 21:40:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:21.822 "name": "NewBaseBdev", 00:27:21.822 "aliases": [ 00:27:21.822 "f37416c1-6f8d-4da5-a559-1e5b0662d81e" 00:27:21.822 ], 00:27:21.822 "product_name": "Malloc disk", 00:27:21.822 "block_size": 512, 00:27:21.822 "num_blocks": 65536, 00:27:21.822 "uuid": "f37416c1-6f8d-4da5-a559-1e5b0662d81e", 00:27:21.822 "assigned_rate_limits": { 00:27:21.822 "rw_ios_per_sec": 0, 00:27:21.822 "rw_mbytes_per_sec": 0, 00:27:21.822 "r_mbytes_per_sec": 0, 00:27:21.822 "w_mbytes_per_sec": 0 00:27:21.822 }, 00:27:21.822 "claimed": true, 00:27:21.822 "claim_type": "exclusive_write", 00:27:21.822 "zoned": false, 00:27:21.822 "supported_io_types": { 00:27:21.822 "read": true, 00:27:21.822 "write": true, 00:27:21.822 "unmap": true, 00:27:21.822 "flush": true, 00:27:21.822 "reset": true, 00:27:21.822 "nvme_admin": false, 00:27:21.822 "nvme_io": false, 00:27:21.822 "nvme_io_md": false, 00:27:21.822 "write_zeroes": true, 00:27:21.822 "zcopy": true, 00:27:21.822 "get_zone_info": false, 00:27:21.822 "zone_management": false, 00:27:21.822 "zone_append": false, 00:27:21.822 "compare": false, 00:27:21.822 "compare_and_write": false, 00:27:21.822 "abort": true, 00:27:21.822 "seek_hole": false, 00:27:21.822 "seek_data": false, 00:27:21.822 "copy": true, 00:27:21.822 "nvme_iov_md": false 00:27:21.822 }, 00:27:21.822 "memory_domains": [ 00:27:21.822 { 00:27:21.822 "dma_device_id": "system", 00:27:21.822 "dma_device_type": 1 00:27:21.822 }, 00:27:21.822 { 00:27:21.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.822 "dma_device_type": 2 00:27:21.822 } 00:27:21.822 ], 00:27:21.822 "driver_specific": {} 00:27:21.822 }' 00:27:21.822 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:21.822 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:21.822 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:21.822 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.080 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.080 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:22.080 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.080 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.080 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:22.080 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:22.080 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:22.338 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:22.338 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:22.338 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:22.338 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:22.338 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:22.338 "name": "BaseBdev2", 00:27:22.338 "aliases": [ 
00:27:22.338 "f2268f72-797a-495c-a2a6-6effaf17372c" 00:27:22.338 ], 00:27:22.338 "product_name": "Malloc disk", 00:27:22.338 "block_size": 512, 00:27:22.338 "num_blocks": 65536, 00:27:22.338 "uuid": "f2268f72-797a-495c-a2a6-6effaf17372c", 00:27:22.338 "assigned_rate_limits": { 00:27:22.338 "rw_ios_per_sec": 0, 00:27:22.338 "rw_mbytes_per_sec": 0, 00:27:22.338 "r_mbytes_per_sec": 0, 00:27:22.338 "w_mbytes_per_sec": 0 00:27:22.338 }, 00:27:22.338 "claimed": true, 00:27:22.338 "claim_type": "exclusive_write", 00:27:22.338 "zoned": false, 00:27:22.338 "supported_io_types": { 00:27:22.338 "read": true, 00:27:22.338 "write": true, 00:27:22.338 "unmap": true, 00:27:22.338 "flush": true, 00:27:22.338 "reset": true, 00:27:22.338 "nvme_admin": false, 00:27:22.338 "nvme_io": false, 00:27:22.338 "nvme_io_md": false, 00:27:22.338 "write_zeroes": true, 00:27:22.338 "zcopy": true, 00:27:22.338 "get_zone_info": false, 00:27:22.338 "zone_management": false, 00:27:22.338 "zone_append": false, 00:27:22.338 "compare": false, 00:27:22.338 "compare_and_write": false, 00:27:22.338 "abort": true, 00:27:22.338 "seek_hole": false, 00:27:22.338 "seek_data": false, 00:27:22.338 "copy": true, 00:27:22.338 "nvme_iov_md": false 00:27:22.338 }, 00:27:22.338 "memory_domains": [ 00:27:22.338 { 00:27:22.338 "dma_device_id": "system", 00:27:22.338 "dma_device_type": 1 00:27:22.338 }, 00:27:22.338 { 00:27:22.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.338 "dma_device_type": 2 00:27:22.338 } 00:27:22.338 ], 00:27:22.338 "driver_specific": {} 00:27:22.338 }' 00:27:22.338 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:22.338 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:22.636 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:22.636 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.636 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.637 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:22.637 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.637 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.637 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:22.897 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:22.897 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:22.897 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:22.897 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:22.897 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:22.897 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:23.156 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:23.156 "name": "BaseBdev3", 00:27:23.156 "aliases": [ 00:27:23.156 "3a411fb0-a918-4b5e-a1c7-eed5f81e9383" 00:27:23.156 ], 00:27:23.156 "product_name": "Malloc disk", 00:27:23.156 "block_size": 512, 
00:27:23.156 "num_blocks": 65536, 00:27:23.156 "uuid": "3a411fb0-a918-4b5e-a1c7-eed5f81e9383", 00:27:23.156 "assigned_rate_limits": { 00:27:23.156 "rw_ios_per_sec": 0, 00:27:23.156 "rw_mbytes_per_sec": 0, 00:27:23.156 "r_mbytes_per_sec": 0, 00:27:23.156 "w_mbytes_per_sec": 0 00:27:23.156 }, 00:27:23.156 "claimed": true, 00:27:23.156 "claim_type": "exclusive_write", 00:27:23.156 "zoned": false, 00:27:23.156 "supported_io_types": { 00:27:23.156 "read": true, 00:27:23.156 "write": true, 00:27:23.156 "unmap": true, 00:27:23.156 "flush": true, 00:27:23.156 "reset": true, 00:27:23.156 "nvme_admin": false, 00:27:23.156 "nvme_io": false, 00:27:23.156 "nvme_io_md": false, 00:27:23.156 "write_zeroes": true, 00:27:23.156 "zcopy": true, 00:27:23.156 "get_zone_info": false, 00:27:23.156 "zone_management": false, 00:27:23.156 "zone_append": false, 00:27:23.156 "compare": false, 00:27:23.156 "compare_and_write": false, 00:27:23.156 "abort": true, 00:27:23.156 "seek_hole": false, 00:27:23.156 "seek_data": false, 00:27:23.156 "copy": true, 00:27:23.156 "nvme_iov_md": false 00:27:23.156 }, 00:27:23.156 "memory_domains": [ 00:27:23.156 { 00:27:23.156 "dma_device_id": "system", 00:27:23.156 "dma_device_type": 1 00:27:23.156 }, 00:27:23.156 { 00:27:23.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.156 "dma_device_type": 2 00:27:23.156 } 00:27:23.156 ], 00:27:23.156 "driver_specific": {} 00:27:23.156 }' 00:27:23.156 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.156 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.156 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:23.156 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.156 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:23.415 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:23.673 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:23.673 "name": "BaseBdev4", 00:27:23.673 "aliases": [ 00:27:23.673 "4fb70897-661f-43fa-877a-293bbd5c3a41" 00:27:23.673 ], 00:27:23.673 "product_name": "Malloc disk", 00:27:23.673 "block_size": 512, 00:27:23.673 "num_blocks": 65536, 00:27:23.673 "uuid": "4fb70897-661f-43fa-877a-293bbd5c3a41", 00:27:23.673 "assigned_rate_limits": { 00:27:23.673 
"rw_ios_per_sec": 0, 00:27:23.673 "rw_mbytes_per_sec": 0, 00:27:23.673 "r_mbytes_per_sec": 0, 00:27:23.673 "w_mbytes_per_sec": 0 00:27:23.673 }, 00:27:23.673 "claimed": true, 00:27:23.673 "claim_type": "exclusive_write", 00:27:23.673 "zoned": false, 00:27:23.673 "supported_io_types": { 00:27:23.673 "read": true, 00:27:23.673 "write": true, 00:27:23.673 "unmap": true, 00:27:23.673 "flush": true, 00:27:23.673 "reset": true, 00:27:23.673 "nvme_admin": false, 00:27:23.673 "nvme_io": false, 00:27:23.673 "nvme_io_md": false, 00:27:23.673 "write_zeroes": true, 00:27:23.673 "zcopy": true, 00:27:23.673 "get_zone_info": false, 00:27:23.673 "zone_management": false, 00:27:23.673 "zone_append": false, 00:27:23.673 "compare": false, 00:27:23.673 "compare_and_write": false, 00:27:23.673 "abort": true, 00:27:23.673 "seek_hole": false, 00:27:23.673 "seek_data": false, 00:27:23.673 "copy": true, 00:27:23.673 "nvme_iov_md": false 00:27:23.673 }, 00:27:23.673 "memory_domains": [ 00:27:23.673 { 00:27:23.673 "dma_device_id": "system", 00:27:23.673 "dma_device_type": 1 00:27:23.673 }, 00:27:23.673 { 00:27:23.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.673 "dma_device_type": 2 00:27:23.673 } 00:27:23.673 ], 00:27:23.673 "driver_specific": {} 00:27:23.673 }' 00:27:23.673 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.673 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.932 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:23.932 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.932 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.932 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:23.932 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.932 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:24.190 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:24.191 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:24.191 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:24.191 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:24.191 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:24.450 [2024-07-15 21:40:57.598959] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:24.450 [2024-07-15 21:40:57.599059] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:24.450 [2024-07-15 21:40:57.599167] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:24.450 [2024-07-15 21:40:57.599426] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:24.450 [2024-07-15 21:40:57.599455] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 142905 00:27:24.450 21:40:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 142905 ']' 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 142905 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142905 00:27:24.450 killing process with pid 142905 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142905' 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 142905 00:27:24.450 [2024-07-15 21:40:57.640105] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:24.450 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 142905 00:27:24.709 [2024-07-15 21:40:58.034564] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:26.191 ************************************ 00:27:26.191 END TEST raid_state_function_test_sb 00:27:26.191 ************************************ 00:27:26.191 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:27:26.191 00:27:26.191 real 0m31.902s 00:27:26.191 user 0m58.852s 00:27:26.191 sys 0m4.175s 00:27:26.191 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:26.191 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.191 21:40:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:26.191 21:40:59 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:27:26.191 21:40:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:26.191 21:40:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.191 21:40:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:26.191 ************************************ 00:27:26.191 START TEST raid_superblock_test 00:27:26.191 ************************************ 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:26.191 
21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=144023 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 144023 /var/tmp/spdk-raid.sock 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 144023 ']' 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:26.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:26.191 21:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.191 [2024-07-15 21:40:59.446530] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
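For readers following the trace, the raid_superblock_test run that starts here drives a fairly short RPC sequence against the freshly launched bdev_svc target. The sketch below is a minimal, hedged reconstruction of that sequence as it appears in the trace that follows (malloc bdevs wrapped in passthru bdevs with fixed UUIDs, then a raid1 bdev created with a superblock and checked via bdev_raid_get_bdevs). It assumes an SPDK application is already listening on the /var/tmp/spdk-raid.sock socket used throughout this log; the harness helpers (waitforlisten, killprocess, xtrace bookkeeping) and error handling are intentionally omitted, so treat it as an illustration rather than the test script itself.

```bash
#!/usr/bin/env bash
# Minimal sketch of the RPC calls exercised by raid_superblock_test below.
# Assumes an SPDK target (e.g. test/app/bdev_svc) is already serving RPCs on
# /var/tmp/spdk-raid.sock; paths and commands are taken from the trace.
set -euo pipefail

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create four 32 MiB malloc bdevs (512-byte blocks) and wrap each one in a
# passthru bdev with a fixed UUID, mirroring pt1..pt4 in the trace.
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble a raid1 bdev with an on-disk superblock (-s), as the test does.
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# Confirm the array came up online with all four base bdevs discovered,
# using the same bdev_raid_get_bdevs + jq select pattern seen in the trace.
$rpc bdev_raid_get_bdevs all | \
    jq '.[] | select(.name == "raid_bdev1") |
        {state, num_base_bdevs_discovered, num_base_bdevs_operational}'
```

The jq projection at the end is an illustrative condensation; the test's verify_raid_bdev_state helper, as shown in the trace, keeps the full JSON object and asserts on its fields individually.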
00:27:26.191 [2024-07-15 21:40:59.446774] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144023 ] 00:27:26.450 [2024-07-15 21:40:59.606542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.450 [2024-07-15 21:40:59.804954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.710 [2024-07-15 21:41:00.005906] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:26.969 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:27.227 malloc1 00:27:27.228 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:27.486 [2024-07-15 21:41:00.711604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:27.486 [2024-07-15 21:41:00.711761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:27.486 [2024-07-15 21:41:00.711805] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:27.486 [2024-07-15 21:41:00.711864] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:27.486 [2024-07-15 21:41:00.714019] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:27.486 [2024-07-15 21:41:00.714099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:27.486 pt1 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:27.486 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:27.744 malloc2 00:27:27.744 21:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:28.003 [2024-07-15 21:41:01.177655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:28.003 [2024-07-15 21:41:01.177844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.003 [2024-07-15 21:41:01.177895] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:27:28.003 [2024-07-15 21:41:01.177961] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.003 [2024-07-15 21:41:01.179936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.003 [2024-07-15 21:41:01.180038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:28.003 pt2 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:28.003 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:28.262 malloc3 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:28.262 [2024-07-15 21:41:01.578568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:28.262 [2024-07-15 21:41:01.578757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.262 [2024-07-15 21:41:01.578801] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:27:28.262 [2024-07-15 21:41:01.578841] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.262 [2024-07-15 21:41:01.580820] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.262 [2024-07-15 21:41:01.580935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:28.262 pt3 00:27:28.262 
21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:28.262 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:27:28.520 malloc4 00:27:28.520 21:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:28.779 [2024-07-15 21:41:02.025941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:28.779 [2024-07-15 21:41:02.026115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.779 [2024-07-15 21:41:02.026160] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:28.779 [2024-07-15 21:41:02.026198] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.779 [2024-07-15 21:41:02.028223] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.779 [2024-07-15 21:41:02.028303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:28.779 pt4 00:27:28.779 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:28.779 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:28.779 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:27:29.039 [2024-07-15 21:41:02.225678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:29.039 [2024-07-15 21:41:02.227411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:29.039 [2024-07-15 21:41:02.227507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:29.039 [2024-07-15 21:41:02.227569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:29.039 [2024-07-15 21:41:02.227797] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:27:29.039 [2024-07-15 21:41:02.227835] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:29.039 [2024-07-15 21:41:02.228013] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:29.039 [2024-07-15 21:41:02.228384] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:27:29.039 [2024-07-15 21:41:02.228427] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:27:29.039 [2024-07-15 21:41:02.228593] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.039 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.297 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:29.297 "name": "raid_bdev1", 00:27:29.297 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:29.297 "strip_size_kb": 0, 00:27:29.297 "state": "online", 00:27:29.297 "raid_level": "raid1", 00:27:29.298 "superblock": true, 00:27:29.298 "num_base_bdevs": 4, 00:27:29.298 "num_base_bdevs_discovered": 4, 00:27:29.298 "num_base_bdevs_operational": 4, 00:27:29.298 "base_bdevs_list": [ 00:27:29.298 { 00:27:29.298 "name": "pt1", 00:27:29.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:29.298 "is_configured": true, 00:27:29.298 "data_offset": 2048, 00:27:29.298 "data_size": 63488 00:27:29.298 }, 00:27:29.298 { 00:27:29.298 "name": "pt2", 00:27:29.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:29.298 "is_configured": true, 00:27:29.298 "data_offset": 2048, 00:27:29.298 "data_size": 63488 00:27:29.298 }, 00:27:29.298 { 00:27:29.298 "name": "pt3", 00:27:29.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:29.298 "is_configured": true, 00:27:29.298 "data_offset": 2048, 00:27:29.298 "data_size": 63488 00:27:29.298 }, 00:27:29.298 { 00:27:29.298 "name": "pt4", 00:27:29.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:29.298 "is_configured": true, 00:27:29.298 "data_offset": 2048, 00:27:29.298 "data_size": 63488 00:27:29.298 } 00:27:29.298 ] 00:27:29.298 }' 00:27:29.298 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:29.298 21:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.864 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:29.864 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:29.864 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:29.864 
21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:29.864 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:29.864 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:29.864 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:29.864 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:30.127 [2024-07-15 21:41:03.272159] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:30.127 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:30.127 "name": "raid_bdev1", 00:27:30.127 "aliases": [ 00:27:30.127 "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8" 00:27:30.127 ], 00:27:30.127 "product_name": "Raid Volume", 00:27:30.127 "block_size": 512, 00:27:30.127 "num_blocks": 63488, 00:27:30.127 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:30.127 "assigned_rate_limits": { 00:27:30.127 "rw_ios_per_sec": 0, 00:27:30.127 "rw_mbytes_per_sec": 0, 00:27:30.127 "r_mbytes_per_sec": 0, 00:27:30.127 "w_mbytes_per_sec": 0 00:27:30.127 }, 00:27:30.127 "claimed": false, 00:27:30.127 "zoned": false, 00:27:30.127 "supported_io_types": { 00:27:30.127 "read": true, 00:27:30.127 "write": true, 00:27:30.127 "unmap": false, 00:27:30.127 "flush": false, 00:27:30.127 "reset": true, 00:27:30.127 "nvme_admin": false, 00:27:30.127 "nvme_io": false, 00:27:30.127 "nvme_io_md": false, 00:27:30.127 "write_zeroes": true, 00:27:30.127 "zcopy": false, 00:27:30.127 "get_zone_info": false, 00:27:30.127 "zone_management": false, 00:27:30.127 "zone_append": false, 00:27:30.127 "compare": false, 00:27:30.127 "compare_and_write": false, 00:27:30.127 "abort": false, 00:27:30.127 "seek_hole": false, 00:27:30.127 "seek_data": false, 00:27:30.127 "copy": false, 00:27:30.127 "nvme_iov_md": false 00:27:30.127 }, 00:27:30.127 "memory_domains": [ 00:27:30.127 { 00:27:30.127 "dma_device_id": "system", 00:27:30.127 "dma_device_type": 1 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.127 "dma_device_type": 2 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "dma_device_id": "system", 00:27:30.127 "dma_device_type": 1 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.127 "dma_device_type": 2 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "dma_device_id": "system", 00:27:30.127 "dma_device_type": 1 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.127 "dma_device_type": 2 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "dma_device_id": "system", 00:27:30.127 "dma_device_type": 1 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.127 "dma_device_type": 2 00:27:30.127 } 00:27:30.127 ], 00:27:30.127 "driver_specific": { 00:27:30.127 "raid": { 00:27:30.127 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:30.127 "strip_size_kb": 0, 00:27:30.127 "state": "online", 00:27:30.127 "raid_level": "raid1", 00:27:30.127 "superblock": true, 00:27:30.127 "num_base_bdevs": 4, 00:27:30.127 "num_base_bdevs_discovered": 4, 00:27:30.127 "num_base_bdevs_operational": 4, 00:27:30.127 "base_bdevs_list": [ 00:27:30.127 { 00:27:30.127 "name": "pt1", 00:27:30.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:30.127 "is_configured": true, 00:27:30.127 
"data_offset": 2048, 00:27:30.127 "data_size": 63488 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "name": "pt2", 00:27:30.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:30.127 "is_configured": true, 00:27:30.127 "data_offset": 2048, 00:27:30.127 "data_size": 63488 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "name": "pt3", 00:27:30.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:30.127 "is_configured": true, 00:27:30.127 "data_offset": 2048, 00:27:30.127 "data_size": 63488 00:27:30.127 }, 00:27:30.127 { 00:27:30.127 "name": "pt4", 00:27:30.127 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:30.127 "is_configured": true, 00:27:30.127 "data_offset": 2048, 00:27:30.127 "data_size": 63488 00:27:30.127 } 00:27:30.127 ] 00:27:30.127 } 00:27:30.127 } 00:27:30.127 }' 00:27:30.127 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:30.127 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:30.127 pt2 00:27:30.127 pt3 00:27:30.127 pt4' 00:27:30.127 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:30.127 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:30.127 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:30.384 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:30.384 "name": "pt1", 00:27:30.384 "aliases": [ 00:27:30.385 "00000000-0000-0000-0000-000000000001" 00:27:30.385 ], 00:27:30.385 "product_name": "passthru", 00:27:30.385 "block_size": 512, 00:27:30.385 "num_blocks": 65536, 00:27:30.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:30.385 "assigned_rate_limits": { 00:27:30.385 "rw_ios_per_sec": 0, 00:27:30.385 "rw_mbytes_per_sec": 0, 00:27:30.385 "r_mbytes_per_sec": 0, 00:27:30.385 "w_mbytes_per_sec": 0 00:27:30.385 }, 00:27:30.385 "claimed": true, 00:27:30.385 "claim_type": "exclusive_write", 00:27:30.385 "zoned": false, 00:27:30.385 "supported_io_types": { 00:27:30.385 "read": true, 00:27:30.385 "write": true, 00:27:30.385 "unmap": true, 00:27:30.385 "flush": true, 00:27:30.385 "reset": true, 00:27:30.385 "nvme_admin": false, 00:27:30.385 "nvme_io": false, 00:27:30.385 "nvme_io_md": false, 00:27:30.385 "write_zeroes": true, 00:27:30.385 "zcopy": true, 00:27:30.385 "get_zone_info": false, 00:27:30.385 "zone_management": false, 00:27:30.385 "zone_append": false, 00:27:30.385 "compare": false, 00:27:30.385 "compare_and_write": false, 00:27:30.385 "abort": true, 00:27:30.385 "seek_hole": false, 00:27:30.385 "seek_data": false, 00:27:30.385 "copy": true, 00:27:30.385 "nvme_iov_md": false 00:27:30.385 }, 00:27:30.385 "memory_domains": [ 00:27:30.385 { 00:27:30.385 "dma_device_id": "system", 00:27:30.385 "dma_device_type": 1 00:27:30.385 }, 00:27:30.385 { 00:27:30.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.385 "dma_device_type": 2 00:27:30.385 } 00:27:30.385 ], 00:27:30.385 "driver_specific": { 00:27:30.385 "passthru": { 00:27:30.385 "name": "pt1", 00:27:30.385 "base_bdev_name": "malloc1" 00:27:30.385 } 00:27:30.385 } 00:27:30.385 }' 00:27:30.385 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:30.385 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:30.385 21:41:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:30.385 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:30.385 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:30.385 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:30.385 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:30.643 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:30.643 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:30.643 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:30.643 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:30.643 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:30.643 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:30.643 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:30.643 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:30.903 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:30.903 "name": "pt2", 00:27:30.903 "aliases": [ 00:27:30.903 "00000000-0000-0000-0000-000000000002" 00:27:30.903 ], 00:27:30.903 "product_name": "passthru", 00:27:30.903 "block_size": 512, 00:27:30.903 "num_blocks": 65536, 00:27:30.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:30.903 "assigned_rate_limits": { 00:27:30.903 "rw_ios_per_sec": 0, 00:27:30.903 "rw_mbytes_per_sec": 0, 00:27:30.903 "r_mbytes_per_sec": 0, 00:27:30.903 "w_mbytes_per_sec": 0 00:27:30.903 }, 00:27:30.903 "claimed": true, 00:27:30.903 "claim_type": "exclusive_write", 00:27:30.903 "zoned": false, 00:27:30.903 "supported_io_types": { 00:27:30.903 "read": true, 00:27:30.903 "write": true, 00:27:30.903 "unmap": true, 00:27:30.903 "flush": true, 00:27:30.903 "reset": true, 00:27:30.903 "nvme_admin": false, 00:27:30.903 "nvme_io": false, 00:27:30.903 "nvme_io_md": false, 00:27:30.903 "write_zeroes": true, 00:27:30.903 "zcopy": true, 00:27:30.903 "get_zone_info": false, 00:27:30.903 "zone_management": false, 00:27:30.903 "zone_append": false, 00:27:30.903 "compare": false, 00:27:30.903 "compare_and_write": false, 00:27:30.903 "abort": true, 00:27:30.903 "seek_hole": false, 00:27:30.903 "seek_data": false, 00:27:30.903 "copy": true, 00:27:30.903 "nvme_iov_md": false 00:27:30.903 }, 00:27:30.903 "memory_domains": [ 00:27:30.903 { 00:27:30.903 "dma_device_id": "system", 00:27:30.903 "dma_device_type": 1 00:27:30.903 }, 00:27:30.903 { 00:27:30.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.903 "dma_device_type": 2 00:27:30.903 } 00:27:30.903 ], 00:27:30.903 "driver_specific": { 00:27:30.903 "passthru": { 00:27:30.903 "name": "pt2", 00:27:30.903 "base_bdev_name": "malloc2" 00:27:30.903 } 00:27:30.903 } 00:27:30.903 }' 00:27:30.903 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:30.903 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:30.903 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:30.903 21:41:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:31.161 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:31.161 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:31.161 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:31.161 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:31.161 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:31.161 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:31.420 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:31.420 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:31.420 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:31.420 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:31.420 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:31.420 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:31.420 "name": "pt3", 00:27:31.420 "aliases": [ 00:27:31.420 "00000000-0000-0000-0000-000000000003" 00:27:31.420 ], 00:27:31.420 "product_name": "passthru", 00:27:31.420 "block_size": 512, 00:27:31.420 "num_blocks": 65536, 00:27:31.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:31.420 "assigned_rate_limits": { 00:27:31.420 "rw_ios_per_sec": 0, 00:27:31.420 "rw_mbytes_per_sec": 0, 00:27:31.420 "r_mbytes_per_sec": 0, 00:27:31.420 "w_mbytes_per_sec": 0 00:27:31.420 }, 00:27:31.420 "claimed": true, 00:27:31.420 "claim_type": "exclusive_write", 00:27:31.420 "zoned": false, 00:27:31.420 "supported_io_types": { 00:27:31.420 "read": true, 00:27:31.420 "write": true, 00:27:31.420 "unmap": true, 00:27:31.420 "flush": true, 00:27:31.420 "reset": true, 00:27:31.420 "nvme_admin": false, 00:27:31.420 "nvme_io": false, 00:27:31.420 "nvme_io_md": false, 00:27:31.420 "write_zeroes": true, 00:27:31.420 "zcopy": true, 00:27:31.420 "get_zone_info": false, 00:27:31.420 "zone_management": false, 00:27:31.420 "zone_append": false, 00:27:31.420 "compare": false, 00:27:31.420 "compare_and_write": false, 00:27:31.420 "abort": true, 00:27:31.420 "seek_hole": false, 00:27:31.420 "seek_data": false, 00:27:31.420 "copy": true, 00:27:31.420 "nvme_iov_md": false 00:27:31.420 }, 00:27:31.420 "memory_domains": [ 00:27:31.420 { 00:27:31.420 "dma_device_id": "system", 00:27:31.420 "dma_device_type": 1 00:27:31.420 }, 00:27:31.420 { 00:27:31.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.420 "dma_device_type": 2 00:27:31.420 } 00:27:31.420 ], 00:27:31.420 "driver_specific": { 00:27:31.420 "passthru": { 00:27:31.420 "name": "pt3", 00:27:31.420 "base_bdev_name": "malloc3" 00:27:31.420 } 00:27:31.420 } 00:27:31.420 }' 00:27:31.420 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:31.679 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:31.679 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:31.680 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:31.680 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:31.680 
21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:31.680 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:31.680 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:31.939 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:31.939 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:31.939 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:31.939 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:31.939 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:31.939 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:31.939 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:32.198 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:32.198 "name": "pt4", 00:27:32.198 "aliases": [ 00:27:32.198 "00000000-0000-0000-0000-000000000004" 00:27:32.198 ], 00:27:32.198 "product_name": "passthru", 00:27:32.198 "block_size": 512, 00:27:32.198 "num_blocks": 65536, 00:27:32.198 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:32.198 "assigned_rate_limits": { 00:27:32.198 "rw_ios_per_sec": 0, 00:27:32.198 "rw_mbytes_per_sec": 0, 00:27:32.198 "r_mbytes_per_sec": 0, 00:27:32.198 "w_mbytes_per_sec": 0 00:27:32.198 }, 00:27:32.198 "claimed": true, 00:27:32.198 "claim_type": "exclusive_write", 00:27:32.198 "zoned": false, 00:27:32.198 "supported_io_types": { 00:27:32.198 "read": true, 00:27:32.198 "write": true, 00:27:32.198 "unmap": true, 00:27:32.198 "flush": true, 00:27:32.198 "reset": true, 00:27:32.198 "nvme_admin": false, 00:27:32.198 "nvme_io": false, 00:27:32.198 "nvme_io_md": false, 00:27:32.198 "write_zeroes": true, 00:27:32.198 "zcopy": true, 00:27:32.198 "get_zone_info": false, 00:27:32.198 "zone_management": false, 00:27:32.198 "zone_append": false, 00:27:32.198 "compare": false, 00:27:32.198 "compare_and_write": false, 00:27:32.198 "abort": true, 00:27:32.198 "seek_hole": false, 00:27:32.198 "seek_data": false, 00:27:32.198 "copy": true, 00:27:32.198 "nvme_iov_md": false 00:27:32.198 }, 00:27:32.198 "memory_domains": [ 00:27:32.198 { 00:27:32.198 "dma_device_id": "system", 00:27:32.198 "dma_device_type": 1 00:27:32.198 }, 00:27:32.198 { 00:27:32.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:32.198 "dma_device_type": 2 00:27:32.198 } 00:27:32.198 ], 00:27:32.198 "driver_specific": { 00:27:32.198 "passthru": { 00:27:32.198 "name": "pt4", 00:27:32.198 "base_bdev_name": "malloc4" 00:27:32.198 } 00:27:32.198 } 00:27:32.198 }' 00:27:32.198 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:32.198 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:32.198 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:32.198 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:32.458 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:32.458 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:32.458 21:41:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:32.458 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:32.458 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:32.458 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:32.458 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:32.717 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:32.717 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:32.717 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:32.717 [2024-07-15 21:41:06.043798] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:32.717 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8 00:27:32.717 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8 ']' 00:27:32.717 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:32.976 [2024-07-15 21:41:06.243201] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:32.976 [2024-07-15 21:41:06.243309] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:32.976 [2024-07-15 21:41:06.243390] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:32.976 [2024-07-15 21:41:06.243476] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:32.976 [2024-07-15 21:41:06.243493] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:27:32.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:32.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.234 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:33.234 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:33.234 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:33.234 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:33.494 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:33.494 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:33.752 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:33.752 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:33.752 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:33.752 21:41:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:34.010 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:34.010 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:34.280 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:34.280 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:34.280 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:27:34.280 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:34.280 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:34.280 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.280 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:34.280 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.281 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:34.281 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.281 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:34.281 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:34.281 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:34.281 [2024-07-15 21:41:07.648790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:34.281 [2024-07-15 21:41:07.650483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:34.281 [2024-07-15 21:41:07.650592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:34.281 [2024-07-15 21:41:07.650642] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:34.281 [2024-07-15 21:41:07.650703] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:34.281 [2024-07-15 21:41:07.650820] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:34.281 [2024-07-15 21:41:07.650908] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:34.281 [2024-07-15 21:41:07.650948] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:27:34.281 [2024-07-15 21:41:07.650993] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:34.281 [2024-07-15 21:41:07.651021] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:27:34.281 request: 00:27:34.281 { 00:27:34.281 "name": "raid_bdev1", 00:27:34.281 "raid_level": "raid1", 00:27:34.281 "base_bdevs": [ 00:27:34.281 "malloc1", 00:27:34.281 "malloc2", 00:27:34.281 "malloc3", 00:27:34.281 "malloc4" 00:27:34.281 ], 00:27:34.281 "superblock": false, 00:27:34.281 "method": "bdev_raid_create", 00:27:34.281 "req_id": 1 00:27:34.281 } 00:27:34.281 Got JSON-RPC error response 00:27:34.281 response: 00:27:34.281 { 00:27:34.281 "code": -17, 00:27:34.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:34.281 } 00:27:34.540 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:27:34.540 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.540 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.540 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.540 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.540 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:34.799 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:34.799 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:34.799 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:34.799 [2024-07-15 21:41:08.128131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:34.799 [2024-07-15 21:41:08.128301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.799 [2024-07-15 21:41:08.128340] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:34.799 [2024-07-15 21:41:08.128390] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.799 [2024-07-15 21:41:08.130482] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.799 [2024-07-15 21:41:08.130561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:34.799 [2024-07-15 21:41:08.130679] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:34.799 [2024-07-15 21:41:08.130778] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:34.799 pt1 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:34.799 21:41:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.799 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.058 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:35.058 "name": "raid_bdev1", 00:27:35.058 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:35.058 "strip_size_kb": 0, 00:27:35.058 "state": "configuring", 00:27:35.058 "raid_level": "raid1", 00:27:35.058 "superblock": true, 00:27:35.058 "num_base_bdevs": 4, 00:27:35.058 "num_base_bdevs_discovered": 1, 00:27:35.058 "num_base_bdevs_operational": 4, 00:27:35.058 "base_bdevs_list": [ 00:27:35.058 { 00:27:35.058 "name": "pt1", 00:27:35.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:35.058 "is_configured": true, 00:27:35.058 "data_offset": 2048, 00:27:35.058 "data_size": 63488 00:27:35.058 }, 00:27:35.058 { 00:27:35.058 "name": null, 00:27:35.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:35.058 "is_configured": false, 00:27:35.058 "data_offset": 2048, 00:27:35.058 "data_size": 63488 00:27:35.058 }, 00:27:35.058 { 00:27:35.058 "name": null, 00:27:35.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:35.058 "is_configured": false, 00:27:35.058 "data_offset": 2048, 00:27:35.058 "data_size": 63488 00:27:35.058 }, 00:27:35.058 { 00:27:35.058 "name": null, 00:27:35.058 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:35.058 "is_configured": false, 00:27:35.058 "data_offset": 2048, 00:27:35.058 "data_size": 63488 00:27:35.058 } 00:27:35.058 ] 00:27:35.058 }' 00:27:35.058 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:35.058 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.627 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:27:35.627 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:35.886 [2024-07-15 21:41:09.174376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:35.886 [2024-07-15 21:41:09.174531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.886 [2024-07-15 21:41:09.174584] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:35.886 [2024-07-15 21:41:09.174635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.886 [2024-07-15 21:41:09.175139] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.886 [2024-07-15 21:41:09.175205] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:27:35.886 [2024-07-15 21:41:09.175361] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:35.886 [2024-07-15 21:41:09.175416] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:35.886 pt2 00:27:35.886 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:36.146 [2024-07-15 21:41:09.358155] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.146 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.405 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:36.405 "name": "raid_bdev1", 00:27:36.405 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:36.405 "strip_size_kb": 0, 00:27:36.405 "state": "configuring", 00:27:36.405 "raid_level": "raid1", 00:27:36.405 "superblock": true, 00:27:36.405 "num_base_bdevs": 4, 00:27:36.405 "num_base_bdevs_discovered": 1, 00:27:36.405 "num_base_bdevs_operational": 4, 00:27:36.405 "base_bdevs_list": [ 00:27:36.405 { 00:27:36.405 "name": "pt1", 00:27:36.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:36.405 "is_configured": true, 00:27:36.405 "data_offset": 2048, 00:27:36.405 "data_size": 63488 00:27:36.405 }, 00:27:36.405 { 00:27:36.405 "name": null, 00:27:36.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:36.405 "is_configured": false, 00:27:36.405 "data_offset": 2048, 00:27:36.405 "data_size": 63488 00:27:36.405 }, 00:27:36.405 { 00:27:36.405 "name": null, 00:27:36.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:36.405 "is_configured": false, 00:27:36.405 "data_offset": 2048, 00:27:36.405 "data_size": 63488 00:27:36.405 }, 00:27:36.405 { 00:27:36.405 "name": null, 00:27:36.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:36.405 "is_configured": false, 00:27:36.405 "data_offset": 2048, 00:27:36.405 "data_size": 63488 00:27:36.405 } 00:27:36.405 ] 00:27:36.405 }' 00:27:36.405 21:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:36.405 21:41:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.973 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:36.973 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:36.973 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:37.232 [2024-07-15 21:41:10.348328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:37.232 [2024-07-15 21:41:10.348476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.232 [2024-07-15 21:41:10.348527] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:37.232 [2024-07-15 21:41:10.348591] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.232 [2024-07-15 21:41:10.349074] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.232 [2024-07-15 21:41:10.349150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:37.232 [2024-07-15 21:41:10.349304] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:37.232 [2024-07-15 21:41:10.349361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:37.232 pt2 00:27:37.232 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:37.232 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:37.232 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:37.232 [2024-07-15 21:41:10.540020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:37.232 [2024-07-15 21:41:10.540165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.232 [2024-07-15 21:41:10.540218] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:37.232 [2024-07-15 21:41:10.540270] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.232 [2024-07-15 21:41:10.540693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.232 [2024-07-15 21:41:10.540756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:37.232 [2024-07-15 21:41:10.540874] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:37.232 [2024-07-15 21:41:10.540918] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:37.232 pt3 00:27:37.232 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:37.232 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:37.232 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:37.492 [2024-07-15 21:41:10.751628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:37.492 [2024-07-15 21:41:10.751751] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.492 [2024-07-15 21:41:10.751787] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:27:37.492 [2024-07-15 21:41:10.751842] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.492 [2024-07-15 21:41:10.752288] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.492 [2024-07-15 21:41:10.752350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:37.492 [2024-07-15 21:41:10.752468] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:37.492 [2024-07-15 21:41:10.752508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:37.492 [2024-07-15 21:41:10.752640] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:27:37.492 [2024-07-15 21:41:10.752668] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:37.492 [2024-07-15 21:41:10.752777] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:27:37.492 [2024-07-15 21:41:10.753072] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:27:37.492 [2024-07-15 21:41:10.753108] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:27:37.492 [2024-07-15 21:41:10.753250] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:37.492 pt4 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.492 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.758 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:37.758 "name": "raid_bdev1", 00:27:37.758 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:37.758 "strip_size_kb": 0, 00:27:37.758 "state": "online", 00:27:37.758 "raid_level": "raid1", 00:27:37.758 "superblock": true, 00:27:37.758 
"num_base_bdevs": 4, 00:27:37.758 "num_base_bdevs_discovered": 4, 00:27:37.758 "num_base_bdevs_operational": 4, 00:27:37.758 "base_bdevs_list": [ 00:27:37.758 { 00:27:37.758 "name": "pt1", 00:27:37.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:37.758 "is_configured": true, 00:27:37.758 "data_offset": 2048, 00:27:37.758 "data_size": 63488 00:27:37.758 }, 00:27:37.758 { 00:27:37.758 "name": "pt2", 00:27:37.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:37.758 "is_configured": true, 00:27:37.758 "data_offset": 2048, 00:27:37.758 "data_size": 63488 00:27:37.758 }, 00:27:37.758 { 00:27:37.758 "name": "pt3", 00:27:37.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:37.758 "is_configured": true, 00:27:37.758 "data_offset": 2048, 00:27:37.758 "data_size": 63488 00:27:37.758 }, 00:27:37.758 { 00:27:37.758 "name": "pt4", 00:27:37.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:37.758 "is_configured": true, 00:27:37.758 "data_offset": 2048, 00:27:37.758 "data_size": 63488 00:27:37.758 } 00:27:37.758 ] 00:27:37.758 }' 00:27:37.758 21:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:37.758 21:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:38.327 [2024-07-15 21:41:11.678327] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:38.327 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:38.327 "name": "raid_bdev1", 00:27:38.327 "aliases": [ 00:27:38.327 "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8" 00:27:38.327 ], 00:27:38.327 "product_name": "Raid Volume", 00:27:38.327 "block_size": 512, 00:27:38.327 "num_blocks": 63488, 00:27:38.327 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:38.327 "assigned_rate_limits": { 00:27:38.327 "rw_ios_per_sec": 0, 00:27:38.327 "rw_mbytes_per_sec": 0, 00:27:38.327 "r_mbytes_per_sec": 0, 00:27:38.327 "w_mbytes_per_sec": 0 00:27:38.327 }, 00:27:38.327 "claimed": false, 00:27:38.327 "zoned": false, 00:27:38.327 "supported_io_types": { 00:27:38.327 "read": true, 00:27:38.327 "write": true, 00:27:38.327 "unmap": false, 00:27:38.327 "flush": false, 00:27:38.327 "reset": true, 00:27:38.327 "nvme_admin": false, 00:27:38.327 "nvme_io": false, 00:27:38.327 "nvme_io_md": false, 00:27:38.327 "write_zeroes": true, 00:27:38.327 "zcopy": false, 00:27:38.327 "get_zone_info": false, 00:27:38.327 "zone_management": false, 00:27:38.327 "zone_append": false, 00:27:38.327 "compare": false, 00:27:38.327 "compare_and_write": false, 00:27:38.327 "abort": false, 00:27:38.327 "seek_hole": false, 
00:27:38.327 "seek_data": false, 00:27:38.327 "copy": false, 00:27:38.327 "nvme_iov_md": false 00:27:38.327 }, 00:27:38.327 "memory_domains": [ 00:27:38.327 { 00:27:38.327 "dma_device_id": "system", 00:27:38.327 "dma_device_type": 1 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.327 "dma_device_type": 2 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "dma_device_id": "system", 00:27:38.327 "dma_device_type": 1 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.327 "dma_device_type": 2 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "dma_device_id": "system", 00:27:38.327 "dma_device_type": 1 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.327 "dma_device_type": 2 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "dma_device_id": "system", 00:27:38.327 "dma_device_type": 1 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.327 "dma_device_type": 2 00:27:38.327 } 00:27:38.327 ], 00:27:38.327 "driver_specific": { 00:27:38.327 "raid": { 00:27:38.327 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:38.327 "strip_size_kb": 0, 00:27:38.327 "state": "online", 00:27:38.327 "raid_level": "raid1", 00:27:38.327 "superblock": true, 00:27:38.327 "num_base_bdevs": 4, 00:27:38.327 "num_base_bdevs_discovered": 4, 00:27:38.327 "num_base_bdevs_operational": 4, 00:27:38.327 "base_bdevs_list": [ 00:27:38.327 { 00:27:38.327 "name": "pt1", 00:27:38.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:38.327 "is_configured": true, 00:27:38.327 "data_offset": 2048, 00:27:38.327 "data_size": 63488 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "name": "pt2", 00:27:38.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:38.327 "is_configured": true, 00:27:38.327 "data_offset": 2048, 00:27:38.327 "data_size": 63488 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "name": "pt3", 00:27:38.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:38.327 "is_configured": true, 00:27:38.327 "data_offset": 2048, 00:27:38.327 "data_size": 63488 00:27:38.327 }, 00:27:38.327 { 00:27:38.327 "name": "pt4", 00:27:38.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:38.327 "is_configured": true, 00:27:38.327 "data_offset": 2048, 00:27:38.327 "data_size": 63488 00:27:38.328 } 00:27:38.328 ] 00:27:38.328 } 00:27:38.328 } 00:27:38.328 }' 00:27:38.328 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:38.587 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:38.587 pt2 00:27:38.587 pt3 00:27:38.587 pt4' 00:27:38.587 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:38.587 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:38.587 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:38.587 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:38.587 "name": "pt1", 00:27:38.587 "aliases": [ 00:27:38.587 "00000000-0000-0000-0000-000000000001" 00:27:38.587 ], 00:27:38.587 "product_name": "passthru", 00:27:38.587 "block_size": 512, 00:27:38.587 "num_blocks": 65536, 00:27:38.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:38.587 "assigned_rate_limits": { 
00:27:38.587 "rw_ios_per_sec": 0, 00:27:38.587 "rw_mbytes_per_sec": 0, 00:27:38.587 "r_mbytes_per_sec": 0, 00:27:38.587 "w_mbytes_per_sec": 0 00:27:38.587 }, 00:27:38.587 "claimed": true, 00:27:38.587 "claim_type": "exclusive_write", 00:27:38.587 "zoned": false, 00:27:38.587 "supported_io_types": { 00:27:38.587 "read": true, 00:27:38.587 "write": true, 00:27:38.587 "unmap": true, 00:27:38.587 "flush": true, 00:27:38.587 "reset": true, 00:27:38.587 "nvme_admin": false, 00:27:38.587 "nvme_io": false, 00:27:38.587 "nvme_io_md": false, 00:27:38.587 "write_zeroes": true, 00:27:38.587 "zcopy": true, 00:27:38.587 "get_zone_info": false, 00:27:38.587 "zone_management": false, 00:27:38.587 "zone_append": false, 00:27:38.587 "compare": false, 00:27:38.587 "compare_and_write": false, 00:27:38.587 "abort": true, 00:27:38.587 "seek_hole": false, 00:27:38.587 "seek_data": false, 00:27:38.587 "copy": true, 00:27:38.587 "nvme_iov_md": false 00:27:38.587 }, 00:27:38.587 "memory_domains": [ 00:27:38.587 { 00:27:38.587 "dma_device_id": "system", 00:27:38.587 "dma_device_type": 1 00:27:38.587 }, 00:27:38.587 { 00:27:38.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.587 "dma_device_type": 2 00:27:38.587 } 00:27:38.587 ], 00:27:38.587 "driver_specific": { 00:27:38.587 "passthru": { 00:27:38.587 "name": "pt1", 00:27:38.587 "base_bdev_name": "malloc1" 00:27:38.587 } 00:27:38.587 } 00:27:38.587 }' 00:27:38.587 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:38.846 21:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:38.846 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:38.846 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:38.846 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:38.846 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:38.846 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:38.846 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:39.106 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:39.106 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.106 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.106 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:39.106 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:39.106 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:39.106 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:39.379 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:39.379 "name": "pt2", 00:27:39.379 "aliases": [ 00:27:39.379 "00000000-0000-0000-0000-000000000002" 00:27:39.379 ], 00:27:39.379 "product_name": "passthru", 00:27:39.379 "block_size": 512, 00:27:39.379 "num_blocks": 65536, 00:27:39.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:39.379 "assigned_rate_limits": { 00:27:39.379 "rw_ios_per_sec": 0, 00:27:39.379 "rw_mbytes_per_sec": 0, 00:27:39.379 "r_mbytes_per_sec": 0, 00:27:39.379 "w_mbytes_per_sec": 0 00:27:39.379 
}, 00:27:39.379 "claimed": true, 00:27:39.379 "claim_type": "exclusive_write", 00:27:39.379 "zoned": false, 00:27:39.379 "supported_io_types": { 00:27:39.379 "read": true, 00:27:39.379 "write": true, 00:27:39.379 "unmap": true, 00:27:39.379 "flush": true, 00:27:39.379 "reset": true, 00:27:39.379 "nvme_admin": false, 00:27:39.379 "nvme_io": false, 00:27:39.379 "nvme_io_md": false, 00:27:39.379 "write_zeroes": true, 00:27:39.379 "zcopy": true, 00:27:39.379 "get_zone_info": false, 00:27:39.379 "zone_management": false, 00:27:39.379 "zone_append": false, 00:27:39.379 "compare": false, 00:27:39.379 "compare_and_write": false, 00:27:39.379 "abort": true, 00:27:39.379 "seek_hole": false, 00:27:39.379 "seek_data": false, 00:27:39.379 "copy": true, 00:27:39.379 "nvme_iov_md": false 00:27:39.379 }, 00:27:39.379 "memory_domains": [ 00:27:39.379 { 00:27:39.379 "dma_device_id": "system", 00:27:39.379 "dma_device_type": 1 00:27:39.379 }, 00:27:39.379 { 00:27:39.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.379 "dma_device_type": 2 00:27:39.379 } 00:27:39.379 ], 00:27:39.379 "driver_specific": { 00:27:39.379 "passthru": { 00:27:39.379 "name": "pt2", 00:27:39.379 "base_bdev_name": "malloc2" 00:27:39.379 } 00:27:39.379 } 00:27:39.379 }' 00:27:39.379 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.379 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.380 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:39.380 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:39.380 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:39.639 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:39.639 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:39.639 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:39.639 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:39.639 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.639 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.639 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:39.639 21:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:39.639 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:39.639 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:39.899 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:39.899 "name": "pt3", 00:27:39.899 "aliases": [ 00:27:39.899 "00000000-0000-0000-0000-000000000003" 00:27:39.899 ], 00:27:39.899 "product_name": "passthru", 00:27:39.899 "block_size": 512, 00:27:39.899 "num_blocks": 65536, 00:27:39.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:39.899 "assigned_rate_limits": { 00:27:39.899 "rw_ios_per_sec": 0, 00:27:39.899 "rw_mbytes_per_sec": 0, 00:27:39.899 "r_mbytes_per_sec": 0, 00:27:39.899 "w_mbytes_per_sec": 0 00:27:39.899 }, 00:27:39.899 "claimed": true, 00:27:39.899 "claim_type": "exclusive_write", 00:27:39.899 "zoned": false, 00:27:39.899 "supported_io_types": { 
00:27:39.899 "read": true, 00:27:39.899 "write": true, 00:27:39.899 "unmap": true, 00:27:39.899 "flush": true, 00:27:39.899 "reset": true, 00:27:39.899 "nvme_admin": false, 00:27:39.899 "nvme_io": false, 00:27:39.899 "nvme_io_md": false, 00:27:39.899 "write_zeroes": true, 00:27:39.899 "zcopy": true, 00:27:39.899 "get_zone_info": false, 00:27:39.899 "zone_management": false, 00:27:39.899 "zone_append": false, 00:27:39.899 "compare": false, 00:27:39.899 "compare_and_write": false, 00:27:39.899 "abort": true, 00:27:39.899 "seek_hole": false, 00:27:39.899 "seek_data": false, 00:27:39.899 "copy": true, 00:27:39.899 "nvme_iov_md": false 00:27:39.899 }, 00:27:39.899 "memory_domains": [ 00:27:39.899 { 00:27:39.899 "dma_device_id": "system", 00:27:39.899 "dma_device_type": 1 00:27:39.899 }, 00:27:39.899 { 00:27:39.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.899 "dma_device_type": 2 00:27:39.899 } 00:27:39.899 ], 00:27:39.899 "driver_specific": { 00:27:39.899 "passthru": { 00:27:39.899 "name": "pt3", 00:27:39.899 "base_bdev_name": "malloc3" 00:27:39.899 } 00:27:39.899 } 00:27:39.899 }' 00:27:39.899 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.899 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:40.157 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:40.157 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:40.157 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:40.157 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:40.157 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:40.157 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:40.415 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:40.415 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:40.415 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:40.415 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:40.415 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:40.415 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:40.415 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:40.674 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:40.674 "name": "pt4", 00:27:40.674 "aliases": [ 00:27:40.674 "00000000-0000-0000-0000-000000000004" 00:27:40.674 ], 00:27:40.674 "product_name": "passthru", 00:27:40.674 "block_size": 512, 00:27:40.674 "num_blocks": 65536, 00:27:40.674 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:40.674 "assigned_rate_limits": { 00:27:40.674 "rw_ios_per_sec": 0, 00:27:40.674 "rw_mbytes_per_sec": 0, 00:27:40.674 "r_mbytes_per_sec": 0, 00:27:40.674 "w_mbytes_per_sec": 0 00:27:40.674 }, 00:27:40.674 "claimed": true, 00:27:40.674 "claim_type": "exclusive_write", 00:27:40.674 "zoned": false, 00:27:40.674 "supported_io_types": { 00:27:40.674 "read": true, 00:27:40.674 "write": true, 00:27:40.674 "unmap": true, 00:27:40.674 "flush": true, 00:27:40.674 "reset": true, 00:27:40.674 
"nvme_admin": false, 00:27:40.674 "nvme_io": false, 00:27:40.674 "nvme_io_md": false, 00:27:40.674 "write_zeroes": true, 00:27:40.674 "zcopy": true, 00:27:40.674 "get_zone_info": false, 00:27:40.674 "zone_management": false, 00:27:40.674 "zone_append": false, 00:27:40.674 "compare": false, 00:27:40.674 "compare_and_write": false, 00:27:40.674 "abort": true, 00:27:40.674 "seek_hole": false, 00:27:40.674 "seek_data": false, 00:27:40.674 "copy": true, 00:27:40.674 "nvme_iov_md": false 00:27:40.674 }, 00:27:40.674 "memory_domains": [ 00:27:40.674 { 00:27:40.674 "dma_device_id": "system", 00:27:40.674 "dma_device_type": 1 00:27:40.674 }, 00:27:40.674 { 00:27:40.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.674 "dma_device_type": 2 00:27:40.674 } 00:27:40.674 ], 00:27:40.674 "driver_specific": { 00:27:40.674 "passthru": { 00:27:40.674 "name": "pt4", 00:27:40.674 "base_bdev_name": "malloc4" 00:27:40.674 } 00:27:40.674 } 00:27:40.674 }' 00:27:40.674 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:40.674 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:40.674 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:40.674 21:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:40.674 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:40.933 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:41.192 [2024-07-15 21:41:14.482336] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:41.192 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8 '!=' 8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8 ']' 00:27:41.192 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:27:41.192 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:41.192 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:41.192 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:41.454 [2024-07-15 21:41:14.689757] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:41.454 
21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.454 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.713 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.713 "name": "raid_bdev1", 00:27:41.713 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:41.713 "strip_size_kb": 0, 00:27:41.713 "state": "online", 00:27:41.713 "raid_level": "raid1", 00:27:41.713 "superblock": true, 00:27:41.713 "num_base_bdevs": 4, 00:27:41.713 "num_base_bdevs_discovered": 3, 00:27:41.713 "num_base_bdevs_operational": 3, 00:27:41.713 "base_bdevs_list": [ 00:27:41.713 { 00:27:41.713 "name": null, 00:27:41.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.713 "is_configured": false, 00:27:41.713 "data_offset": 2048, 00:27:41.713 "data_size": 63488 00:27:41.713 }, 00:27:41.713 { 00:27:41.713 "name": "pt2", 00:27:41.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:41.713 "is_configured": true, 00:27:41.713 "data_offset": 2048, 00:27:41.713 "data_size": 63488 00:27:41.713 }, 00:27:41.713 { 00:27:41.713 "name": "pt3", 00:27:41.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:41.713 "is_configured": true, 00:27:41.713 "data_offset": 2048, 00:27:41.713 "data_size": 63488 00:27:41.713 }, 00:27:41.713 { 00:27:41.713 "name": "pt4", 00:27:41.713 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:41.713 "is_configured": true, 00:27:41.713 "data_offset": 2048, 00:27:41.713 "data_size": 63488 00:27:41.713 } 00:27:41.713 ] 00:27:41.713 }' 00:27:41.713 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.713 21:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.281 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:42.541 [2024-07-15 21:41:15.745784] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:42.541 [2024-07-15 21:41:15.745887] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:42.541 [2024-07-15 21:41:15.745999] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:42.541 [2024-07-15 21:41:15.746083] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:42.541 [2024-07-15 21:41:15.746121] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:27:42.541 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.541 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:42.801 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:42.801 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:42.801 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:42.801 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:42.801 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:42.801 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:42.801 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:42.801 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:43.060 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:43.060 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:43.060 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:43.320 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:43.320 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:43.320 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:43.320 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:43.320 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:43.579 [2024-07-15 21:41:16.780060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:43.579 [2024-07-15 21:41:16.780232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.579 [2024-07-15 21:41:16.780292] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:27:43.579 [2024-07-15 21:41:16.780350] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.579 [2024-07-15 21:41:16.782433] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.579 [2024-07-15 21:41:16.782510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:43.579 [2024-07-15 21:41:16.782652] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:43.579 [2024-07-15 21:41:16.782748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:43.579 pt2 00:27:43.579 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:43.579 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:27:43.579 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:43.579 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:43.579 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:43.580 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:43.580 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.580 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.580 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.580 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.580 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.580 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.838 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.838 "name": "raid_bdev1", 00:27:43.838 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:43.838 "strip_size_kb": 0, 00:27:43.838 "state": "configuring", 00:27:43.838 "raid_level": "raid1", 00:27:43.838 "superblock": true, 00:27:43.838 "num_base_bdevs": 4, 00:27:43.838 "num_base_bdevs_discovered": 1, 00:27:43.838 "num_base_bdevs_operational": 3, 00:27:43.838 "base_bdevs_list": [ 00:27:43.838 { 00:27:43.838 "name": null, 00:27:43.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.838 "is_configured": false, 00:27:43.838 "data_offset": 2048, 00:27:43.838 "data_size": 63488 00:27:43.838 }, 00:27:43.838 { 00:27:43.838 "name": "pt2", 00:27:43.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:43.839 "is_configured": true, 00:27:43.839 "data_offset": 2048, 00:27:43.839 "data_size": 63488 00:27:43.839 }, 00:27:43.839 { 00:27:43.839 "name": null, 00:27:43.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:43.839 "is_configured": false, 00:27:43.839 "data_offset": 2048, 00:27:43.839 "data_size": 63488 00:27:43.839 }, 00:27:43.839 { 00:27:43.839 "name": null, 00:27:43.839 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:43.839 "is_configured": false, 00:27:43.839 "data_offset": 2048, 00:27:43.839 "data_size": 63488 00:27:43.839 } 00:27:43.839 ] 00:27:43.839 }' 00:27:43.839 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.839 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:44.406 [2024-07-15 21:41:17.730419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:44.406 [2024-07-15 21:41:17.730584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.406 [2024-07-15 21:41:17.730646] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000c380 00:27:44.406 [2024-07-15 21:41:17.730696] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.406 [2024-07-15 21:41:17.731180] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.406 [2024-07-15 21:41:17.731248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:44.406 [2024-07-15 21:41:17.731388] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:44.406 [2024-07-15 21:41:17.731440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:44.406 pt3 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.406 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.665 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:44.665 "name": "raid_bdev1", 00:27:44.665 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:44.665 "strip_size_kb": 0, 00:27:44.665 "state": "configuring", 00:27:44.665 "raid_level": "raid1", 00:27:44.665 "superblock": true, 00:27:44.665 "num_base_bdevs": 4, 00:27:44.665 "num_base_bdevs_discovered": 2, 00:27:44.665 "num_base_bdevs_operational": 3, 00:27:44.665 "base_bdevs_list": [ 00:27:44.665 { 00:27:44.665 "name": null, 00:27:44.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.665 "is_configured": false, 00:27:44.665 "data_offset": 2048, 00:27:44.665 "data_size": 63488 00:27:44.665 }, 00:27:44.665 { 00:27:44.665 "name": "pt2", 00:27:44.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:44.665 "is_configured": true, 00:27:44.665 "data_offset": 2048, 00:27:44.665 "data_size": 63488 00:27:44.665 }, 00:27:44.665 { 00:27:44.665 "name": "pt3", 00:27:44.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:44.665 "is_configured": true, 00:27:44.665 "data_offset": 2048, 00:27:44.665 "data_size": 63488 00:27:44.665 }, 00:27:44.665 { 00:27:44.665 "name": null, 00:27:44.665 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:44.665 "is_configured": false, 00:27:44.665 "data_offset": 2048, 00:27:44.665 "data_size": 63488 00:27:44.665 } 00:27:44.665 ] 00:27:44.665 }' 00:27:44.665 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:27:44.665 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.232 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:27:45.232 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:45.232 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:27:45.232 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:45.491 [2024-07-15 21:41:18.728680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:45.491 [2024-07-15 21:41:18.728818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:45.491 [2024-07-15 21:41:18.728893] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:27:45.491 [2024-07-15 21:41:18.728949] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:45.491 [2024-07-15 21:41:18.729471] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:45.491 [2024-07-15 21:41:18.729530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:45.491 [2024-07-15 21:41:18.729647] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:45.491 [2024-07-15 21:41:18.729693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:45.491 [2024-07-15 21:41:18.729833] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:27:45.491 [2024-07-15 21:41:18.729863] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:45.491 [2024-07-15 21:41:18.729976] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:27:45.491 [2024-07-15 21:41:18.730269] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:27:45.491 [2024-07-15 21:41:18.730309] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:27:45.491 [2024-07-15 21:41:18.730461] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.491 pt4 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.491 21:41:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.491 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.749 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.749 "name": "raid_bdev1", 00:27:45.749 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:45.749 "strip_size_kb": 0, 00:27:45.749 "state": "online", 00:27:45.749 "raid_level": "raid1", 00:27:45.749 "superblock": true, 00:27:45.749 "num_base_bdevs": 4, 00:27:45.749 "num_base_bdevs_discovered": 3, 00:27:45.749 "num_base_bdevs_operational": 3, 00:27:45.749 "base_bdevs_list": [ 00:27:45.749 { 00:27:45.749 "name": null, 00:27:45.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.749 "is_configured": false, 00:27:45.749 "data_offset": 2048, 00:27:45.749 "data_size": 63488 00:27:45.749 }, 00:27:45.749 { 00:27:45.749 "name": "pt2", 00:27:45.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:45.749 "is_configured": true, 00:27:45.749 "data_offset": 2048, 00:27:45.749 "data_size": 63488 00:27:45.749 }, 00:27:45.749 { 00:27:45.749 "name": "pt3", 00:27:45.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:45.749 "is_configured": true, 00:27:45.749 "data_offset": 2048, 00:27:45.749 "data_size": 63488 00:27:45.749 }, 00:27:45.749 { 00:27:45.749 "name": "pt4", 00:27:45.749 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:45.749 "is_configured": true, 00:27:45.749 "data_offset": 2048, 00:27:45.749 "data_size": 63488 00:27:45.749 } 00:27:45.749 ] 00:27:45.749 }' 00:27:45.749 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.750 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.316 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:46.576 [2024-07-15 21:41:19.738942] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:46.576 [2024-07-15 21:41:19.739049] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:46.576 [2024-07-15 21:41:19.739144] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:46.576 [2024-07-15 21:41:19.739242] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:46.576 [2024-07-15 21:41:19.739278] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:27:46.576 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.576 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:46.576 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:46.576 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:46.576 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:27:46.576 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:27:46.576 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt4 00:27:46.835 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:47.093 [2024-07-15 21:41:20.309987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:47.093 [2024-07-15 21:41:20.310111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:47.093 [2024-07-15 21:41:20.310155] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:27:47.093 [2024-07-15 21:41:20.310226] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:47.093 [2024-07-15 21:41:20.312236] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:47.093 [2024-07-15 21:41:20.312317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:47.093 [2024-07-15 21:41:20.312432] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:47.093 [2024-07-15 21:41:20.312517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:47.093 [2024-07-15 21:41:20.312665] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:47.093 [2024-07-15 21:41:20.312701] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:47.093 [2024-07-15 21:41:20.312743] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state configuring 00:27:47.093 [2024-07-15 21:41:20.312812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:47.093 [2024-07-15 21:41:20.312950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:47.093 pt1 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.093 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.351 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:27:47.351 "name": "raid_bdev1", 00:27:47.351 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:47.351 "strip_size_kb": 0, 00:27:47.351 "state": "configuring", 00:27:47.351 "raid_level": "raid1", 00:27:47.351 "superblock": true, 00:27:47.351 "num_base_bdevs": 4, 00:27:47.351 "num_base_bdevs_discovered": 2, 00:27:47.351 "num_base_bdevs_operational": 3, 00:27:47.351 "base_bdevs_list": [ 00:27:47.351 { 00:27:47.351 "name": null, 00:27:47.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.351 "is_configured": false, 00:27:47.351 "data_offset": 2048, 00:27:47.351 "data_size": 63488 00:27:47.351 }, 00:27:47.351 { 00:27:47.351 "name": "pt2", 00:27:47.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:47.351 "is_configured": true, 00:27:47.351 "data_offset": 2048, 00:27:47.351 "data_size": 63488 00:27:47.351 }, 00:27:47.351 { 00:27:47.351 "name": "pt3", 00:27:47.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:47.351 "is_configured": true, 00:27:47.351 "data_offset": 2048, 00:27:47.351 "data_size": 63488 00:27:47.351 }, 00:27:47.351 { 00:27:47.351 "name": null, 00:27:47.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:47.351 "is_configured": false, 00:27:47.351 "data_offset": 2048, 00:27:47.351 "data_size": 63488 00:27:47.351 } 00:27:47.351 ] 00:27:47.351 }' 00:27:47.351 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:47.351 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.917 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:47.917 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:27:48.176 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:27:48.176 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:48.434 [2024-07-15 21:41:21.563923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:48.434 [2024-07-15 21:41:21.564081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.434 [2024-07-15 21:41:21.564122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:27:48.434 [2024-07-15 21:41:21.564193] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.434 [2024-07-15 21:41:21.564655] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.434 [2024-07-15 21:41:21.564726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:48.434 [2024-07-15 21:41:21.564855] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:48.434 [2024-07-15 21:41:21.564903] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:48.434 [2024-07-15 21:41:21.565039] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:27:48.434 [2024-07-15 21:41:21.565071] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:48.434 [2024-07-15 21:41:21.565186] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:27:48.434 [2024-07-15 21:41:21.565524] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:27:48.434 [2024-07-15 21:41:21.565563] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:27:48.434 [2024-07-15 21:41:21.565707] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.434 pt4 00:27:48.434 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:48.434 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:48.434 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:48.434 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:48.435 "name": "raid_bdev1", 00:27:48.435 "uuid": "8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8", 00:27:48.435 "strip_size_kb": 0, 00:27:48.435 "state": "online", 00:27:48.435 "raid_level": "raid1", 00:27:48.435 "superblock": true, 00:27:48.435 "num_base_bdevs": 4, 00:27:48.435 "num_base_bdevs_discovered": 3, 00:27:48.435 "num_base_bdevs_operational": 3, 00:27:48.435 "base_bdevs_list": [ 00:27:48.435 { 00:27:48.435 "name": null, 00:27:48.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.435 "is_configured": false, 00:27:48.435 "data_offset": 2048, 00:27:48.435 "data_size": 63488 00:27:48.435 }, 00:27:48.435 { 00:27:48.435 "name": "pt2", 00:27:48.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:48.435 "is_configured": true, 00:27:48.435 "data_offset": 2048, 00:27:48.435 "data_size": 63488 00:27:48.435 }, 00:27:48.435 { 00:27:48.435 "name": "pt3", 00:27:48.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:48.435 "is_configured": true, 00:27:48.435 "data_offset": 2048, 00:27:48.435 "data_size": 63488 00:27:48.435 }, 00:27:48.435 { 00:27:48.435 "name": "pt4", 00:27:48.435 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:48.435 "is_configured": true, 00:27:48.435 "data_offset": 2048, 00:27:48.435 "data_size": 63488 00:27:48.435 } 00:27:48.435 ] 00:27:48.435 }' 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:48.435 21:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.376 21:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs online 00:27:49.376 21:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:49.376 21:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:49.376 21:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:49.376 21:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:49.376 [2024-07-15 21:41:22.742119] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8 '!=' 8b79c6f1-8bf7-4fd8-a391-03edbbec8ab8 ']' 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 144023 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 144023 ']' 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 144023 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144023 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144023' 00:27:49.633 killing process with pid 144023 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 144023 00:27:49.633 [2024-07-15 21:41:22.778629] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:49.633 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 144023 00:27:49.633 [2024-07-15 21:41:22.778720] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:49.633 [2024-07-15 21:41:22.778781] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:49.633 [2024-07-15 21:41:22.778790] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:27:49.889 [2024-07-15 21:41:23.158627] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:51.257 21:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:27:51.257 00:27:51.257 real 0m24.974s 00:27:51.257 user 0m46.313s 00:27:51.257 sys 0m3.060s 00:27:51.257 21:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:51.257 21:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.257 ************************************ 00:27:51.257 END TEST raid_superblock_test 00:27:51.257 ************************************ 00:27:51.257 21:41:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:51.257 21:41:24 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:27:51.257 21:41:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 
']' 00:27:51.257 21:41:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:51.257 21:41:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:51.257 ************************************ 00:27:51.257 START TEST raid_read_error_test 00:27:51.257 ************************************ 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3uSDBF3Yak 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=144912 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
waitforlisten 144912 /var/tmp/spdk-raid.sock 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 144912 ']' 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:51.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.257 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.257 [2024-07-15 21:41:24.509974] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:27:51.257 [2024-07-15 21:41:24.510164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144912 ] 00:27:51.514 [2024-07-15 21:41:24.669557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.514 [2024-07-15 21:41:24.861197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.771 [2024-07-15 21:41:25.048631] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:52.028 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.028 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:52.028 21:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:52.028 21:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:52.284 BaseBdev1_malloc 00:27:52.284 21:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:52.539 true 00:27:52.539 21:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:52.539 [2024-07-15 21:41:25.911132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:52.539 [2024-07-15 21:41:25.911296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.539 [2024-07-15 21:41:25.911357] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:52.539 [2024-07-15 21:41:25.911389] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.795 [2024-07-15 21:41:25.913532] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.795 [2024-07-15 21:41:25.913609] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:27:52.795 BaseBdev1 00:27:52.795 21:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:52.795 21:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:52.795 BaseBdev2_malloc 00:27:52.795 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:53.056 true 00:27:53.056 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:53.315 [2024-07-15 21:41:26.492996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:53.315 [2024-07-15 21:41:26.493162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.315 [2024-07-15 21:41:26.493211] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:53.315 [2024-07-15 21:41:26.493263] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.315 [2024-07-15 21:41:26.495171] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.315 [2024-07-15 21:41:26.495245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:53.315 BaseBdev2 00:27:53.315 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:53.315 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:53.572 BaseBdev3_malloc 00:27:53.572 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:53.572 true 00:27:53.572 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:53.829 [2024-07-15 21:41:27.113096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:53.829 [2024-07-15 21:41:27.113278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.829 [2024-07-15 21:41:27.113338] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:53.829 [2024-07-15 21:41:27.113379] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.829 [2024-07-15 21:41:27.115341] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.829 [2024-07-15 21:41:27.115423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:53.829 BaseBdev3 00:27:53.829 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:53.829 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:54.087 BaseBdev4_malloc 00:27:54.087 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:54.345 true 00:27:54.345 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:54.345 [2024-07-15 21:41:27.708966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:54.345 [2024-07-15 21:41:27.709126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.345 [2024-07-15 21:41:27.709172] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:54.345 [2024-07-15 21:41:27.709238] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.345 [2024-07-15 21:41:27.711216] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.345 [2024-07-15 21:41:27.711296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:54.345 BaseBdev4 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:54.604 [2024-07-15 21:41:27.900680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:54.604 [2024-07-15 21:41:27.902340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:54.604 [2024-07-15 21:41:27.902456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:54.604 [2024-07-15 21:41:27.902536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:54.604 [2024-07-15 21:41:27.902788] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:54.604 [2024-07-15 21:41:27.902830] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:54.604 [2024-07-15 21:41:27.902998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:54.604 [2024-07-15 21:41:27.903329] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:54.604 [2024-07-15 21:41:27.903370] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:27:54.604 [2024-07-15 21:41:27.903532] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:54.604 21:41:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.604 21:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.862 21:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:54.862 "name": "raid_bdev1", 00:27:54.862 "uuid": "dbb4c6c7-ea47-4cf6-8e0e-6b1292f6842b", 00:27:54.862 "strip_size_kb": 0, 00:27:54.862 "state": "online", 00:27:54.862 "raid_level": "raid1", 00:27:54.862 "superblock": true, 00:27:54.862 "num_base_bdevs": 4, 00:27:54.862 "num_base_bdevs_discovered": 4, 00:27:54.862 "num_base_bdevs_operational": 4, 00:27:54.862 "base_bdevs_list": [ 00:27:54.862 { 00:27:54.862 "name": "BaseBdev1", 00:27:54.862 "uuid": "c3e914ed-1a63-5d40-b24e-ca23d391227f", 00:27:54.862 "is_configured": true, 00:27:54.862 "data_offset": 2048, 00:27:54.862 "data_size": 63488 00:27:54.862 }, 00:27:54.862 { 00:27:54.862 "name": "BaseBdev2", 00:27:54.862 "uuid": "45f3d961-dc39-5c7a-9ad0-eadc63fdb2bd", 00:27:54.862 "is_configured": true, 00:27:54.862 "data_offset": 2048, 00:27:54.862 "data_size": 63488 00:27:54.862 }, 00:27:54.862 { 00:27:54.862 "name": "BaseBdev3", 00:27:54.862 "uuid": "79a87689-e0d4-5de6-8943-aaf3fcbd3c88", 00:27:54.862 "is_configured": true, 00:27:54.862 "data_offset": 2048, 00:27:54.862 "data_size": 63488 00:27:54.862 }, 00:27:54.862 { 00:27:54.862 "name": "BaseBdev4", 00:27:54.862 "uuid": "3f7d514d-9141-5bbc-a921-43f04afe3179", 00:27:54.862 "is_configured": true, 00:27:54.862 "data_offset": 2048, 00:27:54.862 "data_size": 63488 00:27:54.862 } 00:27:54.862 ] 00:27:54.862 }' 00:27:54.862 21:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:54.862 21:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.429 21:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:55.429 21:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:55.687 [2024-07-15 21:41:28.816129] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:56.619 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:56.619 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:56.619 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:56.620 21:41:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.620 21:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.878 21:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.878 "name": "raid_bdev1", 00:27:56.878 "uuid": "dbb4c6c7-ea47-4cf6-8e0e-6b1292f6842b", 00:27:56.878 "strip_size_kb": 0, 00:27:56.878 "state": "online", 00:27:56.878 "raid_level": "raid1", 00:27:56.878 "superblock": true, 00:27:56.878 "num_base_bdevs": 4, 00:27:56.878 "num_base_bdevs_discovered": 4, 00:27:56.878 "num_base_bdevs_operational": 4, 00:27:56.878 "base_bdevs_list": [ 00:27:56.878 { 00:27:56.878 "name": "BaseBdev1", 00:27:56.878 "uuid": "c3e914ed-1a63-5d40-b24e-ca23d391227f", 00:27:56.878 "is_configured": true, 00:27:56.878 "data_offset": 2048, 00:27:56.878 "data_size": 63488 00:27:56.878 }, 00:27:56.878 { 00:27:56.878 "name": "BaseBdev2", 00:27:56.878 "uuid": "45f3d961-dc39-5c7a-9ad0-eadc63fdb2bd", 00:27:56.878 "is_configured": true, 00:27:56.878 "data_offset": 2048, 00:27:56.878 "data_size": 63488 00:27:56.878 }, 00:27:56.878 { 00:27:56.878 "name": "BaseBdev3", 00:27:56.878 "uuid": "79a87689-e0d4-5de6-8943-aaf3fcbd3c88", 00:27:56.878 "is_configured": true, 00:27:56.878 "data_offset": 2048, 00:27:56.879 "data_size": 63488 00:27:56.879 }, 00:27:56.879 { 00:27:56.879 "name": "BaseBdev4", 00:27:56.879 "uuid": "3f7d514d-9141-5bbc-a921-43f04afe3179", 00:27:56.879 "is_configured": true, 00:27:56.879 "data_offset": 2048, 00:27:56.879 "data_size": 63488 00:27:56.879 } 00:27:56.879 ] 00:27:56.879 }' 00:27:56.879 21:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.879 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.472 21:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:57.730 [2024-07-15 21:41:30.899505] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:57.730 [2024-07-15 21:41:30.899608] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:57.730 [2024-07-15 21:41:30.902179] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:57.730 [2024-07-15 21:41:30.902256] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.730 [2024-07-15 21:41:30.902369] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:57.730 [2024-07-15 21:41:30.902398] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:27:57.730 0 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 144912 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 144912 ']' 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 144912 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144912 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144912' 00:27:57.730 killing process with pid 144912 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 144912 00:27:57.730 21:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 144912 00:27:57.730 [2024-07-15 21:41:30.954355] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:57.988 [2024-07-15 21:41:31.268340] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.3uSDBF3Yak 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:59.373 00:27:59.373 real 0m8.103s 00:27:59.373 user 0m12.064s 00:27:59.373 sys 0m0.985s 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:59.373 21:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.373 ************************************ 00:27:59.373 END TEST raid_read_error_test 00:27:59.373 ************************************ 00:27:59.373 21:41:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:59.373 21:41:32 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:27:59.373 21:41:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:59.373 21:41:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.373 21:41:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:59.373 ************************************ 00:27:59.373 START TEST raid_write_error_test 00:27:59.373 ************************************ 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:27:59.373 21:41:32 
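The pass/fail verdict printed above for raid_read_error_test is not taken from an RPC; it is read back out of the bdevperf log file. A minimal sketch of that check, assuming the column layout bdevperf produced in this run (the pipeline order and the variable name are illustrative; the grep/awk fragments are the ones visible in the trace):

  # pull the failures-per-second column for raid_bdev1 out of the bdevperf log
  fail_per_s=$(grep -v Job /raidtest/tmp.3uSDBF3Yak | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" = "0.00" ]]    # with raid1 redundancy the expected failure rate is zero
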
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:59.373 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Y9sjbKzZYW 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=145132 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 145132 /var/tmp/spdk-raid.sock 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 145132 ']' 00:27:59.374 21:41:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:59.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:59.374 21:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.374 [2024-07-15 21:41:32.667945] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:27:59.374 [2024-07-15 21:41:32.668135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145132 ] 00:27:59.633 [2024-07-15 21:41:32.826644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.891 [2024-07-15 21:41:33.022816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.891 [2024-07-15 21:41:33.199937] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.154 21:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:00.154 21:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:28:00.154 21:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:00.154 21:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:00.413 BaseBdev1_malloc 00:28:00.413 21:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:28:00.670 true 00:28:00.670 21:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:00.929 [2024-07-15 21:41:34.046072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:00.929 [2024-07-15 21:41:34.046250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.929 [2024-07-15 21:41:34.046301] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:00.929 [2024-07-15 21:41:34.046354] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.929 [2024-07-15 21:41:34.048341] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.929 [2024-07-15 21:41:34.048433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:00.929 BaseBdev1 00:28:00.929 21:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:00.929 21:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:00.929 
BaseBdev2_malloc 00:28:00.929 21:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:28:01.187 true 00:28:01.187 21:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:01.446 [2024-07-15 21:41:34.616543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:01.446 [2024-07-15 21:41:34.616707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.446 [2024-07-15 21:41:34.616759] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:28:01.446 [2024-07-15 21:41:34.616796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.446 [2024-07-15 21:41:34.618719] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.446 [2024-07-15 21:41:34.618795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:01.446 BaseBdev2 00:28:01.446 21:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:01.446 21:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:01.446 BaseBdev3_malloc 00:28:01.446 21:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:28:01.705 true 00:28:01.705 21:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:01.964 [2024-07-15 21:41:35.156065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:01.964 [2024-07-15 21:41:35.156240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.964 [2024-07-15 21:41:35.156292] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:01.964 [2024-07-15 21:41:35.156335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.964 [2024-07-15 21:41:35.158462] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.964 [2024-07-15 21:41:35.158554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:01.964 BaseBdev3 00:28:01.964 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:28:01.964 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:02.222 BaseBdev4_malloc 00:28:02.222 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:28:02.222 true 00:28:02.222 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:02.480 [2024-07-15 21:41:35.721097] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:02.480 [2024-07-15 21:41:35.721262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:02.480 [2024-07-15 21:41:35.721317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:02.480 [2024-07-15 21:41:35.721376] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.480 [2024-07-15 21:41:35.723358] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:02.480 [2024-07-15 21:41:35.723446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:02.480 BaseBdev4 00:28:02.480 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:28:02.739 [2024-07-15 21:41:35.880850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:02.739 [2024-07-15 21:41:35.882614] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:02.739 [2024-07-15 21:41:35.882745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:02.739 [2024-07-15 21:41:35.882811] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:02.739 [2024-07-15 21:41:35.883052] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:28:02.739 [2024-07-15 21:41:35.883086] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:02.739 [2024-07-15 21:41:35.883244] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:28:02.739 [2024-07-15 21:41:35.883567] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:28:02.739 [2024-07-15 21:41:35.883607] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:28:02.739 [2024-07-15 21:41:35.883780] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.739 21:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.739 21:41:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.739 21:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.739 "name": "raid_bdev1", 00:28:02.739 "uuid": "2f907e32-b828-4da8-b862-25c126eff7b8", 00:28:02.739 "strip_size_kb": 0, 00:28:02.739 "state": "online", 00:28:02.739 "raid_level": "raid1", 00:28:02.739 "superblock": true, 00:28:02.739 "num_base_bdevs": 4, 00:28:02.739 "num_base_bdevs_discovered": 4, 00:28:02.739 "num_base_bdevs_operational": 4, 00:28:02.739 "base_bdevs_list": [ 00:28:02.739 { 00:28:02.739 "name": "BaseBdev1", 00:28:02.739 "uuid": "194b992c-c45c-5145-bdb4-ecdda968d55d", 00:28:02.739 "is_configured": true, 00:28:02.739 "data_offset": 2048, 00:28:02.739 "data_size": 63488 00:28:02.739 }, 00:28:02.739 { 00:28:02.739 "name": "BaseBdev2", 00:28:02.739 "uuid": "03a18184-29d7-5e68-91c3-5d12c299bb64", 00:28:02.739 "is_configured": true, 00:28:02.739 "data_offset": 2048, 00:28:02.739 "data_size": 63488 00:28:02.739 }, 00:28:02.739 { 00:28:02.739 "name": "BaseBdev3", 00:28:02.739 "uuid": "52ff50b7-4600-5095-95ec-0de39f21d959", 00:28:02.739 "is_configured": true, 00:28:02.739 "data_offset": 2048, 00:28:02.739 "data_size": 63488 00:28:02.739 }, 00:28:02.739 { 00:28:02.739 "name": "BaseBdev4", 00:28:02.739 "uuid": "25d284b0-6e59-57c8-8028-3d84d91ee389", 00:28:02.739 "is_configured": true, 00:28:02.739 "data_offset": 2048, 00:28:02.739 "data_size": 63488 00:28:02.739 } 00:28:02.739 ] 00:28:02.739 }' 00:28:02.739 21:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.739 21:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.672 21:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:28:03.672 21:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:03.672 [2024-07-15 21:41:36.772651] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:28:04.606 [2024-07-15 21:41:37.864887] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:28:04.606 [2024-07-15 21:41:37.865067] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:04.606 [2024-07-15 21:41:37.865312] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.606 21:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.865 21:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:04.865 "name": "raid_bdev1", 00:28:04.865 "uuid": "2f907e32-b828-4da8-b862-25c126eff7b8", 00:28:04.865 "strip_size_kb": 0, 00:28:04.865 "state": "online", 00:28:04.865 "raid_level": "raid1", 00:28:04.865 "superblock": true, 00:28:04.865 "num_base_bdevs": 4, 00:28:04.865 "num_base_bdevs_discovered": 3, 00:28:04.865 "num_base_bdevs_operational": 3, 00:28:04.865 "base_bdevs_list": [ 00:28:04.865 { 00:28:04.865 "name": null, 00:28:04.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.865 "is_configured": false, 00:28:04.865 "data_offset": 2048, 00:28:04.865 "data_size": 63488 00:28:04.865 }, 00:28:04.865 { 00:28:04.865 "name": "BaseBdev2", 00:28:04.865 "uuid": "03a18184-29d7-5e68-91c3-5d12c299bb64", 00:28:04.865 "is_configured": true, 00:28:04.865 "data_offset": 2048, 00:28:04.865 "data_size": 63488 00:28:04.865 }, 00:28:04.865 { 00:28:04.865 "name": "BaseBdev3", 00:28:04.865 "uuid": "52ff50b7-4600-5095-95ec-0de39f21d959", 00:28:04.865 "is_configured": true, 00:28:04.865 "data_offset": 2048, 00:28:04.865 "data_size": 63488 00:28:04.865 }, 00:28:04.865 { 00:28:04.865 "name": "BaseBdev4", 00:28:04.865 "uuid": "25d284b0-6e59-57c8-8028-3d84d91ee389", 00:28:04.865 "is_configured": true, 00:28:04.865 "data_offset": 2048, 00:28:04.865 "data_size": 63488 00:28:04.865 } 00:28:04.865 ] 00:28:04.865 }' 00:28:04.865 21:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:04.865 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.431 21:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:05.712 [2024-07-15 21:41:38.910173] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:05.712 [2024-07-15 21:41:38.910271] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:05.712 [2024-07-15 21:41:38.912752] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:05.712 [2024-07-15 21:41:38.912838] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:05.712 [2024-07-15 21:41:38.912949] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:05.712 [2024-07-15 
21:41:38.912978] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:28:05.712 0 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 145132 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 145132 ']' 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 145132 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145132 00:28:05.712 killing process with pid 145132 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145132' 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 145132 00:28:05.712 21:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 145132 00:28:05.712 [2024-07-15 21:41:38.964735] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:05.985 [2024-07-15 21:41:39.277135] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Y9sjbKzZYW 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:28:07.358 ************************************ 00:28:07.358 END TEST raid_write_error_test 00:28:07.358 ************************************ 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:28:07.358 00:28:07.358 real 0m7.936s 00:28:07.358 user 0m11.948s 00:28:07.358 sys 0m0.752s 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:07.358 21:41:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.358 21:41:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:07.358 21:41:40 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:28:07.358 21:41:40 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:28:07.358 21:41:40 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:28:07.358 21:41:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:07.358 21:41:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:07.358 21:41:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:07.358 
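Both raid_read_error_test and raid_write_error_test above drive the same RPC sequence against the bdevperf app listening on /var/tmp/spdk-raid.sock; they differ only in the injected I/O type and in how many base bdevs are expected to remain operational afterwards. A condensed sketch of that sequence, using only rpc.py subcommands that appear in the trace (the loop, variable names and exact ordering are illustrative, not the harness's own code):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc        # backing malloc bdev
      $RPC bdev_error_create BaseBdev${i}_malloc                   # error-injection wrapper (EE_BaseBdev<i>_malloc)
      $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure   # the read test injects 'read failure' instead
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  $RPC bdev_raid_delete raid_bdev1

In the read case the raid1 redundancy hides the injected failures and all four base bdevs stay discovered and operational; in the write case BaseBdev1 is failed out, and the state dumped above drops to three discovered/operational base bdevs with a null entry in base_bdevs_list.
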
************************************ 00:28:07.358 START TEST raid_rebuild_test 00:28:07.358 ************************************ 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:07.358 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=145339 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 145339 /var/tmp/spdk-raid.sock 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 145339 ']' 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-raid.sock...' 00:28:07.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.359 21:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.359 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:07.359 Zero copy mechanism will not be used. 00:28:07.359 [2024-07-15 21:41:40.673388] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:28:07.359 [2024-07-15 21:41:40.673554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145339 ] 00:28:07.617 [2024-07-15 21:41:40.832474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.876 [2024-07-15 21:41:41.023908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.876 [2024-07-15 21:41:41.207688] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:08.445 21:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.445 21:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:28:08.445 21:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:08.445 21:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:08.445 BaseBdev1_malloc 00:28:08.445 21:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:08.703 [2024-07-15 21:41:41.926802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:08.703 [2024-07-15 21:41:41.926947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.703 [2024-07-15 21:41:41.926993] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:28:08.703 [2024-07-15 21:41:41.927049] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.703 [2024-07-15 21:41:41.929094] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.703 [2024-07-15 21:41:41.929175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:08.703 BaseBdev1 00:28:08.703 21:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:08.703 21:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:08.962 BaseBdev2_malloc 00:28:08.962 21:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:08.962 [2024-07-15 21:41:42.303143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:08.962 [2024-07-15 21:41:42.303321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.962 [2024-07-15 
21:41:42.303368] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:28:08.962 [2024-07-15 21:41:42.303403] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.962 [2024-07-15 21:41:42.305334] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.962 [2024-07-15 21:41:42.305413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:08.962 BaseBdev2 00:28:08.962 21:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:09.222 spare_malloc 00:28:09.222 21:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:09.492 spare_delay 00:28:09.492 21:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:09.492 [2024-07-15 21:41:42.856367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:09.492 [2024-07-15 21:41:42.856534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:09.492 [2024-07-15 21:41:42.856581] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:09.492 [2024-07-15 21:41:42.856622] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:09.492 [2024-07-15 21:41:42.858557] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:09.492 [2024-07-15 21:41:42.858665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:09.791 spare 00:28:09.791 21:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:09.791 [2024-07-15 21:41:43.020144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:09.791 [2024-07-15 21:41:43.022020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:09.791 [2024-07-15 21:41:43.022164] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:28:09.791 [2024-07-15 21:41:43.022197] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:09.791 [2024-07-15 21:41:43.022384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:28:09.791 [2024-07-15 21:41:43.022705] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:28:09.791 [2024-07-15 21:41:43.022746] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:28:09.791 [2024-07-15 21:41:43.022961] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.791 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:09.791 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:09.791 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:09.791 21:41:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:09.791 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:09.791 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:09.791 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:09.791 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:09.791 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:09.792 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:09.792 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.792 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.050 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.050 "name": "raid_bdev1", 00:28:10.050 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:10.050 "strip_size_kb": 0, 00:28:10.050 "state": "online", 00:28:10.050 "raid_level": "raid1", 00:28:10.050 "superblock": false, 00:28:10.050 "num_base_bdevs": 2, 00:28:10.050 "num_base_bdevs_discovered": 2, 00:28:10.050 "num_base_bdevs_operational": 2, 00:28:10.050 "base_bdevs_list": [ 00:28:10.050 { 00:28:10.050 "name": "BaseBdev1", 00:28:10.050 "uuid": "1ad36457-3385-5547-9183-b05636924539", 00:28:10.050 "is_configured": true, 00:28:10.050 "data_offset": 0, 00:28:10.050 "data_size": 65536 00:28:10.050 }, 00:28:10.050 { 00:28:10.050 "name": "BaseBdev2", 00:28:10.050 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:10.050 "is_configured": true, 00:28:10.050 "data_offset": 0, 00:28:10.050 "data_size": 65536 00:28:10.050 } 00:28:10.050 ] 00:28:10.050 }' 00:28:10.050 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.050 21:41:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.618 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:10.618 21:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:10.878 [2024-07-15 21:41:44.010691] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:10.878 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:11.137 [2024-07-15 21:41:44.349936] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:28:11.137 /dev/nbd0 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:11.137 1+0 records in 00:28:11.137 1+0 records out 00:28:11.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531231 s, 7.7 MB/s 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:11.137 21:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:28:15.326 65536+0 records in 00:28:15.326 
65536+0 records out 00:28:15.326 33554432 bytes (34 MB, 32 MiB) copied, 3.55479 s, 9.4 MB/s 00:28:15.326 21:41:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:15.326 21:41:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:15.326 21:41:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:15.326 21:41:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:15.326 21:41:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:15.326 21:41:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:15.326 21:41:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:15.326 [2024-07-15 21:41:48.178381] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:15.326 [2024-07-15 21:41:48.449653] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:15.326 
21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:15.326 "name": "raid_bdev1", 00:28:15.326 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:15.326 "strip_size_kb": 0, 00:28:15.326 "state": "online", 00:28:15.326 "raid_level": "raid1", 00:28:15.326 "superblock": false, 00:28:15.326 "num_base_bdevs": 2, 00:28:15.326 "num_base_bdevs_discovered": 1, 00:28:15.326 "num_base_bdevs_operational": 1, 00:28:15.326 "base_bdevs_list": [ 00:28:15.326 { 00:28:15.326 "name": null, 00:28:15.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.326 "is_configured": false, 00:28:15.326 "data_offset": 0, 00:28:15.326 "data_size": 65536 00:28:15.326 }, 00:28:15.326 { 00:28:15.326 "name": "BaseBdev2", 00:28:15.326 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:15.326 "is_configured": true, 00:28:15.326 "data_offset": 0, 00:28:15.326 "data_size": 65536 00:28:15.326 } 00:28:15.326 ] 00:28:15.326 }' 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:15.326 21:41:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.892 21:41:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:16.152 [2024-07-15 21:41:49.384563] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:16.152 [2024-07-15 21:41:49.398712] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b910 00:28:16.152 [2024-07-15 21:41:49.400413] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:16.152 21:41:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:17.088 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:17.088 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:17.088 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:17.088 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:17.088 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:17.088 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.088 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.346 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:17.346 "name": "raid_bdev1", 00:28:17.346 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:17.346 "strip_size_kb": 0, 00:28:17.346 "state": "online", 00:28:17.346 "raid_level": "raid1", 00:28:17.346 "superblock": false, 00:28:17.346 "num_base_bdevs": 2, 00:28:17.346 "num_base_bdevs_discovered": 2, 00:28:17.346 "num_base_bdevs_operational": 2, 00:28:17.346 "process": { 00:28:17.346 "type": "rebuild", 00:28:17.346 "target": "spare", 00:28:17.346 "progress": { 00:28:17.346 "blocks": 22528, 
00:28:17.346 "percent": 34 00:28:17.346 } 00:28:17.346 }, 00:28:17.346 "base_bdevs_list": [ 00:28:17.346 { 00:28:17.346 "name": "spare", 00:28:17.346 "uuid": "4d178713-60a6-5dfe-aec4-acb5d6874d6a", 00:28:17.346 "is_configured": true, 00:28:17.346 "data_offset": 0, 00:28:17.346 "data_size": 65536 00:28:17.346 }, 00:28:17.346 { 00:28:17.346 "name": "BaseBdev2", 00:28:17.346 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:17.346 "is_configured": true, 00:28:17.346 "data_offset": 0, 00:28:17.346 "data_size": 65536 00:28:17.346 } 00:28:17.346 ] 00:28:17.346 }' 00:28:17.346 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:17.346 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:17.346 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:17.605 [2024-07-15 21:41:50.891604] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:17.605 [2024-07-15 21:41:50.906946] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:17.605 [2024-07-15 21:41:50.907028] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:17.605 [2024-07-15 21:41:50.907054] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:17.605 [2024-07-15 21:41:50.907093] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.605 21:41:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.863 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:17.863 "name": "raid_bdev1", 00:28:17.863 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:17.863 "strip_size_kb": 0, 00:28:17.863 "state": "online", 00:28:17.863 "raid_level": "raid1", 00:28:17.863 "superblock": false, 00:28:17.863 
"num_base_bdevs": 2, 00:28:17.863 "num_base_bdevs_discovered": 1, 00:28:17.863 "num_base_bdevs_operational": 1, 00:28:17.863 "base_bdevs_list": [ 00:28:17.863 { 00:28:17.863 "name": null, 00:28:17.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.863 "is_configured": false, 00:28:17.863 "data_offset": 0, 00:28:17.863 "data_size": 65536 00:28:17.863 }, 00:28:17.863 { 00:28:17.863 "name": "BaseBdev2", 00:28:17.863 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:17.863 "is_configured": true, 00:28:17.863 "data_offset": 0, 00:28:17.863 "data_size": 65536 00:28:17.863 } 00:28:17.863 ] 00:28:17.863 }' 00:28:17.863 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:17.863 21:41:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:18.429 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:18.429 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:18.429 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:18.429 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:18.429 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:18.429 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.429 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.688 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:18.688 "name": "raid_bdev1", 00:28:18.688 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:18.688 "strip_size_kb": 0, 00:28:18.688 "state": "online", 00:28:18.688 "raid_level": "raid1", 00:28:18.688 "superblock": false, 00:28:18.688 "num_base_bdevs": 2, 00:28:18.688 "num_base_bdevs_discovered": 1, 00:28:18.688 "num_base_bdevs_operational": 1, 00:28:18.688 "base_bdevs_list": [ 00:28:18.688 { 00:28:18.688 "name": null, 00:28:18.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.688 "is_configured": false, 00:28:18.688 "data_offset": 0, 00:28:18.688 "data_size": 65536 00:28:18.688 }, 00:28:18.688 { 00:28:18.688 "name": "BaseBdev2", 00:28:18.688 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:18.688 "is_configured": true, 00:28:18.688 "data_offset": 0, 00:28:18.688 "data_size": 65536 00:28:18.688 } 00:28:18.688 ] 00:28:18.688 }' 00:28:18.688 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:18.688 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:18.688 21:41:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:18.688 21:41:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:18.688 21:41:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:18.948 [2024-07-15 21:41:52.212562] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:18.948 [2024-07-15 21:41:52.227905] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0bab0 00:28:18.948 21:41:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@662 -- # sleep 1 00:28:18.948 [2024-07-15 21:41:52.238600] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:19.883 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:19.883 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:19.883 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:19.883 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:19.883 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:19.883 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.883 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.141 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:20.141 "name": "raid_bdev1", 00:28:20.141 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:20.141 "strip_size_kb": 0, 00:28:20.141 "state": "online", 00:28:20.141 "raid_level": "raid1", 00:28:20.141 "superblock": false, 00:28:20.141 "num_base_bdevs": 2, 00:28:20.141 "num_base_bdevs_discovered": 2, 00:28:20.141 "num_base_bdevs_operational": 2, 00:28:20.141 "process": { 00:28:20.141 "type": "rebuild", 00:28:20.141 "target": "spare", 00:28:20.141 "progress": { 00:28:20.141 "blocks": 22528, 00:28:20.141 "percent": 34 00:28:20.141 } 00:28:20.141 }, 00:28:20.141 "base_bdevs_list": [ 00:28:20.141 { 00:28:20.141 "name": "spare", 00:28:20.141 "uuid": "4d178713-60a6-5dfe-aec4-acb5d6874d6a", 00:28:20.141 "is_configured": true, 00:28:20.141 "data_offset": 0, 00:28:20.141 "data_size": 65536 00:28:20.141 }, 00:28:20.141 { 00:28:20.141 "name": "BaseBdev2", 00:28:20.141 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:20.141 "is_configured": true, 00:28:20.141 "data_offset": 0, 00:28:20.141 "data_size": 65536 00:28:20.141 } 00:28:20.141 ] 00:28:20.141 }' 00:28:20.141 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:20.141 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:20.141 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=766 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.399 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:20.399 "name": "raid_bdev1", 00:28:20.399 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:20.399 "strip_size_kb": 0, 00:28:20.399 "state": "online", 00:28:20.399 "raid_level": "raid1", 00:28:20.399 "superblock": false, 00:28:20.399 "num_base_bdevs": 2, 00:28:20.399 "num_base_bdevs_discovered": 2, 00:28:20.399 "num_base_bdevs_operational": 2, 00:28:20.399 "process": { 00:28:20.399 "type": "rebuild", 00:28:20.399 "target": "spare", 00:28:20.399 "progress": { 00:28:20.399 "blocks": 28672, 00:28:20.399 "percent": 43 00:28:20.399 } 00:28:20.399 }, 00:28:20.399 "base_bdevs_list": [ 00:28:20.399 { 00:28:20.399 "name": "spare", 00:28:20.399 "uuid": "4d178713-60a6-5dfe-aec4-acb5d6874d6a", 00:28:20.399 "is_configured": true, 00:28:20.399 "data_offset": 0, 00:28:20.399 "data_size": 65536 00:28:20.399 }, 00:28:20.399 { 00:28:20.399 "name": "BaseBdev2", 00:28:20.399 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:20.399 "is_configured": true, 00:28:20.400 "data_offset": 0, 00:28:20.400 "data_size": 65536 00:28:20.400 } 00:28:20.400 ] 00:28:20.400 }' 00:28:20.400 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:20.658 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:20.658 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:20.658 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:20.658 21:41:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:21.595 21:41:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:21.595 21:41:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:21.595 21:41:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:21.595 21:41:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:21.595 21:41:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:21.595 21:41:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:21.595 21:41:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.595 21:41:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.854 21:41:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:21.854 "name": "raid_bdev1", 00:28:21.854 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:21.854 "strip_size_kb": 0, 00:28:21.854 "state": "online", 00:28:21.854 "raid_level": "raid1", 00:28:21.854 "superblock": false, 00:28:21.854 
"num_base_bdevs": 2, 00:28:21.854 "num_base_bdevs_discovered": 2, 00:28:21.854 "num_base_bdevs_operational": 2, 00:28:21.854 "process": { 00:28:21.854 "type": "rebuild", 00:28:21.854 "target": "spare", 00:28:21.854 "progress": { 00:28:21.854 "blocks": 55296, 00:28:21.854 "percent": 84 00:28:21.854 } 00:28:21.854 }, 00:28:21.854 "base_bdevs_list": [ 00:28:21.854 { 00:28:21.854 "name": "spare", 00:28:21.854 "uuid": "4d178713-60a6-5dfe-aec4-acb5d6874d6a", 00:28:21.854 "is_configured": true, 00:28:21.854 "data_offset": 0, 00:28:21.854 "data_size": 65536 00:28:21.854 }, 00:28:21.854 { 00:28:21.854 "name": "BaseBdev2", 00:28:21.854 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:21.854 "is_configured": true, 00:28:21.854 "data_offset": 0, 00:28:21.854 "data_size": 65536 00:28:21.854 } 00:28:21.854 ] 00:28:21.854 }' 00:28:21.854 21:41:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:21.854 21:41:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:21.854 21:41:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:21.854 21:41:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:21.854 21:41:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:22.111 [2024-07-15 21:41:55.452448] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:22.111 [2024-07-15 21:41:55.452593] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:22.111 [2024-07-15 21:41:55.452707] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:23.061 "name": "raid_bdev1", 00:28:23.061 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:23.061 "strip_size_kb": 0, 00:28:23.061 "state": "online", 00:28:23.061 "raid_level": "raid1", 00:28:23.061 "superblock": false, 00:28:23.061 "num_base_bdevs": 2, 00:28:23.061 "num_base_bdevs_discovered": 2, 00:28:23.061 "num_base_bdevs_operational": 2, 00:28:23.061 "base_bdevs_list": [ 00:28:23.061 { 00:28:23.061 "name": "spare", 00:28:23.061 "uuid": "4d178713-60a6-5dfe-aec4-acb5d6874d6a", 00:28:23.061 "is_configured": true, 00:28:23.061 "data_offset": 0, 00:28:23.061 "data_size": 65536 00:28:23.061 }, 00:28:23.061 { 00:28:23.061 "name": "BaseBdev2", 00:28:23.061 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:23.061 "is_configured": true, 
00:28:23.061 "data_offset": 0, 00:28:23.061 "data_size": 65536 00:28:23.061 } 00:28:23.061 ] 00:28:23.061 }' 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:23.061 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:23.320 "name": "raid_bdev1", 00:28:23.320 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:23.320 "strip_size_kb": 0, 00:28:23.320 "state": "online", 00:28:23.320 "raid_level": "raid1", 00:28:23.320 "superblock": false, 00:28:23.320 "num_base_bdevs": 2, 00:28:23.320 "num_base_bdevs_discovered": 2, 00:28:23.320 "num_base_bdevs_operational": 2, 00:28:23.320 "base_bdevs_list": [ 00:28:23.320 { 00:28:23.320 "name": "spare", 00:28:23.320 "uuid": "4d178713-60a6-5dfe-aec4-acb5d6874d6a", 00:28:23.320 "is_configured": true, 00:28:23.320 "data_offset": 0, 00:28:23.320 "data_size": 65536 00:28:23.320 }, 00:28:23.320 { 00:28:23.320 "name": "BaseBdev2", 00:28:23.320 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:23.320 "is_configured": true, 00:28:23.320 "data_offset": 0, 00:28:23.320 "data_size": 65536 00:28:23.320 } 00:28:23.320 ] 00:28:23.320 }' 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:23.320 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:23.579 
21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.579 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.838 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:23.838 "name": "raid_bdev1", 00:28:23.838 "uuid": "32e400eb-eebc-40b3-bf0d-ccd27e0bbc3e", 00:28:23.838 "strip_size_kb": 0, 00:28:23.838 "state": "online", 00:28:23.838 "raid_level": "raid1", 00:28:23.838 "superblock": false, 00:28:23.838 "num_base_bdevs": 2, 00:28:23.838 "num_base_bdevs_discovered": 2, 00:28:23.838 "num_base_bdevs_operational": 2, 00:28:23.838 "base_bdevs_list": [ 00:28:23.838 { 00:28:23.838 "name": "spare", 00:28:23.838 "uuid": "4d178713-60a6-5dfe-aec4-acb5d6874d6a", 00:28:23.838 "is_configured": true, 00:28:23.838 "data_offset": 0, 00:28:23.838 "data_size": 65536 00:28:23.838 }, 00:28:23.838 { 00:28:23.838 "name": "BaseBdev2", 00:28:23.838 "uuid": "a14a45df-3312-5045-8d1f-57cccdf61fad", 00:28:23.838 "is_configured": true, 00:28:23.838 "data_offset": 0, 00:28:23.838 "data_size": 65536 00:28:23.838 } 00:28:23.838 ] 00:28:23.838 }' 00:28:23.838 21:41:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:23.838 21:41:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.405 21:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:24.405 [2024-07-15 21:41:57.760026] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:24.405 [2024-07-15 21:41:57.760124] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:24.405 [2024-07-15 21:41:57.760215] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:24.405 [2024-07-15 21:41:57.760295] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:24.405 [2024-07-15 21:41:57.760317] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:28:24.405 21:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:24.664 
21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:24.664 21:41:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:24.923 /dev/nbd0 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:24.923 1+0 records in 00:28:24.923 1+0 records out 00:28:24.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545382 s, 7.5 MB/s 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:24.923 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:24.924 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:25.182 /dev/nbd1 00:28:25.182 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:25.182 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:25.182 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:25.182 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local 
i 00:28:25.182 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:25.182 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:25.183 1+0 records in 00:28:25.183 1+0 records out 00:28:25.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836031 s, 4.9 MB/s 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:25.183 21:41:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:25.441 21:41:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:25.441 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:25.441 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:25.441 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:25.441 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:25.441 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:25.441 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:25.700 21:41:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:25.700 21:41:58 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:25.700 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:25.700 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:25.700 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:25.700 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:25.700 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:25.700 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:25.700 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:25.959 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:25.959 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:25.959 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 145339 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 145339 ']' 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 145339 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145339 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145339' 00:28:25.960 killing process with pid 145339 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 145339 00:28:25.960 Received shutdown signal, test time was about 60.000000 seconds 00:28:25.960 00:28:25.960 Latency(us) 00:28:25.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.960 =================================================================================================================== 00:28:25.960 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:25.960 21:41:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 145339 00:28:25.960 [2024-07-15 21:41:59.186739] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:26.218 [2024-07-15 21:41:59.462914] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:28:27.599 00:28:27.599 real 0m20.079s 00:28:27.599 user 0m27.872s 00:28:27.599 sys 0m3.163s 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 
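The verify_raid_bdev_process checks repeated throughout this trace reduce to the pattern below; this is a condensed illustration of what the xtrace lines above already show, not additional captured output, and the variable name matches the one used by the script:

    # Fetch the RAID bdev state over the bdevperf RPC socket and pick out the bdev under test.
    raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # A rebuild is in flight when .process.type is "rebuild" and .process.target names the bdev being rebuilt into.
    [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "rebuild" ]]
    [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "spare" ]]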
00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.599 ************************************ 00:28:27.599 END TEST raid_rebuild_test 00:28:27.599 ************************************ 00:28:27.599 21:42:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:27.599 21:42:00 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:28:27.599 21:42:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:27.599 21:42:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.599 21:42:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:27.599 ************************************ 00:28:27.599 START TEST raid_rebuild_test_sb 00:28:27.599 ************************************ 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@596 -- # raid_pid=145895 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 145895 /var/tmp/spdk-raid.sock 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 145895 ']' 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:27.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.599 21:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:27.599 [2024-07-15 21:42:00.839781] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:28:27.599 [2024-07-15 21:42:00.840048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145895 ] 00:28:27.599 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:27.599 Zero copy mechanism will not be used. 
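The raid_rebuild_test_sb run above is driven by a fresh bdevperf instance; its launch-and-wait sequence amounts to the sketch below. The polling loop is a simplified stand-in for the waitforlisten helper, which does more bookkeeping than shown here:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Block until the bdevperf RPC server answers on the raid socket before issuing any bdev RPCs.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done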
00:28:27.858 [2024-07-15 21:42:01.010109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.858 [2024-07-15 21:42:01.206089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.117 [2024-07-15 21:42:01.394705] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:28.375 21:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:28.375 21:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:28:28.375 21:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:28.375 21:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:28.635 BaseBdev1_malloc 00:28:28.635 21:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:28.894 [2024-07-15 21:42:02.058334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:28.894 [2024-07-15 21:42:02.058496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:28.894 [2024-07-15 21:42:02.058547] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:28:28.894 [2024-07-15 21:42:02.058617] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:28.894 [2024-07-15 21:42:02.060534] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:28.894 [2024-07-15 21:42:02.060606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:28.894 BaseBdev1 00:28:28.894 21:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:28.894 21:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:29.152 BaseBdev2_malloc 00:28:29.152 21:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:29.409 [2024-07-15 21:42:02.590984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:29.409 [2024-07-15 21:42:02.591132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.409 [2024-07-15 21:42:02.591180] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:28:29.409 [2024-07-15 21:42:02.591219] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.409 [2024-07-15 21:42:02.593090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.409 [2024-07-15 21:42:02.593168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:29.409 BaseBdev2 00:28:29.409 21:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:29.667 spare_malloc 00:28:29.667 21:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:29.667 spare_delay 00:28:29.667 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:29.926 [2024-07-15 21:42:03.200806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:29.926 [2024-07-15 21:42:03.200983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.926 [2024-07-15 21:42:03.201031] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:29.926 [2024-07-15 21:42:03.201084] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.926 [2024-07-15 21:42:03.203046] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.926 [2024-07-15 21:42:03.203132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:29.926 spare 00:28:29.926 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:30.184 [2024-07-15 21:42:03.412512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:30.184 [2024-07-15 21:42:03.414252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:30.184 [2024-07-15 21:42:03.414460] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:28:30.184 [2024-07-15 21:42:03.414502] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:30.184 [2024-07-15 21:42:03.414655] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:28:30.184 [2024-07-15 21:42:03.414972] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:28:30.184 [2024-07-15 21:42:03.415014] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:28:30.184 [2024-07-15 21:42:03.415185] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:28:30.184 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.442 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:30.442 "name": "raid_bdev1", 00:28:30.442 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:30.442 "strip_size_kb": 0, 00:28:30.442 "state": "online", 00:28:30.442 "raid_level": "raid1", 00:28:30.442 "superblock": true, 00:28:30.442 "num_base_bdevs": 2, 00:28:30.442 "num_base_bdevs_discovered": 2, 00:28:30.442 "num_base_bdevs_operational": 2, 00:28:30.442 "base_bdevs_list": [ 00:28:30.442 { 00:28:30.442 "name": "BaseBdev1", 00:28:30.442 "uuid": "1a48bc35-26c4-5153-b58a-c8cca93d03eb", 00:28:30.442 "is_configured": true, 00:28:30.442 "data_offset": 2048, 00:28:30.442 "data_size": 63488 00:28:30.442 }, 00:28:30.442 { 00:28:30.442 "name": "BaseBdev2", 00:28:30.443 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:30.443 "is_configured": true, 00:28:30.443 "data_offset": 2048, 00:28:30.443 "data_size": 63488 00:28:30.443 } 00:28:30.443 ] 00:28:30.443 }' 00:28:30.443 21:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:30.443 21:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:31.009 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:31.009 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:31.009 [2024-07-15 21:42:04.343055] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:31.009 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:31.009 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:31.009 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.267 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:31.526 [2024-07-15 21:42:04.718252] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:28:31.526 /dev/nbd0 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:31.526 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.527 1+0 records in 00:28:31.527 1+0 records out 00:28:31.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506653 s, 8.1 MB/s 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:31.527 21:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:35.737 63488+0 records in 00:28:35.737 63488+0 records out 00:28:35.737 32505856 bytes (33 MB, 31 MiB) copied, 3.93367 s, 8.3 MB/s 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
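Stripped of the xtrace prefixes, the NBD round trip just performed is the three commands below (the same RPCs and dd invocation seen in the trace, collected only to make the data path easier to follow):

    # Expose raid_bdev1 as a kernel block device, write the full 63488-block payload, then detach it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0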
00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:35.737 [2024-07-15 21:42:08.940478] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:35.737 21:42:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:35.995 [2024-07-15 21:42:09.119841] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.995 "name": "raid_bdev1", 00:28:35.995 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:35.995 "strip_size_kb": 0, 00:28:35.995 "state": "online", 00:28:35.995 "raid_level": "raid1", 00:28:35.995 "superblock": true, 00:28:35.995 "num_base_bdevs": 2, 00:28:35.995 "num_base_bdevs_discovered": 1, 00:28:35.995 "num_base_bdevs_operational": 1, 00:28:35.995 "base_bdevs_list": [ 00:28:35.995 { 00:28:35.995 "name": null, 00:28:35.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.995 "is_configured": false, 00:28:35.995 "data_offset": 2048, 00:28:35.995 "data_size": 63488 00:28:35.995 }, 00:28:35.995 { 00:28:35.995 "name": "BaseBdev2", 
00:28:35.995 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:35.995 "is_configured": true, 00:28:35.995 "data_offset": 2048, 00:28:35.995 "data_size": 63488 00:28:35.995 } 00:28:35.995 ] 00:28:35.995 }' 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.995 21:42:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.561 21:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:36.819 [2024-07-15 21:42:10.130162] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:36.819 [2024-07-15 21:42:10.144016] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca50a0 00:28:36.819 [2024-07-15 21:42:10.145788] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:36.819 21:42:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:38.196 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.196 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.196 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:38.196 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:38.196 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.196 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.196 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.196 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.197 "name": "raid_bdev1", 00:28:38.197 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:38.197 "strip_size_kb": 0, 00:28:38.197 "state": "online", 00:28:38.197 "raid_level": "raid1", 00:28:38.197 "superblock": true, 00:28:38.197 "num_base_bdevs": 2, 00:28:38.197 "num_base_bdevs_discovered": 2, 00:28:38.197 "num_base_bdevs_operational": 2, 00:28:38.197 "process": { 00:28:38.197 "type": "rebuild", 00:28:38.197 "target": "spare", 00:28:38.197 "progress": { 00:28:38.197 "blocks": 24576, 00:28:38.197 "percent": 38 00:28:38.197 } 00:28:38.197 }, 00:28:38.197 "base_bdevs_list": [ 00:28:38.197 { 00:28:38.197 "name": "spare", 00:28:38.197 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:38.197 "is_configured": true, 00:28:38.197 "data_offset": 2048, 00:28:38.197 "data_size": 63488 00:28:38.197 }, 00:28:38.197 { 00:28:38.197 "name": "BaseBdev2", 00:28:38.197 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:38.197 "is_configured": true, 00:28:38.197 "data_offset": 2048, 00:28:38.197 "data_size": 63488 00:28:38.197 } 00:28:38.197 ] 00:28:38.197 }' 00:28:38.197 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.197 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.197 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.197 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ 
spare == \s\p\a\r\e ]] 00:28:38.197 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:38.455 [2024-07-15 21:42:11.694464] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:38.455 [2024-07-15 21:42:11.752961] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:38.455 [2024-07-15 21:42:11.753075] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.455 [2024-07-15 21:42:11.753102] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:38.455 [2024-07-15 21:42:11.753122] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.455 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.456 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.715 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.715 "name": "raid_bdev1", 00:28:38.715 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:38.715 "strip_size_kb": 0, 00:28:38.715 "state": "online", 00:28:38.715 "raid_level": "raid1", 00:28:38.715 "superblock": true, 00:28:38.715 "num_base_bdevs": 2, 00:28:38.715 "num_base_bdevs_discovered": 1, 00:28:38.715 "num_base_bdevs_operational": 1, 00:28:38.715 "base_bdevs_list": [ 00:28:38.715 { 00:28:38.715 "name": null, 00:28:38.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.715 "is_configured": false, 00:28:38.715 "data_offset": 2048, 00:28:38.715 "data_size": 63488 00:28:38.715 }, 00:28:38.715 { 00:28:38.715 "name": "BaseBdev2", 00:28:38.715 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:38.715 "is_configured": true, 00:28:38.715 "data_offset": 2048, 00:28:38.715 "data_size": 63488 00:28:38.715 } 00:28:38.715 ] 00:28:38.715 }' 00:28:38.715 21:42:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.715 21:42:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.284 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:39.284 
21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:39.284 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:39.284 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:39.284 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:39.284 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.284 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.544 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:39.544 "name": "raid_bdev1", 00:28:39.544 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:39.544 "strip_size_kb": 0, 00:28:39.544 "state": "online", 00:28:39.544 "raid_level": "raid1", 00:28:39.544 "superblock": true, 00:28:39.544 "num_base_bdevs": 2, 00:28:39.544 "num_base_bdevs_discovered": 1, 00:28:39.544 "num_base_bdevs_operational": 1, 00:28:39.544 "base_bdevs_list": [ 00:28:39.544 { 00:28:39.544 "name": null, 00:28:39.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.544 "is_configured": false, 00:28:39.544 "data_offset": 2048, 00:28:39.544 "data_size": 63488 00:28:39.544 }, 00:28:39.544 { 00:28:39.544 "name": "BaseBdev2", 00:28:39.544 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:39.544 "is_configured": true, 00:28:39.544 "data_offset": 2048, 00:28:39.544 "data_size": 63488 00:28:39.544 } 00:28:39.544 ] 00:28:39.544 }' 00:28:39.544 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:39.544 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:39.544 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.544 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:39.544 21:42:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:39.803 [2024-07-15 21:42:13.086048] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:39.803 [2024-07-15 21:42:13.100873] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5240 00:28:39.803 [2024-07-15 21:42:13.102795] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:39.803 21:42:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
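A note on the pattern being traced immediately above and below: the bdev_raid.sh@182-@190 lines are the body of the verify_raid_bdev_process helper. Condensed into a standalone sketch (the rpc.py path, socket and bdev name are copied from this run; the $rpc/$sock/$info shorthands are ours, not the script's), the check amounts to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Dump all raid bdevs over the app's RPC socket and keep only raid_bdev1
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # The background process type and target must match the expected values
  # ("rebuild" and "spare" at this point in the test)
  [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]]
  [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]

The // "none" fallback in the jq filters is what lets the same helper also assert that no process is running, as in the verify_raid_bdev_process raid_bdev1 none none calls elsewhere in this trace.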
00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.183 "name": "raid_bdev1", 00:28:41.183 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:41.183 "strip_size_kb": 0, 00:28:41.183 "state": "online", 00:28:41.183 "raid_level": "raid1", 00:28:41.183 "superblock": true, 00:28:41.183 "num_base_bdevs": 2, 00:28:41.183 "num_base_bdevs_discovered": 2, 00:28:41.183 "num_base_bdevs_operational": 2, 00:28:41.183 "process": { 00:28:41.183 "type": "rebuild", 00:28:41.183 "target": "spare", 00:28:41.183 "progress": { 00:28:41.183 "blocks": 24576, 00:28:41.183 "percent": 38 00:28:41.183 } 00:28:41.183 }, 00:28:41.183 "base_bdevs_list": [ 00:28:41.183 { 00:28:41.183 "name": "spare", 00:28:41.183 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:41.183 "is_configured": true, 00:28:41.183 "data_offset": 2048, 00:28:41.183 "data_size": 63488 00:28:41.183 }, 00:28:41.183 { 00:28:41.183 "name": "BaseBdev2", 00:28:41.183 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:41.183 "is_configured": true, 00:28:41.183 "data_offset": 2048, 00:28:41.183 "data_size": 63488 00:28:41.183 } 00:28:41.183 ] 00:28:41.183 }' 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:41.183 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=787 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.183 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.443 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:28:41.443 "name": "raid_bdev1", 00:28:41.443 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:41.443 "strip_size_kb": 0, 00:28:41.443 "state": "online", 00:28:41.443 "raid_level": "raid1", 00:28:41.443 "superblock": true, 00:28:41.443 "num_base_bdevs": 2, 00:28:41.443 "num_base_bdevs_discovered": 2, 00:28:41.443 "num_base_bdevs_operational": 2, 00:28:41.443 "process": { 00:28:41.443 "type": "rebuild", 00:28:41.443 "target": "spare", 00:28:41.443 "progress": { 00:28:41.443 "blocks": 30720, 00:28:41.443 "percent": 48 00:28:41.443 } 00:28:41.443 }, 00:28:41.443 "base_bdevs_list": [ 00:28:41.443 { 00:28:41.443 "name": "spare", 00:28:41.443 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:41.443 "is_configured": true, 00:28:41.443 "data_offset": 2048, 00:28:41.443 "data_size": 63488 00:28:41.443 }, 00:28:41.443 { 00:28:41.443 "name": "BaseBdev2", 00:28:41.443 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:41.443 "is_configured": true, 00:28:41.443 "data_offset": 2048, 00:28:41.443 "data_size": 63488 00:28:41.443 } 00:28:41.443 ] 00:28:41.443 }' 00:28:41.443 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.443 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:41.443 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.443 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:41.443 21:42:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.821 "name": "raid_bdev1", 00:28:42.821 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:42.821 "strip_size_kb": 0, 00:28:42.821 "state": "online", 00:28:42.821 "raid_level": "raid1", 00:28:42.821 "superblock": true, 00:28:42.821 "num_base_bdevs": 2, 00:28:42.821 "num_base_bdevs_discovered": 2, 00:28:42.821 "num_base_bdevs_operational": 2, 00:28:42.821 "process": { 00:28:42.821 "type": "rebuild", 00:28:42.821 "target": "spare", 00:28:42.821 "progress": { 00:28:42.821 "blocks": 57344, 00:28:42.821 "percent": 90 00:28:42.821 } 00:28:42.821 }, 00:28:42.821 "base_bdevs_list": [ 00:28:42.821 { 00:28:42.821 "name": "spare", 00:28:42.821 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:42.821 "is_configured": true, 00:28:42.821 "data_offset": 2048, 00:28:42.821 "data_size": 63488 00:28:42.821 }, 00:28:42.821 { 
00:28:42.821 "name": "BaseBdev2", 00:28:42.821 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:42.821 "is_configured": true, 00:28:42.821 "data_offset": 2048, 00:28:42.821 "data_size": 63488 00:28:42.821 } 00:28:42.821 ] 00:28:42.821 }' 00:28:42.821 21:42:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.821 21:42:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.821 21:42:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.821 21:42:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.821 21:42:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:43.080 [2024-07-15 21:42:16.215944] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:43.080 [2024-07-15 21:42:16.216099] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:43.080 [2024-07-15 21:42:16.216276] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:44.018 "name": "raid_bdev1", 00:28:44.018 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:44.018 "strip_size_kb": 0, 00:28:44.018 "state": "online", 00:28:44.018 "raid_level": "raid1", 00:28:44.018 "superblock": true, 00:28:44.018 "num_base_bdevs": 2, 00:28:44.018 "num_base_bdevs_discovered": 2, 00:28:44.018 "num_base_bdevs_operational": 2, 00:28:44.018 "base_bdevs_list": [ 00:28:44.018 { 00:28:44.018 "name": "spare", 00:28:44.018 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:44.018 "is_configured": true, 00:28:44.018 "data_offset": 2048, 00:28:44.018 "data_size": 63488 00:28:44.018 }, 00:28:44.018 { 00:28:44.018 "name": "BaseBdev2", 00:28:44.018 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:44.018 "is_configured": true, 00:28:44.018 "data_offset": 2048, 00:28:44.018 "data_size": 63488 00:28:44.018 } 00:28:44.018 ] 00:28:44.018 }' 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none 
== \s\p\a\r\e ]] 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:44.018 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:44.277 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.277 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.277 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:44.277 "name": "raid_bdev1", 00:28:44.277 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:44.277 "strip_size_kb": 0, 00:28:44.277 "state": "online", 00:28:44.277 "raid_level": "raid1", 00:28:44.277 "superblock": true, 00:28:44.277 "num_base_bdevs": 2, 00:28:44.277 "num_base_bdevs_discovered": 2, 00:28:44.277 "num_base_bdevs_operational": 2, 00:28:44.277 "base_bdevs_list": [ 00:28:44.277 { 00:28:44.277 "name": "spare", 00:28:44.277 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:44.277 "is_configured": true, 00:28:44.277 "data_offset": 2048, 00:28:44.277 "data_size": 63488 00:28:44.277 }, 00:28:44.278 { 00:28:44.278 "name": "BaseBdev2", 00:28:44.278 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:44.278 "is_configured": true, 00:28:44.278 "data_offset": 2048, 00:28:44.278 "data_size": 63488 00:28:44.278 } 00:28:44.278 ] 00:28:44.278 }' 00:28:44.278 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:44.278 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:44.278 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.539 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.540 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:44.540 "name": "raid_bdev1", 00:28:44.540 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:44.540 "strip_size_kb": 0, 00:28:44.540 "state": "online", 00:28:44.540 "raid_level": "raid1", 00:28:44.540 "superblock": true, 00:28:44.540 "num_base_bdevs": 2, 00:28:44.540 "num_base_bdevs_discovered": 2, 00:28:44.540 "num_base_bdevs_operational": 2, 00:28:44.540 "base_bdevs_list": [ 00:28:44.540 { 00:28:44.540 "name": "spare", 00:28:44.540 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:44.540 "is_configured": true, 00:28:44.540 "data_offset": 2048, 00:28:44.540 "data_size": 63488 00:28:44.540 }, 00:28:44.540 { 00:28:44.540 "name": "BaseBdev2", 00:28:44.540 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:44.540 "is_configured": true, 00:28:44.540 "data_offset": 2048, 00:28:44.540 "data_size": 63488 00:28:44.540 } 00:28:44.540 ] 00:28:44.540 }' 00:28:44.540 21:42:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:44.540 21:42:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:45.110 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:45.370 [2024-07-15 21:42:18.641231] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:45.370 [2024-07-15 21:42:18.641344] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:45.370 [2024-07-15 21:42:18.641465] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:45.370 [2024-07-15 21:42:18.641558] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:45.370 [2024-07-15 21:42:18.641602] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:28:45.370 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.370 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:45.630 21:42:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:45.630 21:42:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:45.890 /dev/nbd0 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:45.890 1+0 records in 00:28:45.890 1+0 records out 00:28:45.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466102 s, 8.8 MB/s 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:45.890 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:46.150 /dev/nbd1 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q 
-w nbd1 /proc/partitions 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:46.150 1+0 records in 00:28:46.150 1+0 records out 00:28:46.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460368 s, 8.9 MB/s 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:46.150 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:46.409 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:46.668 21:42:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:46.925 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:47.183 [2024-07-15 21:42:20.304140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:47.183 [2024-07-15 21:42:20.304288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.183 [2024-07-15 21:42:20.304373] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:47.183 [2024-07-15 21:42:20.304410] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.183 [2024-07-15 21:42:20.306455] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.183 [2024-07-15 21:42:20.306544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:47.183 [2024-07-15 21:42:20.306681] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:47.183 [2024-07-15 21:42:20.306742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:47.183 [2024-07-15 21:42:20.306954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:47.183 spare 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.183 [2024-07-15 21:42:20.406887] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:28:47.183 [2024-07-15 21:42:20.406984] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:47.183 [2024-07-15 21:42:20.407182] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5be0 00:28:47.183 [2024-07-15 21:42:20.407538] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:28:47.183 [2024-07-15 21:42:20.407583] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:28:47.183 [2024-07-15 21:42:20.407733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.183 "name": "raid_bdev1", 00:28:47.183 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:47.183 "strip_size_kb": 0, 00:28:47.183 "state": "online", 00:28:47.183 "raid_level": "raid1", 00:28:47.183 "superblock": true, 00:28:47.183 "num_base_bdevs": 2, 00:28:47.183 "num_base_bdevs_discovered": 2, 00:28:47.183 "num_base_bdevs_operational": 2, 00:28:47.183 "base_bdevs_list": [ 00:28:47.183 { 00:28:47.183 "name": "spare", 00:28:47.183 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:47.183 "is_configured": true, 00:28:47.183 "data_offset": 2048, 00:28:47.183 "data_size": 63488 00:28:47.183 }, 00:28:47.183 { 00:28:47.183 "name": "BaseBdev2", 00:28:47.183 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:47.183 "is_configured": true, 00:28:47.183 "data_offset": 2048, 00:28:47.183 "data_size": 63488 00:28:47.183 } 00:28:47.183 ] 00:28:47.183 }' 00:28:47.183 21:42:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.184 21:42:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:48.136 "name": "raid_bdev1", 00:28:48.136 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:48.136 "strip_size_kb": 0, 00:28:48.136 "state": "online", 00:28:48.136 "raid_level": "raid1", 00:28:48.136 "superblock": true, 00:28:48.136 "num_base_bdevs": 2, 00:28:48.136 "num_base_bdevs_discovered": 2, 00:28:48.136 "num_base_bdevs_operational": 2, 00:28:48.136 "base_bdevs_list": [ 00:28:48.136 { 00:28:48.136 "name": 
"spare", 00:28:48.136 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:48.136 "is_configured": true, 00:28:48.136 "data_offset": 2048, 00:28:48.136 "data_size": 63488 00:28:48.136 }, 00:28:48.136 { 00:28:48.136 "name": "BaseBdev2", 00:28:48.136 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:48.136 "is_configured": true, 00:28:48.136 "data_offset": 2048, 00:28:48.136 "data_size": 63488 00:28:48.136 } 00:28:48.136 ] 00:28:48.136 }' 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.136 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:48.395 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:48.395 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:48.653 [2024-07-15 21:42:21.793620] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:48.653 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:48.653 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:48.653 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:48.653 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:48.653 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:48.654 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:48.654 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.654 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.654 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.654 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.654 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.654 21:42:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.654 21:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.654 "name": "raid_bdev1", 00:28:48.654 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:48.654 "strip_size_kb": 0, 00:28:48.654 "state": "online", 00:28:48.654 "raid_level": "raid1", 00:28:48.654 "superblock": true, 00:28:48.654 "num_base_bdevs": 2, 00:28:48.654 "num_base_bdevs_discovered": 1, 00:28:48.654 "num_base_bdevs_operational": 1, 00:28:48.654 "base_bdevs_list": [ 00:28:48.654 { 00:28:48.654 "name": null, 00:28:48.654 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:48.654 "is_configured": false, 00:28:48.654 "data_offset": 2048, 00:28:48.654 "data_size": 63488 00:28:48.654 }, 00:28:48.654 { 00:28:48.654 "name": "BaseBdev2", 00:28:48.654 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:48.654 "is_configured": true, 00:28:48.654 "data_offset": 2048, 00:28:48.654 "data_size": 63488 00:28:48.654 } 00:28:48.654 ] 00:28:48.654 }' 00:28:48.654 21:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.654 21:42:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.223 21:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:49.482 [2024-07-15 21:42:22.748062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.482 [2024-07-15 21:42:22.748311] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:49.482 [2024-07-15 21:42:22.748351] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:49.482 [2024-07-15 21:42:22.748411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.482 [2024-07-15 21:42:22.762914] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5d80 00:28:49.482 21:42:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:49.482 [2024-07-15 21:42:22.773185] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:50.417 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.417 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:50.417 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:50.417 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:50.417 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:50.417 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.417 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.676 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:50.676 "name": "raid_bdev1", 00:28:50.676 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:50.676 "strip_size_kb": 0, 00:28:50.676 "state": "online", 00:28:50.676 "raid_level": "raid1", 00:28:50.676 "superblock": true, 00:28:50.676 "num_base_bdevs": 2, 00:28:50.676 "num_base_bdevs_discovered": 2, 00:28:50.676 "num_base_bdevs_operational": 2, 00:28:50.676 "process": { 00:28:50.676 "type": "rebuild", 00:28:50.676 "target": "spare", 00:28:50.676 "progress": { 00:28:50.676 "blocks": 22528, 00:28:50.676 "percent": 35 00:28:50.676 } 00:28:50.676 }, 00:28:50.676 "base_bdevs_list": [ 00:28:50.676 { 00:28:50.676 "name": "spare", 00:28:50.676 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:50.676 "is_configured": true, 00:28:50.676 "data_offset": 2048, 00:28:50.676 "data_size": 63488 00:28:50.676 }, 00:28:50.676 { 00:28:50.676 "name": 
"BaseBdev2", 00:28:50.676 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:50.676 "is_configured": true, 00:28:50.676 "data_offset": 2048, 00:28:50.676 "data_size": 63488 00:28:50.676 } 00:28:50.676 ] 00:28:50.676 }' 00:28:50.676 21:42:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:50.676 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.676 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:50.935 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.935 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:50.935 [2024-07-15 21:42:24.271706] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.935 [2024-07-15 21:42:24.279471] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:50.935 [2024-07-15 21:42:24.279596] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:50.935 [2024-07-15 21:42:24.279627] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.935 [2024-07-15 21:42:24.279650] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:51.196 "name": "raid_bdev1", 00:28:51.196 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:51.196 "strip_size_kb": 0, 00:28:51.196 "state": "online", 00:28:51.196 "raid_level": "raid1", 00:28:51.196 "superblock": true, 00:28:51.196 "num_base_bdevs": 2, 00:28:51.196 "num_base_bdevs_discovered": 1, 00:28:51.196 "num_base_bdevs_operational": 1, 00:28:51.196 "base_bdevs_list": [ 00:28:51.196 { 00:28:51.196 "name": null, 00:28:51.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.196 "is_configured": false, 00:28:51.196 "data_offset": 
2048, 00:28:51.196 "data_size": 63488 00:28:51.196 }, 00:28:51.196 { 00:28:51.196 "name": "BaseBdev2", 00:28:51.196 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:51.196 "is_configured": true, 00:28:51.196 "data_offset": 2048, 00:28:51.196 "data_size": 63488 00:28:51.196 } 00:28:51.196 ] 00:28:51.196 }' 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:51.196 21:42:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.767 21:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:52.027 [2024-07-15 21:42:25.301837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:52.027 [2024-07-15 21:42:25.302000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:52.027 [2024-07-15 21:42:25.302051] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:52.027 [2024-07-15 21:42:25.302098] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:52.027 [2024-07-15 21:42:25.302637] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:52.027 [2024-07-15 21:42:25.302711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:52.027 [2024-07-15 21:42:25.302865] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:52.027 [2024-07-15 21:42:25.302902] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:52.027 [2024-07-15 21:42:25.302926] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
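The NOTICE just above records the point of the superblock (_sb) variant of this test: once the spare passthru bdev is re-created on top of spare_delay, the examine path finds the RAID superblock on it (sequence number 4, older than the raid bdev's 5) and re-adds it to raid_bdev1 on its own; unlike the earlier cycle, no explicit bdev_raid_add_base_bdev call appears here. A minimal sketch of the round trip the trace performs, using only RPCs that occur in this log (paths and names as in this run; the $rpc/$sock shorthands are ours):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Drop the member, then recreate the passthru over the same delay bdev;
  # the on-disk superblock triggers the automatic re-add and a rebuild toward it.
  "$rpc" -s "$sock" bdev_passthru_delete spare
  "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare
  sleep 1
  # The rebuild should now be reported as running with "spare" as its target
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process'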
00:28:52.027 [2024-07-15 21:42:25.302976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:52.027 [2024-07-15 21:42:25.319073] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc60c0 00:28:52.027 spare 00:28:52.027 [2024-07-15 21:42:25.320718] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:52.027 21:42:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:52.966 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.966 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:52.966 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:52.966 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:52.966 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.225 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.225 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.225 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.225 "name": "raid_bdev1", 00:28:53.225 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:53.225 "strip_size_kb": 0, 00:28:53.225 "state": "online", 00:28:53.225 "raid_level": "raid1", 00:28:53.225 "superblock": true, 00:28:53.225 "num_base_bdevs": 2, 00:28:53.225 "num_base_bdevs_discovered": 2, 00:28:53.225 "num_base_bdevs_operational": 2, 00:28:53.225 "process": { 00:28:53.225 "type": "rebuild", 00:28:53.225 "target": "spare", 00:28:53.225 "progress": { 00:28:53.225 "blocks": 22528, 00:28:53.225 "percent": 35 00:28:53.225 } 00:28:53.225 }, 00:28:53.225 "base_bdevs_list": [ 00:28:53.225 { 00:28:53.225 "name": "spare", 00:28:53.225 "uuid": "f3775f3b-a195-5a33-b7ef-e650dd02f2f6", 00:28:53.225 "is_configured": true, 00:28:53.225 "data_offset": 2048, 00:28:53.225 "data_size": 63488 00:28:53.225 }, 00:28:53.225 { 00:28:53.225 "name": "BaseBdev2", 00:28:53.225 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:53.225 "is_configured": true, 00:28:53.225 "data_offset": 2048, 00:28:53.225 "data_size": 63488 00:28:53.225 } 00:28:53.225 ] 00:28:53.225 }' 00:28:53.225 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.225 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.225 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.484 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.485 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:53.485 [2024-07-15 21:42:26.808306] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:53.485 [2024-07-15 21:42:26.826945] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:53.485 [2024-07-15 21:42:26.827080] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
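Deleting spare while it is still the rebuild target is what produces the WARNING/ERROR pair around this point: the rebuild is torn down first ("Finished rebuild ... No such device") and the process-finish callback then fails to find its target. The @767 verify_raid_bdev_state check that follows only traces its setup (the per-field comparisons run under xtrace_disable at @128), but judging from the locals it sets and the JSON it fetches, it presumably boils down to something like the following approximation (not a copy of the helper; $rpc/$sock/$info are our shorthands):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Fetch raid_bdev1 and assert it stays online as a degraded raid1
  # (1 of 2 base bdevs discovered/operational, strip size 0)
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$info") == online ]]
  [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
  (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ))
  (( $(jq -r '.num_base_bdevs_operational' <<< "$info") == 1 ))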
00:28:53.485 [2024-07-15 21:42:26.827110] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:53.485 [2024-07-15 21:42:26.827134] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.745 21:42:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.745 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:53.745 "name": "raid_bdev1", 00:28:53.745 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:53.745 "strip_size_kb": 0, 00:28:53.745 "state": "online", 00:28:53.745 "raid_level": "raid1", 00:28:53.745 "superblock": true, 00:28:53.745 "num_base_bdevs": 2, 00:28:53.745 "num_base_bdevs_discovered": 1, 00:28:53.745 "num_base_bdevs_operational": 1, 00:28:53.745 "base_bdevs_list": [ 00:28:53.745 { 00:28:53.745 "name": null, 00:28:53.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.745 "is_configured": false, 00:28:53.745 "data_offset": 2048, 00:28:53.745 "data_size": 63488 00:28:53.745 }, 00:28:53.745 { 00:28:53.745 "name": "BaseBdev2", 00:28:53.745 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:53.745 "is_configured": true, 00:28:53.745 "data_offset": 2048, 00:28:53.745 "data_size": 63488 00:28:53.745 } 00:28:53.745 ] 00:28:53.745 }' 00:28:53.745 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:53.745 21:42:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:54.709 "name": "raid_bdev1", 00:28:54.709 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:54.709 "strip_size_kb": 0, 00:28:54.709 "state": "online", 00:28:54.709 "raid_level": "raid1", 00:28:54.709 "superblock": true, 00:28:54.709 "num_base_bdevs": 2, 00:28:54.709 "num_base_bdevs_discovered": 1, 00:28:54.709 "num_base_bdevs_operational": 1, 00:28:54.709 "base_bdevs_list": [ 00:28:54.709 { 00:28:54.709 "name": null, 00:28:54.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.709 "is_configured": false, 00:28:54.709 "data_offset": 2048, 00:28:54.709 "data_size": 63488 00:28:54.709 }, 00:28:54.709 { 00:28:54.709 "name": "BaseBdev2", 00:28:54.709 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:54.709 "is_configured": true, 00:28:54.709 "data_offset": 2048, 00:28:54.709 "data_size": 63488 00:28:54.709 } 00:28:54.709 ] 00:28:54.709 }' 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:54.709 21:42:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:54.709 21:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:54.709 21:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:54.968 21:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:55.228 [2024-07-15 21:42:28.360812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:55.228 [2024-07-15 21:42:28.360967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:55.228 [2024-07-15 21:42:28.361010] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:55.228 [2024-07-15 21:42:28.361044] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:55.228 [2024-07-15 21:42:28.361516] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:55.228 [2024-07-15 21:42:28.361574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:55.228 [2024-07-15 21:42:28.361706] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:55.228 [2024-07-15 21:42:28.361743] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:55.228 [2024-07-15 21:42:28.361765] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:55.228 BaseBdev1 00:28:55.228 21:42:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:56.163 21:42:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.163 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.420 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:56.420 "name": "raid_bdev1", 00:28:56.420 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:56.420 "strip_size_kb": 0, 00:28:56.420 "state": "online", 00:28:56.420 "raid_level": "raid1", 00:28:56.420 "superblock": true, 00:28:56.420 "num_base_bdevs": 2, 00:28:56.420 "num_base_bdevs_discovered": 1, 00:28:56.420 "num_base_bdevs_operational": 1, 00:28:56.420 "base_bdevs_list": [ 00:28:56.420 { 00:28:56.420 "name": null, 00:28:56.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.420 "is_configured": false, 00:28:56.420 "data_offset": 2048, 00:28:56.420 "data_size": 63488 00:28:56.420 }, 00:28:56.420 { 00:28:56.420 "name": "BaseBdev2", 00:28:56.420 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:56.420 "is_configured": true, 00:28:56.420 "data_offset": 2048, 00:28:56.420 "data_size": 63488 00:28:56.420 } 00:28:56.420 ] 00:28:56.420 }' 00:28:56.420 21:42:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:56.420 21:42:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.985 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:56.985 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:56.985 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:56.985 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:56.985 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:56.985 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.985 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:57.243 "name": "raid_bdev1", 00:28:57.243 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:57.243 "strip_size_kb": 0, 00:28:57.243 "state": "online", 00:28:57.243 "raid_level": "raid1", 00:28:57.243 "superblock": true, 00:28:57.243 "num_base_bdevs": 2, 00:28:57.243 
"num_base_bdevs_discovered": 1, 00:28:57.243 "num_base_bdevs_operational": 1, 00:28:57.243 "base_bdevs_list": [ 00:28:57.243 { 00:28:57.243 "name": null, 00:28:57.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.243 "is_configured": false, 00:28:57.243 "data_offset": 2048, 00:28:57.243 "data_size": 63488 00:28:57.243 }, 00:28:57.243 { 00:28:57.243 "name": "BaseBdev2", 00:28:57.243 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:57.243 "is_configured": true, 00:28:57.243 "data_offset": 2048, 00:28:57.243 "data_size": 63488 00:28:57.243 } 00:28:57.243 ] 00:28:57.243 }' 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:57.243 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:57.501 [2024-07-15 21:42:30.668899] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:57.501 [2024-07-15 21:42:30.669139] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:57.501 [2024-07-15 21:42:30.669172] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:57.501 request: 00:28:57.501 { 00:28:57.501 "base_bdev": "BaseBdev1", 00:28:57.501 "raid_bdev": "raid_bdev1", 00:28:57.501 "method": "bdev_raid_add_base_bdev", 00:28:57.501 "req_id": 1 00:28:57.501 } 00:28:57.501 Got JSON-RPC error response 00:28:57.501 
response: 00:28:57.501 { 00:28:57.501 "code": -22, 00:28:57.501 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:57.501 } 00:28:57.501 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:28:57.501 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:57.501 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:57.501 21:42:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:57.501 21:42:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:58.501 "name": "raid_bdev1", 00:28:58.501 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:58.501 "strip_size_kb": 0, 00:28:58.501 "state": "online", 00:28:58.501 "raid_level": "raid1", 00:28:58.501 "superblock": true, 00:28:58.501 "num_base_bdevs": 2, 00:28:58.501 "num_base_bdevs_discovered": 1, 00:28:58.501 "num_base_bdevs_operational": 1, 00:28:58.501 "base_bdevs_list": [ 00:28:58.501 { 00:28:58.501 "name": null, 00:28:58.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.501 "is_configured": false, 00:28:58.501 "data_offset": 2048, 00:28:58.501 "data_size": 63488 00:28:58.501 }, 00:28:58.501 { 00:28:58.501 "name": "BaseBdev2", 00:28:58.501 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:58.501 "is_configured": true, 00:28:58.501 "data_offset": 2048, 00:28:58.501 "data_size": 63488 00:28:58.501 } 00:28:58.501 ] 00:28:58.501 }' 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:58.501 21:42:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:59.434 "name": "raid_bdev1", 00:28:59.434 "uuid": "826a955b-03e9-44d9-bc99-915214de76cc", 00:28:59.434 "strip_size_kb": 0, 00:28:59.434 "state": "online", 00:28:59.434 "raid_level": "raid1", 00:28:59.434 "superblock": true, 00:28:59.434 "num_base_bdevs": 2, 00:28:59.434 "num_base_bdevs_discovered": 1, 00:28:59.434 "num_base_bdevs_operational": 1, 00:28:59.434 "base_bdevs_list": [ 00:28:59.434 { 00:28:59.434 "name": null, 00:28:59.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.434 "is_configured": false, 00:28:59.434 "data_offset": 2048, 00:28:59.434 "data_size": 63488 00:28:59.434 }, 00:28:59.434 { 00:28:59.434 "name": "BaseBdev2", 00:28:59.434 "uuid": "54f52184-b452-5f14-8ed8-75c7a8bf5701", 00:28:59.434 "is_configured": true, 00:28:59.434 "data_offset": 2048, 00:28:59.434 "data_size": 63488 00:28:59.434 } 00:28:59.434 ] 00:28:59.434 }' 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 145895 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 145895 ']' 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 145895 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145895 00:28:59.434 killing process with pid 145895 00:28:59.434 Received shutdown signal, test time was about 60.000000 seconds 00:28:59.434 00:28:59.434 Latency(us) 00:28:59.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.434 =================================================================================================================== 00:28:59.434 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145895' 00:28:59.434 21:42:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 145895 00:28:59.434 21:42:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 145895 00:28:59.434 [2024-07-15 21:42:32.798922] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:59.434 [2024-07-15 21:42:32.799108] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:59.434 [2024-07-15 21:42:32.799212] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:59.434 [2024-07-15 21:42:32.799247] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:29:00.002 [2024-07-15 21:42:33.079489] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:00.936 ************************************ 00:29:00.936 END TEST raid_rebuild_test_sb 00:29:00.936 ************************************ 00:29:00.936 21:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:29:00.936 00:29:00.936 real 0m33.535s 00:29:00.936 user 0m49.441s 00:29:00.936 sys 0m4.540s 00:29:00.936 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:00.936 21:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.193 21:42:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:01.193 21:42:34 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:29:01.193 21:42:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:01.193 21:42:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.193 21:42:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:01.193 ************************************ 00:29:01.193 START TEST raid_rebuild_test_io 00:29:01.193 ************************************ 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=146877 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 146877 /var/tmp/spdk-raid.sock 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 146877 ']' 00:29:01.193 21:42:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:01.194 21:42:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.194 21:42:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:01.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:01.194 21:42:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.194 21:42:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:01.194 [2024-07-15 21:42:34.437821] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:29:01.194 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:01.194 Zero copy mechanism will not be used. 
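For the I/O variant started above, bdevperf is launched as an RPC server (-z) with the job armed but deferred (-U) so the raid set can be assembled first; waitforlisten then blocks until the socket answers. A rough sketch of that startup sequence, using the exact flags from the log: the wait_for_rpc helper is a hypothetical stand-in for waitforlisten, and rpc_get_methods is used here only as a cheap liveness probe.

```bash
#!/usr/bin/env bash
# Start bdevperf in RPC-server mode with the flags seen in the log, then poll
# the socket until it answers. wait_for_rpc stands in for waitforlisten.
set -euo pipefail

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

wait_for_rpc() {
    # Issue a cheap RPC until the application is listening (give up after ~30 s).
    local i
    for i in $(seq 1 300); do
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

wait_for_rpc
echo "bdevperf (pid $raid_pid) is up; the raid set can now be configured over $sock"
```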
00:29:01.194 [2024-07-15 21:42:34.438037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146877 ] 00:29:01.452 [2024-07-15 21:42:34.576430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.452 [2024-07-15 21:42:34.772068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.710 [2024-07-15 21:42:34.956604] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:01.968 21:42:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.968 21:42:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:29:01.968 21:42:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:01.968 21:42:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:02.226 BaseBdev1_malloc 00:29:02.226 21:42:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:02.484 [2024-07-15 21:42:35.686424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:02.484 [2024-07-15 21:42:35.686641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.484 [2024-07-15 21:42:35.686707] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:02.484 [2024-07-15 21:42:35.686749] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.484 [2024-07-15 21:42:35.688726] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.484 [2024-07-15 21:42:35.688810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:02.484 BaseBdev1 00:29:02.484 21:42:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:02.484 21:42:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:02.741 BaseBdev2_malloc 00:29:02.741 21:42:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:02.741 [2024-07-15 21:42:36.111548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:02.741 [2024-07-15 21:42:36.111725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.741 [2024-07-15 21:42:36.111789] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:29:02.741 [2024-07-15 21:42:36.111831] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.741 [2024-07-15 21:42:36.113731] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.741 [2024-07-15 21:42:36.113816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:02.741 BaseBdev2 00:29:03.000 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:03.000 spare_malloc 00:29:03.000 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:03.257 spare_delay 00:29:03.257 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:03.516 [2024-07-15 21:42:36.698053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:03.516 [2024-07-15 21:42:36.698229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.516 [2024-07-15 21:42:36.698277] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:03.516 [2024-07-15 21:42:36.698323] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.516 [2024-07-15 21:42:36.700221] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.516 [2024-07-15 21:42:36.700318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:03.516 spare 00:29:03.516 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:03.774 [2024-07-15 21:42:36.893779] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:03.774 [2024-07-15 21:42:36.895535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:03.774 [2024-07-15 21:42:36.895679] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:29:03.774 [2024-07-15 21:42:36.895705] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:03.774 [2024-07-15 21:42:36.895885] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:29:03.774 [2024-07-15 21:42:36.896214] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:29:03.774 [2024-07-15 21:42:36.896256] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:29:03.774 [2024-07-15 21:42:36.896439] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.774 21:42:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.774 21:42:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.774 "name": "raid_bdev1", 00:29:03.774 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:03.774 "strip_size_kb": 0, 00:29:03.774 "state": "online", 00:29:03.774 "raid_level": "raid1", 00:29:03.774 "superblock": false, 00:29:03.774 "num_base_bdevs": 2, 00:29:03.774 "num_base_bdevs_discovered": 2, 00:29:03.774 "num_base_bdevs_operational": 2, 00:29:03.774 "base_bdevs_list": [ 00:29:03.774 { 00:29:03.774 "name": "BaseBdev1", 00:29:03.774 "uuid": "09ee3a42-9eb8-5896-aad7-b40c9012ef4c", 00:29:03.774 "is_configured": true, 00:29:03.774 "data_offset": 0, 00:29:03.774 "data_size": 65536 00:29:03.774 }, 00:29:03.774 { 00:29:03.774 "name": "BaseBdev2", 00:29:03.774 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:03.774 "is_configured": true, 00:29:03.774 "data_offset": 0, 00:29:03.774 "data_size": 65536 00:29:03.774 } 00:29:03.774 ] 00:29:03.774 }' 00:29:03.774 21:42:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.774 21:42:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.341 21:42:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:04.341 21:42:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:04.610 [2024-07-15 21:42:37.872265] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:04.610 21:42:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:04.610 21:42:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.610 21:42:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:04.881 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:04.881 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:04.881 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:04.881 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:04.881 [2024-07-15 21:42:38.138288] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:04.881 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:04.881 Zero copy mechanism will not be used. 00:29:04.881 Running I/O for 60 seconds... 
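Right before the background workload starts, the harness reads the raid bdev's geometry over RPC: num_blocks from bdev_get_bdevs and the first base bdev's data_offset from bdev_raid_get_bdevs (0 in this run, which creates the array without a superblock), then releases the armed bdevperf job with perform_tests. A condensed sketch of those steps, with commands and jq paths mirroring the log and variable names chosen for illustration:

```bash
#!/usr/bin/env bash
# Read the raid bdev geometry the way the trace above does, then release the
# armed bdevperf job. Variable names are illustrative; commands come from the log.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock

raid_bdev_size=$("$spdk/scripts/rpc.py" -s "$sock" bdev_get_bdevs -b raid_bdev1 |
                 jq -r '.[].num_blocks')                      # 65536 in this run
data_offset=$("$spdk/scripts/rpc.py" -s "$sock" bdev_raid_get_bdevs all |
              jq -r '.[].base_bdevs_list[0].data_offset')     # 0: no superblock

echo "raid_bdev1: ${raid_bdev_size} blocks, data_offset ${data_offset}"

# Kick off the 60-second randrw workload that bdevperf was started with (-U -z).
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
```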
00:29:04.881 [2024-07-15 21:42:38.230588] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:04.881 [2024-07-15 21:42:38.236324] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:05.140 "name": "raid_bdev1", 00:29:05.140 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:05.140 "strip_size_kb": 0, 00:29:05.140 "state": "online", 00:29:05.140 "raid_level": "raid1", 00:29:05.140 "superblock": false, 00:29:05.140 "num_base_bdevs": 2, 00:29:05.140 "num_base_bdevs_discovered": 1, 00:29:05.140 "num_base_bdevs_operational": 1, 00:29:05.140 "base_bdevs_list": [ 00:29:05.140 { 00:29:05.140 "name": null, 00:29:05.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.140 "is_configured": false, 00:29:05.140 "data_offset": 0, 00:29:05.140 "data_size": 65536 00:29:05.140 }, 00:29:05.140 { 00:29:05.140 "name": "BaseBdev2", 00:29:05.140 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:05.140 "is_configured": true, 00:29:05.140 "data_offset": 0, 00:29:05.140 "data_size": 65536 00:29:05.140 } 00:29:05.140 ] 00:29:05.140 }' 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:05.140 21:42:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:05.708 21:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:05.967 [2024-07-15 21:42:39.251728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:05.967 21:42:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:05.967 [2024-07-15 21:42:39.307313] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:05.967 [2024-07-15 21:42:39.308930] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:06.226 [2024-07-15 21:42:39.417125] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:06.226 [2024-07-15 21:42:39.417749] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:06.226 [2024-07-15 21:42:39.539699] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:06.226 [2024-07-15 21:42:39.540102] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:06.795 [2024-07-15 21:42:39.892825] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:06.795 [2024-07-15 21:42:39.893206] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:06.795 [2024-07-15 21:42:40.115604] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:06.795 [2024-07-15 21:42:40.116165] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:07.054 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:07.054 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:07.054 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:07.054 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:07.054 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:07.054 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.054 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.054 [2024-07-15 21:42:40.338500] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:07.313 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:07.313 "name": "raid_bdev1", 00:29:07.313 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:07.313 "strip_size_kb": 0, 00:29:07.313 "state": "online", 00:29:07.313 "raid_level": "raid1", 00:29:07.313 "superblock": false, 00:29:07.313 "num_base_bdevs": 2, 00:29:07.313 "num_base_bdevs_discovered": 2, 00:29:07.313 "num_base_bdevs_operational": 2, 00:29:07.313 "process": { 00:29:07.313 "type": "rebuild", 00:29:07.313 "target": "spare", 00:29:07.313 "progress": { 00:29:07.313 "blocks": 16384, 00:29:07.313 "percent": 25 00:29:07.313 } 00:29:07.313 }, 00:29:07.313 "base_bdevs_list": [ 00:29:07.313 { 00:29:07.313 "name": "spare", 00:29:07.313 "uuid": "f7ceeaab-88b0-511a-9856-2a6408a4544a", 00:29:07.313 "is_configured": true, 00:29:07.313 "data_offset": 0, 00:29:07.313 "data_size": 65536 00:29:07.313 }, 00:29:07.313 { 00:29:07.313 "name": "BaseBdev2", 00:29:07.313 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:07.313 "is_configured": true, 00:29:07.313 "data_offset": 0, 00:29:07.313 "data_size": 65536 00:29:07.313 } 00:29:07.313 ] 00:29:07.313 }' 00:29:07.313 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
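The JSON dumps around this point expose rebuild progress under .process (type, target, progress.blocks, progress.percent); the harness samples it once per step and compares against expected values. The loop below is a hypothetical convenience poller built from the same RPC and jq paths, not part of the test scripts, that simply prints progress until the rebuild finishes or disappears:

```bash
#!/usr/bin/env bash
# Hypothetical progress poller built from the .process fields shown above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

while :; do
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<<"$info")
    if [[ $ptype != rebuild ]]; then
        echo "no rebuild in progress"
        break
    fi
    jq -r '"rebuilding onto \(.process.target): \(.process.progress.blocks) blocks (\(.process.progress.percent)%)"' <<<"$info"
    sleep 1
done
```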
00:29:07.313 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:07.313 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:07.313 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:07.313 21:42:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:07.313 [2024-07-15 21:42:40.680356] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:07.313 [2024-07-15 21:42:40.681023] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:07.572 [2024-07-15 21:42:40.802509] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:07.572 [2024-07-15 21:42:40.904313] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:07.572 [2024-07-15 21:42:40.910982] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:07.831 [2024-07-15 21:42:41.024567] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:07.831 [2024-07-15 21:42:41.034195] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:07.831 [2024-07-15 21:42:41.034311] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:07.831 [2024-07-15 21:42:41.034342] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:07.831 [2024-07-15 21:42:41.071626] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.831 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.090 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:08.090 "name": "raid_bdev1", 00:29:08.090 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 
00:29:08.090 "strip_size_kb": 0, 00:29:08.090 "state": "online", 00:29:08.090 "raid_level": "raid1", 00:29:08.090 "superblock": false, 00:29:08.090 "num_base_bdevs": 2, 00:29:08.090 "num_base_bdevs_discovered": 1, 00:29:08.091 "num_base_bdevs_operational": 1, 00:29:08.091 "base_bdevs_list": [ 00:29:08.091 { 00:29:08.091 "name": null, 00:29:08.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.091 "is_configured": false, 00:29:08.091 "data_offset": 0, 00:29:08.091 "data_size": 65536 00:29:08.091 }, 00:29:08.091 { 00:29:08.091 "name": "BaseBdev2", 00:29:08.091 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:08.091 "is_configured": true, 00:29:08.091 "data_offset": 0, 00:29:08.091 "data_size": 65536 00:29:08.091 } 00:29:08.091 ] 00:29:08.091 }' 00:29:08.091 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:08.091 21:42:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:08.659 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:08.659 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:08.659 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:08.659 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:08.659 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:08.659 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.659 21:42:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.918 21:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:08.918 "name": "raid_bdev1", 00:29:08.918 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:08.918 "strip_size_kb": 0, 00:29:08.918 "state": "online", 00:29:08.918 "raid_level": "raid1", 00:29:08.918 "superblock": false, 00:29:08.918 "num_base_bdevs": 2, 00:29:08.918 "num_base_bdevs_discovered": 1, 00:29:08.918 "num_base_bdevs_operational": 1, 00:29:08.918 "base_bdevs_list": [ 00:29:08.918 { 00:29:08.918 "name": null, 00:29:08.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.918 "is_configured": false, 00:29:08.919 "data_offset": 0, 00:29:08.919 "data_size": 65536 00:29:08.919 }, 00:29:08.919 { 00:29:08.919 "name": "BaseBdev2", 00:29:08.919 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:08.919 "is_configured": true, 00:29:08.919 "data_offset": 0, 00:29:08.919 "data_size": 65536 00:29:08.919 } 00:29:08.919 ] 00:29:08.919 }' 00:29:08.919 21:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:08.919 21:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:08.919 21:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:08.919 21:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:08.919 21:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:09.178 [2024-07-15 21:42:42.490065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:29:09.178 21:42:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:09.178 [2024-07-15 21:42:42.530126] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:09.178 [2024-07-15 21:42:42.532091] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:09.436 [2024-07-15 21:42:42.632739] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:09.436 [2024-07-15 21:42:42.633398] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:09.436 [2024-07-15 21:42:42.759649] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:09.436 [2024-07-15 21:42:42.760024] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:09.695 [2024-07-15 21:42:43.004131] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:10.265 [2024-07-15 21:42:43.470097] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:10.265 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:10.265 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:10.265 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:10.265 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:10.265 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:10.265 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.265 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.265 [2024-07-15 21:42:43.595085] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:10.265 [2024-07-15 21:42:43.595477] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:10.523 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:10.523 "name": "raid_bdev1", 00:29:10.523 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:10.523 "strip_size_kb": 0, 00:29:10.524 "state": "online", 00:29:10.524 "raid_level": "raid1", 00:29:10.524 "superblock": false, 00:29:10.524 "num_base_bdevs": 2, 00:29:10.524 "num_base_bdevs_discovered": 2, 00:29:10.524 "num_base_bdevs_operational": 2, 00:29:10.524 "process": { 00:29:10.524 "type": "rebuild", 00:29:10.524 "target": "spare", 00:29:10.524 "progress": { 00:29:10.524 "blocks": 16384, 00:29:10.524 "percent": 25 00:29:10.524 } 00:29:10.524 }, 00:29:10.524 "base_bdevs_list": [ 00:29:10.524 { 00:29:10.524 "name": "spare", 00:29:10.524 "uuid": "f7ceeaab-88b0-511a-9856-2a6408a4544a", 00:29:10.524 "is_configured": true, 00:29:10.524 "data_offset": 0, 00:29:10.524 "data_size": 65536 00:29:10.524 }, 00:29:10.524 { 00:29:10.524 "name": "BaseBdev2", 00:29:10.524 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:10.524 
"is_configured": true, 00:29:10.524 "data_offset": 0, 00:29:10.524 "data_size": 65536 00:29:10.524 } 00:29:10.524 ] 00:29:10.524 }' 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=816 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.524 21:42:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.783 [2024-07-15 21:42:43.947219] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:10.783 [2024-07-15 21:42:44.080463] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:10.783 21:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:10.783 "name": "raid_bdev1", 00:29:10.783 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:10.783 "strip_size_kb": 0, 00:29:10.783 "state": "online", 00:29:10.783 "raid_level": "raid1", 00:29:10.783 "superblock": false, 00:29:10.783 "num_base_bdevs": 2, 00:29:10.783 "num_base_bdevs_discovered": 2, 00:29:10.783 "num_base_bdevs_operational": 2, 00:29:10.783 "process": { 00:29:10.783 "type": "rebuild", 00:29:10.783 "target": "spare", 00:29:10.783 "progress": { 00:29:10.783 "blocks": 20480, 00:29:10.783 "percent": 31 00:29:10.783 } 00:29:10.783 }, 00:29:10.783 "base_bdevs_list": [ 00:29:10.783 { 00:29:10.783 "name": "spare", 00:29:10.783 "uuid": "f7ceeaab-88b0-511a-9856-2a6408a4544a", 00:29:10.783 "is_configured": true, 00:29:10.783 "data_offset": 0, 00:29:10.783 "data_size": 65536 00:29:10.783 }, 00:29:10.783 { 00:29:10.783 "name": "BaseBdev2", 00:29:10.783 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:10.783 "is_configured": true, 00:29:10.783 "data_offset": 0, 00:29:10.783 "data_size": 65536 
00:29:10.783 } 00:29:10.783 ] 00:29:10.783 }' 00:29:10.783 21:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:10.783 21:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:10.783 21:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:11.042 21:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:11.042 21:42:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:11.042 [2024-07-15 21:42:44.408887] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:11.042 [2024-07-15 21:42:44.409548] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:11.301 [2024-07-15 21:42:44.610982] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:11.562 [2024-07-15 21:42:44.862371] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.130 "name": "raid_bdev1", 00:29:12.130 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:12.130 "strip_size_kb": 0, 00:29:12.130 "state": "online", 00:29:12.130 "raid_level": "raid1", 00:29:12.130 "superblock": false, 00:29:12.130 "num_base_bdevs": 2, 00:29:12.130 "num_base_bdevs_discovered": 2, 00:29:12.130 "num_base_bdevs_operational": 2, 00:29:12.130 "process": { 00:29:12.130 "type": "rebuild", 00:29:12.130 "target": "spare", 00:29:12.130 "progress": { 00:29:12.130 "blocks": 40960, 00:29:12.130 "percent": 62 00:29:12.130 } 00:29:12.130 }, 00:29:12.130 "base_bdevs_list": [ 00:29:12.130 { 00:29:12.130 "name": "spare", 00:29:12.130 "uuid": "f7ceeaab-88b0-511a-9856-2a6408a4544a", 00:29:12.130 "is_configured": true, 00:29:12.130 "data_offset": 0, 00:29:12.130 "data_size": 65536 00:29:12.130 }, 00:29:12.130 { 00:29:12.130 "name": "BaseBdev2", 00:29:12.130 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:12.130 "is_configured": true, 00:29:12.130 "data_offset": 0, 00:29:12.130 "data_size": 65536 00:29:12.130 } 00:29:12.130 ] 00:29:12.130 }' 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.130 21:42:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:12.130 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:12.390 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:12.390 21:42:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:13.327 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:13.327 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:13.327 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:13.327 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:13.327 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:13.327 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:13.327 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:13.327 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.327 [2024-07-15 21:42:46.648824] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:13.587 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:13.587 "name": "raid_bdev1", 00:29:13.587 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:13.587 "strip_size_kb": 0, 00:29:13.587 "state": "online", 00:29:13.587 "raid_level": "raid1", 00:29:13.587 "superblock": false, 00:29:13.587 "num_base_bdevs": 2, 00:29:13.587 "num_base_bdevs_discovered": 2, 00:29:13.587 "num_base_bdevs_operational": 2, 00:29:13.587 "process": { 00:29:13.587 "type": "rebuild", 00:29:13.587 "target": "spare", 00:29:13.587 "progress": { 00:29:13.587 "blocks": 65536, 00:29:13.587 "percent": 100 00:29:13.587 } 00:29:13.587 }, 00:29:13.587 "base_bdevs_list": [ 00:29:13.587 { 00:29:13.587 "name": "spare", 00:29:13.587 "uuid": "f7ceeaab-88b0-511a-9856-2a6408a4544a", 00:29:13.587 "is_configured": true, 00:29:13.587 "data_offset": 0, 00:29:13.587 "data_size": 65536 00:29:13.587 }, 00:29:13.587 { 00:29:13.587 "name": "BaseBdev2", 00:29:13.587 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:13.587 "is_configured": true, 00:29:13.587 "data_offset": 0, 00:29:13.587 "data_size": 65536 00:29:13.587 } 00:29:13.587 ] 00:29:13.587 }' 00:29:13.587 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:13.587 [2024-07-15 21:42:46.754541] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:13.587 [2024-07-15 21:42:46.757557] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:13.587 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:13.587 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:13.587 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:13.587 21:42:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:14.527 21:42:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:14.527 21:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:14.527 21:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:14.527 21:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:14.527 21:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:14.527 21:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:14.527 21:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.527 21:42:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:14.786 "name": "raid_bdev1", 00:29:14.786 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:14.786 "strip_size_kb": 0, 00:29:14.786 "state": "online", 00:29:14.786 "raid_level": "raid1", 00:29:14.786 "superblock": false, 00:29:14.786 "num_base_bdevs": 2, 00:29:14.786 "num_base_bdevs_discovered": 2, 00:29:14.786 "num_base_bdevs_operational": 2, 00:29:14.786 "base_bdevs_list": [ 00:29:14.786 { 00:29:14.786 "name": "spare", 00:29:14.786 "uuid": "f7ceeaab-88b0-511a-9856-2a6408a4544a", 00:29:14.786 "is_configured": true, 00:29:14.786 "data_offset": 0, 00:29:14.786 "data_size": 65536 00:29:14.786 }, 00:29:14.786 { 00:29:14.786 "name": "BaseBdev2", 00:29:14.786 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:14.786 "is_configured": true, 00:29:14.786 "data_offset": 0, 00:29:14.786 "data_size": 65536 00:29:14.786 } 00:29:14.786 ] 00:29:14.786 }' 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.786 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.048 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:15.048 "name": "raid_bdev1", 00:29:15.048 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:15.048 "strip_size_kb": 0, 00:29:15.048 "state": "online", 
00:29:15.048 "raid_level": "raid1", 00:29:15.048 "superblock": false, 00:29:15.048 "num_base_bdevs": 2, 00:29:15.048 "num_base_bdevs_discovered": 2, 00:29:15.048 "num_base_bdevs_operational": 2, 00:29:15.048 "base_bdevs_list": [ 00:29:15.048 { 00:29:15.048 "name": "spare", 00:29:15.048 "uuid": "f7ceeaab-88b0-511a-9856-2a6408a4544a", 00:29:15.048 "is_configured": true, 00:29:15.048 "data_offset": 0, 00:29:15.048 "data_size": 65536 00:29:15.048 }, 00:29:15.048 { 00:29:15.048 "name": "BaseBdev2", 00:29:15.048 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:15.048 "is_configured": true, 00:29:15.048 "data_offset": 0, 00:29:15.048 "data_size": 65536 00:29:15.048 } 00:29:15.048 ] 00:29:15.048 }' 00:29:15.048 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:15.048 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:15.048 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:15.307 "name": "raid_bdev1", 00:29:15.307 "uuid": "c5f6e06f-166a-4030-9bd2-046c8aa61923", 00:29:15.307 "strip_size_kb": 0, 00:29:15.307 "state": "online", 00:29:15.307 "raid_level": "raid1", 00:29:15.307 "superblock": false, 00:29:15.307 "num_base_bdevs": 2, 00:29:15.307 "num_base_bdevs_discovered": 2, 00:29:15.307 "num_base_bdevs_operational": 2, 00:29:15.307 "base_bdevs_list": [ 00:29:15.307 { 00:29:15.307 "name": "spare", 00:29:15.307 "uuid": "f7ceeaab-88b0-511a-9856-2a6408a4544a", 00:29:15.307 "is_configured": true, 00:29:15.307 "data_offset": 0, 00:29:15.307 "data_size": 65536 00:29:15.307 }, 00:29:15.307 { 00:29:15.307 "name": "BaseBdev2", 00:29:15.307 "uuid": "707b9f22-7e31-5c37-881d-30eaa8c22719", 00:29:15.307 "is_configured": true, 00:29:15.307 "data_offset": 0, 00:29:15.307 "data_size": 65536 00:29:15.307 } 00:29:15.307 ] 00:29:15.307 }' 00:29:15.307 21:42:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:15.307 21:42:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:16.241 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:16.241 [2024-07-15 21:42:49.470262] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:16.241 [2024-07-15 21:42:49.470420] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:16.241 00:29:16.241 Latency(us) 00:29:16.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.241 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:16.241 raid_bdev1 : 11.45 107.71 323.12 0.00 0.00 13529.32 368.46 124547.02 00:29:16.241 =================================================================================================================== 00:29:16.241 Total : 107.71 323.12 0.00 0.00 13529.32 368.46 124547.02 00:29:16.241 [2024-07-15 21:42:49.591552] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:16.241 [2024-07-15 21:42:49.591737] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:16.241 [2024-07-15 21:42:49.591872] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:16.241 [2024-07-15 21:42:49.591917] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:29:16.241 0 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:16.500 21:42:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:16.759 /dev/nbd0 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:16.759 1+0 records in 00:29:16.759 1+0 records out 00:29:16.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529742 s, 7.7 MB/s 00:29:16.759 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:17.018 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:17.018 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:17.018 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:17.018 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:17.018 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:17.018 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:17.018 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:17.018 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:17.019 /dev/nbd1 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd1 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:17.019 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:17.019 1+0 records in 00:29:17.019 1+0 records out 00:29:17.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544126 s, 7.5 MB/s 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:17.278 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:17.536 21:42:50 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:17.536 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:17.794 21:42:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 146877 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 146877 ']' 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 146877 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146877 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146877' 00:29:17.794 killing process with pid 146877 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 146877 00:29:17.794 Received shutdown signal, test time was about 13.046912 seconds 00:29:17.794 00:29:17.794 
Latency(us) 00:29:17.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.794 =================================================================================================================== 00:29:17.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.794 21:42:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 146877 00:29:17.794 [2024-07-15 21:42:51.162156] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:18.051 [2024-07-15 21:42:51.407295] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:19.946 ************************************ 00:29:19.946 END TEST raid_rebuild_test_io 00:29:19.946 ************************************ 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:19.946 00:29:19.946 real 0m18.487s 00:29:19.946 user 0m27.518s 00:29:19.946 sys 0m2.117s 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.946 21:42:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:19.946 21:42:52 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:29:19.946 21:42:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:19.946 21:42:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.946 21:42:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:19.946 ************************************ 00:29:19.946 START TEST raid_rebuild_test_sb_io 00:29:19.946 ************************************ 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:19.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
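For orientation, the positional arguments of the raid_rebuild_test invocation traced above map directly onto the locals being set; a minimal sketch of that mapping with the values from this run (the inline comments are paraphrase, not the script itself):

#!/usr/bin/env bash
# Sketch of the parameter mapping for: raid_rebuild_test raid1 2 true true true
raid_level=raid1        # $1 -> local raid_level
num_base_bdevs=2        # $2 -> local num_base_bdevs
superblock=true         # $3 -> adds ' -s' to create_arg for bdev_raid_create
background_io=true      # $4 -> drive I/O through bdevperf while rebuilding
verify=true             # $5 -> compare data once the rebuild completes
# Per-slot base bdev names, generated with the same loop the trace expands above.
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
echo "${base_bdevs[@]}"   # -> BaseBdev1 BaseBdev2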
00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=147380 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 147380 /var/tmp/spdk-raid.sock 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 147380 ']' 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.946 21:42:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.946 [2024-07-15 21:42:52.942974] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:29:19.946 [2024-07-15 21:42:52.943242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147380 ] 00:29:19.946 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:19.946 Zero copy mechanism will not be used. 
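Stripped of the harness helpers, the bdevperf launch recorded above amounts to starting the app in wait-for-tests mode and polling its RPC socket; a sketch under those assumptions (the rpc_get_methods poll stands in for the waitforlisten helper used by the test):

#!/usr/bin/env bash
set -e
rootdir=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk-raid.sock

# Start bdevperf with the options from this run: 60 s randrw 50/50, 3 MiB I/Os,
# queue depth 2, -z to wait for a perform_tests RPC, bdev_raid debug logging.
"$rootdir/build/examples/bdevperf" -r "$rpc_sock" -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Wait until the UNIX-domain RPC socket answers before issuing any bdev RPCs.
until "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; do
    sleep 0.2
done
echo "bdevperf (pid $raid_pid) listening on $rpc_sock"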
00:29:19.946 [2024-07-15 21:42:53.095756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.203 [2024-07-15 21:42:53.362397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.459 [2024-07-15 21:42:53.594841] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:20.459 21:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.459 21:42:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:29:20.459 21:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:20.459 21:42:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:20.717 BaseBdev1_malloc 00:29:20.717 21:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:20.974 [2024-07-15 21:42:54.228838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:20.974 [2024-07-15 21:42:54.229059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:20.974 [2024-07-15 21:42:54.229114] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:20.974 [2024-07-15 21:42:54.229178] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:20.974 [2024-07-15 21:42:54.231636] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:20.974 [2024-07-15 21:42:54.231721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:20.974 BaseBdev1 00:29:20.974 21:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:20.974 21:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:21.232 BaseBdev2_malloc 00:29:21.232 21:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:21.490 [2024-07-15 21:42:54.671350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:21.490 [2024-07-15 21:42:54.671575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:21.490 [2024-07-15 21:42:54.671628] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:29:21.490 [2024-07-15 21:42:54.671672] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:21.490 [2024-07-15 21:42:54.673956] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:21.490 [2024-07-15 21:42:54.674038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:21.490 BaseBdev2 00:29:21.490 21:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:21.756 spare_malloc 00:29:21.756 21:42:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:21.756 spare_delay 00:29:21.756 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:22.037 [2024-07-15 21:42:55.279714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:22.037 [2024-07-15 21:42:55.279908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.037 [2024-07-15 21:42:55.279954] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:22.037 [2024-07-15 21:42:55.279995] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.037 [2024-07-15 21:42:55.282271] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.037 [2024-07-15 21:42:55.282357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:22.037 spare 00:29:22.037 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:22.297 [2024-07-15 21:42:55.459523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:22.297 [2024-07-15 21:42:55.461633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:22.297 [2024-07-15 21:42:55.461878] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:29:22.297 [2024-07-15 21:42:55.461916] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:22.297 [2024-07-15 21:42:55.462087] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:29:22.297 [2024-07-15 21:42:55.462479] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:29:22.297 [2024-07-15 21:42:55.462523] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:29:22.297 [2024-07-15 21:42:55.462698] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.297 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.557 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:22.557 "name": "raid_bdev1", 00:29:22.557 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:22.557 "strip_size_kb": 0, 00:29:22.557 "state": "online", 00:29:22.557 "raid_level": "raid1", 00:29:22.557 "superblock": true, 00:29:22.557 "num_base_bdevs": 2, 00:29:22.557 "num_base_bdevs_discovered": 2, 00:29:22.557 "num_base_bdevs_operational": 2, 00:29:22.557 "base_bdevs_list": [ 00:29:22.557 { 00:29:22.557 "name": "BaseBdev1", 00:29:22.557 "uuid": "9274aa4e-ff94-5a53-aeb4-f0beac4096e8", 00:29:22.557 "is_configured": true, 00:29:22.557 "data_offset": 2048, 00:29:22.557 "data_size": 63488 00:29:22.557 }, 00:29:22.557 { 00:29:22.557 "name": "BaseBdev2", 00:29:22.557 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:22.557 "is_configured": true, 00:29:22.557 "data_offset": 2048, 00:29:22.557 "data_size": 63488 00:29:22.557 } 00:29:22.557 ] 00:29:22.557 }' 00:29:22.557 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:22.557 21:42:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.125 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:23.125 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:23.125 [2024-07-15 21:42:56.438035] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:23.125 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:23.125 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.125 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:23.384 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:23.384 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:23.384 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:23.384 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:23.384 [2024-07-15 21:42:56.757266] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:23.643 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:23.643 Zero copy mechanism will not be used. 00:29:23.643 Running I/O for 60 seconds... 
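Condensed from the RPC calls traced above, the superblock variant builds its bdev stack, degrades the array, and starts background I/O roughly as follows (a sketch reusing the exact rpc.py commands and names from this run; the ordering of the last two steps is approximate):

#!/usr/bin/env bash
set -e
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Two base bdevs: a 32 MiB / 512 B-block malloc device wrapped in a passthru bdev each.
for i in 1 2; do
    $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# The spare additionally goes through a delay bdev so the later rebuild is observable.
$rpc bdev_malloc_create 32 512 -b spare_malloc
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare

# RAID1 with an on-disk superblock (-s); the superblock is why data_offset is 2048 here.
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

# Kick off background I/O in bdevperf and degrade the array by dropping a base bdev.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/spdk-raid.sock perform_tests &
$rpc bdev_raid_remove_base_bdev BaseBdev1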
00:29:23.643 [2024-07-15 21:42:56.829969] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:23.643 [2024-07-15 21:42:56.835525] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.643 21:42:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.902 21:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:23.902 "name": "raid_bdev1", 00:29:23.902 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:23.902 "strip_size_kb": 0, 00:29:23.902 "state": "online", 00:29:23.902 "raid_level": "raid1", 00:29:23.902 "superblock": true, 00:29:23.902 "num_base_bdevs": 2, 00:29:23.902 "num_base_bdevs_discovered": 1, 00:29:23.902 "num_base_bdevs_operational": 1, 00:29:23.902 "base_bdevs_list": [ 00:29:23.902 { 00:29:23.902 "name": null, 00:29:23.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.902 "is_configured": false, 00:29:23.902 "data_offset": 2048, 00:29:23.902 "data_size": 63488 00:29:23.902 }, 00:29:23.902 { 00:29:23.902 "name": "BaseBdev2", 00:29:23.902 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:23.902 "is_configured": true, 00:29:23.902 "data_offset": 2048, 00:29:23.902 "data_size": 63488 00:29:23.902 } 00:29:23.902 ] 00:29:23.902 }' 00:29:23.902 21:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:23.902 21:42:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:24.470 21:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:24.730 [2024-07-15 21:42:57.861366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:24.730 21:42:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:24.730 [2024-07-15 21:42:57.929136] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:24.730 [2024-07-15 21:42:57.931200] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:24.730 
[2024-07-15 21:42:58.056899] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:24.730 [2024-07-15 21:42:58.057593] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:25.039 [2024-07-15 21:42:58.179715] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:25.039 [2024-07-15 21:42:58.180085] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:25.298 [2024-07-15 21:42:58.508025] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:25.557 [2024-07-15 21:42:58.722872] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:25.557 [2024-07-15 21:42:58.723402] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:25.557 21:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:25.557 21:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:25.557 21:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:25.557 21:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:25.557 21:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:25.557 21:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.557 21:42:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.816 [2024-07-15 21:42:59.061944] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:25.816 [2024-07-15 21:42:59.062769] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:25.816 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:25.816 "name": "raid_bdev1", 00:29:25.816 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:25.816 "strip_size_kb": 0, 00:29:25.816 "state": "online", 00:29:25.816 "raid_level": "raid1", 00:29:25.816 "superblock": true, 00:29:25.816 "num_base_bdevs": 2, 00:29:25.816 "num_base_bdevs_discovered": 2, 00:29:25.816 "num_base_bdevs_operational": 2, 00:29:25.816 "process": { 00:29:25.816 "type": "rebuild", 00:29:25.816 "target": "spare", 00:29:25.816 "progress": { 00:29:25.816 "blocks": 14336, 00:29:25.816 "percent": 22 00:29:25.816 } 00:29:25.816 }, 00:29:25.816 "base_bdevs_list": [ 00:29:25.816 { 00:29:25.816 "name": "spare", 00:29:25.816 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:25.816 "is_configured": true, 00:29:25.816 "data_offset": 2048, 00:29:25.816 "data_size": 63488 00:29:25.816 }, 00:29:25.816 { 00:29:25.816 "name": "BaseBdev2", 00:29:25.816 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:25.816 "is_configured": true, 00:29:25.816 "data_offset": 2048, 00:29:25.816 "data_size": 63488 00:29:25.816 } 00:29:25.816 ] 00:29:25.816 }' 00:29:25.816 21:42:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:25.816 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.816 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.075 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.075 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:26.075 [2024-07-15 21:42:59.287676] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:26.075 [2024-07-15 21:42:59.414909] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:26.334 [2024-07-15 21:42:59.624758] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:26.334 [2024-07-15 21:42:59.628517] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:26.335 [2024-07-15 21:42:59.628622] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:26.335 [2024-07-15 21:42:59.628645] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:26.335 [2024-07-15 21:42:59.677237] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:26.594 "name": "raid_bdev1", 00:29:26.594 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:26.594 "strip_size_kb": 0, 00:29:26.594 "state": "online", 00:29:26.594 "raid_level": "raid1", 00:29:26.594 "superblock": true, 00:29:26.594 "num_base_bdevs": 2, 00:29:26.594 "num_base_bdevs_discovered": 1, 00:29:26.594 "num_base_bdevs_operational": 1, 00:29:26.594 "base_bdevs_list": [ 00:29:26.594 { 00:29:26.594 "name": null, 
00:29:26.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.594 "is_configured": false, 00:29:26.594 "data_offset": 2048, 00:29:26.594 "data_size": 63488 00:29:26.594 }, 00:29:26.594 { 00:29:26.594 "name": "BaseBdev2", 00:29:26.594 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:26.594 "is_configured": true, 00:29:26.594 "data_offset": 2048, 00:29:26.594 "data_size": 63488 00:29:26.594 } 00:29:26.594 ] 00:29:26.594 }' 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:26.594 21:42:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:27.531 "name": "raid_bdev1", 00:29:27.531 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:27.531 "strip_size_kb": 0, 00:29:27.531 "state": "online", 00:29:27.531 "raid_level": "raid1", 00:29:27.531 "superblock": true, 00:29:27.531 "num_base_bdevs": 2, 00:29:27.531 "num_base_bdevs_discovered": 1, 00:29:27.531 "num_base_bdevs_operational": 1, 00:29:27.531 "base_bdevs_list": [ 00:29:27.531 { 00:29:27.531 "name": null, 00:29:27.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.531 "is_configured": false, 00:29:27.531 "data_offset": 2048, 00:29:27.531 "data_size": 63488 00:29:27.531 }, 00:29:27.531 { 00:29:27.531 "name": "BaseBdev2", 00:29:27.531 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:27.531 "is_configured": true, 00:29:27.531 "data_offset": 2048, 00:29:27.531 "data_size": 63488 00:29:27.531 } 00:29:27.531 ] 00:29:27.531 }' 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:27.531 21:43:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:27.790 [2024-07-15 21:43:01.027077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:27.790 21:43:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:27.790 [2024-07-15 21:43:01.084062] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:27.790 [2024-07-15 21:43:01.086060] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:28.048 [2024-07-15 21:43:01.326429] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:28.048 [2024-07-15 21:43:01.326966] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:29.011 [2024-07-15 21:43:02.021324] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.011 [2024-07-15 21:43:02.131150] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:29.011 "name": "raid_bdev1", 00:29:29.011 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:29.011 "strip_size_kb": 0, 00:29:29.011 "state": "online", 00:29:29.011 "raid_level": "raid1", 00:29:29.011 "superblock": true, 00:29:29.011 "num_base_bdevs": 2, 00:29:29.011 "num_base_bdevs_discovered": 2, 00:29:29.011 "num_base_bdevs_operational": 2, 00:29:29.011 "process": { 00:29:29.011 "type": "rebuild", 00:29:29.011 "target": "spare", 00:29:29.011 "progress": { 00:29:29.011 "blocks": 18432, 00:29:29.011 "percent": 29 00:29:29.011 } 00:29:29.011 }, 00:29:29.011 "base_bdevs_list": [ 00:29:29.011 { 00:29:29.011 "name": "spare", 00:29:29.011 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:29.011 "is_configured": true, 00:29:29.011 "data_offset": 2048, 00:29:29.011 "data_size": 63488 00:29:29.011 }, 00:29:29.011 { 00:29:29.011 "name": "BaseBdev2", 00:29:29.011 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:29.011 "is_configured": true, 00:29:29.011 "data_offset": 2048, 00:29:29.011 "data_size": 63488 00:29:29.011 } 00:29:29.011 ] 00:29:29.011 }' 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.011 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:29.011 [2024-07-15 21:43:02.353699] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 
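The verify_raid_bdev_process checks interleaved through the trace reduce to one RPC call plus a couple of jq filters; run standalone against the same socket, the check looks roughly like this (field names taken from the JSON dumps above):

#!/usr/bin/env bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Fetch raid_bdev1 and pull out the rebuild-process fields the test asserts on.
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
process_type=$(echo "$info" | jq -r '.process.type // "none"')
process_target=$(echo "$info" | jq -r '.process.target // "none"')
blocks=$(echo "$info" | jq -r '.process.progress.blocks // 0')

# While rebuilding this prints e.g. "rebuild spare 22528"; after completion the
# .process object disappears and both string fields fall back to "none".
echo "$process_type $process_target $blocks"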
00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:29.270 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=835 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.270 [2024-07-15 21:43:02.456904] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:29.270 [2024-07-15 21:43:02.457438] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:29.270 "name": "raid_bdev1", 00:29:29.270 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:29.270 "strip_size_kb": 0, 00:29:29.270 "state": "online", 00:29:29.270 "raid_level": "raid1", 00:29:29.270 "superblock": true, 00:29:29.270 "num_base_bdevs": 2, 00:29:29.270 "num_base_bdevs_discovered": 2, 00:29:29.270 "num_base_bdevs_operational": 2, 00:29:29.270 "process": { 00:29:29.270 "type": "rebuild", 00:29:29.270 "target": "spare", 00:29:29.270 "progress": { 00:29:29.270 "blocks": 22528, 00:29:29.270 "percent": 35 00:29:29.270 } 00:29:29.270 }, 00:29:29.270 "base_bdevs_list": [ 00:29:29.270 { 00:29:29.270 "name": "spare", 00:29:29.270 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:29.270 "is_configured": true, 00:29:29.270 "data_offset": 2048, 00:29:29.270 "data_size": 63488 00:29:29.270 }, 00:29:29.270 { 00:29:29.270 "name": "BaseBdev2", 00:29:29.270 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:29.270 "is_configured": true, 00:29:29.270 "data_offset": 2048, 00:29:29.270 "data_size": 63488 00:29:29.270 } 00:29:29.270 ] 00:29:29.270 }' 00:29:29.270 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:29.529 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.529 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:29.529 21:43:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.529 21:43:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:29.786 [2024-07-15 21:43:02.907250] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:29.786 [2024-07-15 21:43:03.148980] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:30.044 [2024-07-15 21:43:03.357178] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:30.302 [2024-07-15 21:43:03.595703] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:30.560 [2024-07-15 21:43:03.712411] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:30.560 "name": "raid_bdev1", 00:29:30.560 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:30.560 "strip_size_kb": 0, 00:29:30.560 "state": "online", 00:29:30.560 "raid_level": "raid1", 00:29:30.560 "superblock": true, 00:29:30.560 "num_base_bdevs": 2, 00:29:30.560 "num_base_bdevs_discovered": 2, 00:29:30.560 "num_base_bdevs_operational": 2, 00:29:30.560 "process": { 00:29:30.560 "type": "rebuild", 00:29:30.560 "target": "spare", 00:29:30.560 "progress": { 00:29:30.560 "blocks": 43008, 00:29:30.560 "percent": 67 00:29:30.560 } 00:29:30.560 }, 00:29:30.560 "base_bdevs_list": [ 00:29:30.560 { 00:29:30.560 "name": "spare", 00:29:30.560 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:30.560 "is_configured": true, 00:29:30.560 "data_offset": 2048, 00:29:30.560 "data_size": 63488 00:29:30.560 }, 00:29:30.560 { 00:29:30.560 "name": "BaseBdev2", 00:29:30.560 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:30.560 "is_configured": true, 00:29:30.560 "data_offset": 2048, 00:29:30.560 "data_size": 63488 00:29:30.560 } 00:29:30.560 ] 00:29:30.560 }' 00:29:30.560 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:30.818 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:30.818 21:43:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:30.818 
21:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:30.818 21:43:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:31.386 [2024-07-15 21:43:04.695597] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:29:31.953 [2024-07-15 21:43:05.027205] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.953 [2024-07-15 21:43:05.127028] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:31.953 [2024-07-15 21:43:05.130016] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:31.953 "name": "raid_bdev1", 00:29:31.953 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:31.953 "strip_size_kb": 0, 00:29:31.953 "state": "online", 00:29:31.953 "raid_level": "raid1", 00:29:31.953 "superblock": true, 00:29:31.953 "num_base_bdevs": 2, 00:29:31.953 "num_base_bdevs_discovered": 2, 00:29:31.953 "num_base_bdevs_operational": 2, 00:29:31.953 "base_bdevs_list": [ 00:29:31.953 { 00:29:31.953 "name": "spare", 00:29:31.953 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:31.953 "is_configured": true, 00:29:31.953 "data_offset": 2048, 00:29:31.953 "data_size": 63488 00:29:31.953 }, 00:29:31.953 { 00:29:31.953 "name": "BaseBdev2", 00:29:31.953 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:31.953 "is_configured": true, 00:29:31.953 "data_offset": 2048, 00:29:31.953 "data_size": 63488 00:29:31.953 } 00:29:31.953 ] 00:29:31.953 }' 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:31.953 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.211 21:43:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:32.211 "name": "raid_bdev1", 00:29:32.211 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:32.211 "strip_size_kb": 0, 00:29:32.211 "state": "online", 00:29:32.211 "raid_level": "raid1", 00:29:32.211 "superblock": true, 00:29:32.211 "num_base_bdevs": 2, 00:29:32.211 "num_base_bdevs_discovered": 2, 00:29:32.211 "num_base_bdevs_operational": 2, 00:29:32.211 "base_bdevs_list": [ 00:29:32.211 { 00:29:32.211 "name": "spare", 00:29:32.211 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:32.211 "is_configured": true, 00:29:32.211 "data_offset": 2048, 00:29:32.211 "data_size": 63488 00:29:32.211 }, 00:29:32.211 { 00:29:32.211 "name": "BaseBdev2", 00:29:32.211 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:32.211 "is_configured": true, 00:29:32.211 "data_offset": 2048, 00:29:32.211 "data_size": 63488 00:29:32.211 } 00:29:32.211 ] 00:29:32.211 }' 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:32.211 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.470 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.471 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:32.471 "name": "raid_bdev1", 00:29:32.471 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:32.471 "strip_size_kb": 0, 00:29:32.471 "state": "online", 00:29:32.471 "raid_level": "raid1", 00:29:32.471 "superblock": true, 00:29:32.471 "num_base_bdevs": 2, 00:29:32.471 "num_base_bdevs_discovered": 2, 00:29:32.471 "num_base_bdevs_operational": 2, 00:29:32.471 "base_bdevs_list": [ 00:29:32.471 { 00:29:32.471 "name": "spare", 00:29:32.471 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:32.471 "is_configured": true, 00:29:32.471 "data_offset": 2048, 00:29:32.471 "data_size": 63488 00:29:32.471 }, 00:29:32.471 { 00:29:32.471 "name": "BaseBdev2", 00:29:32.471 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:32.471 "is_configured": true, 00:29:32.471 "data_offset": 2048, 00:29:32.471 "data_size": 63488 00:29:32.471 } 00:29:32.471 ] 00:29:32.471 }' 00:29:32.471 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:32.471 21:43:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:33.407 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:33.407 [2024-07-15 21:43:06.600791] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:33.407 [2024-07-15 21:43:06.600934] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:33.407 00:29:33.407 Latency(us) 00:29:33.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.407 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:33.407 raid_bdev1 : 9.91 118.24 354.72 0.00 0.00 11865.87 368.46 111268.11 00:29:33.407 =================================================================================================================== 00:29:33.407 Total : 118.24 354.72 0.00 0.00 11865.87 368.46 111268.11 00:29:33.407 [2024-07-15 21:43:06.674631] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:33.407 [2024-07-15 21:43:06.674727] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:33.407 0 00:29:33.407 [2024-07-15 21:43:06.674855] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:33.407 [2024-07-15 21:43:06.674872] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:29:33.407 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.407 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=($2) 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:33.666 21:43:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:33.925 /dev/nbd0 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:33.925 1+0 records in 00:29:33.925 1+0 records out 00:29:33.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000952757 s, 4.3 MB/s 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:33.925 21:43:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:33.925 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:34.184 /dev/nbd1 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:34.184 1+0 records in 00:29:34.184 1+0 records out 00:29:34.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420954 s, 9.7 MB/s 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:34.184 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:34.443 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:34.702 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:34.702 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:34.702 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:34.702 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:34.702 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.702 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:34.702 21:43:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:34.702 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:34.702 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.702 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:34.702 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:34.702 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:34.702 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:34.702 
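[Editor's note] The nbd_start_disks / cmp / nbd_stop_disks sequence above is the data-integrity check: the rebuilt spare and the surviving BaseBdev2 are both exported through the kernel NBD driver and byte-compared past the first 1 MiB, which matches the 2048-block (512 B) data_offset reported for the base bdevs, so the superblock region is skipped. A condensed sketch with the nbd_common.sh helpers inlined, assuming the nbd module is loaded and /dev/nbd0-1 are free as in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Export both members of the RAID1 pair as kernel block devices.
"$rpc" -s "$sock" nbd_start_disk spare     /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk BaseBdev2 /dev/nbd1

# Skip the 1 MiB metadata/superblock area on both devices and require the
# remaining data to be byte-identical after the rebuild; cmp exits non-zero
# (failing the test) on the first mismatch.
cmp -i 1048576 /dev/nbd0 /dev/nbd1

"$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0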
21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:34.960 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:35.218 [2024-07-15 21:43:08.450804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:35.218 [2024-07-15 21:43:08.450991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.218 [2024-07-15 21:43:08.451085] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:35.218 [2024-07-15 21:43:08.451158] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.218 [2024-07-15 21:43:08.453565] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.218 [2024-07-15 21:43:08.453643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:35.218 [2024-07-15 21:43:08.453824] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:35.218 [2024-07-15 21:43:08.453921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:35.218 [2024-07-15 21:43:08.454122] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:35.218 spare 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.219 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.219 [2024-07-15 21:43:08.554086] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:29:35.219 [2024-07-15 21:43:08.554237] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:35.219 [2024-07-15 21:43:08.554516] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:29:35.219 [2024-07-15 21:43:08.555045] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:29:35.219 [2024-07-15 21:43:08.555101] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x61600000ab80 00:29:35.219 [2024-07-15 21:43:08.555347] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.477 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:35.477 "name": "raid_bdev1", 00:29:35.477 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:35.477 "strip_size_kb": 0, 00:29:35.477 "state": "online", 00:29:35.477 "raid_level": "raid1", 00:29:35.477 "superblock": true, 00:29:35.477 "num_base_bdevs": 2, 00:29:35.477 "num_base_bdevs_discovered": 2, 00:29:35.477 "num_base_bdevs_operational": 2, 00:29:35.477 "base_bdevs_list": [ 00:29:35.477 { 00:29:35.477 "name": "spare", 00:29:35.477 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:35.477 "is_configured": true, 00:29:35.477 "data_offset": 2048, 00:29:35.477 "data_size": 63488 00:29:35.477 }, 00:29:35.477 { 00:29:35.477 "name": "BaseBdev2", 00:29:35.477 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:35.477 "is_configured": true, 00:29:35.477 "data_offset": 2048, 00:29:35.477 "data_size": 63488 00:29:35.477 } 00:29:35.477 ] 00:29:35.477 }' 00:29:35.477 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:35.477 21:43:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:36.045 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:36.045 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:36.045 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:36.045 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:36.045 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:36.045 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.045 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.303 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.303 "name": "raid_bdev1", 00:29:36.303 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:36.303 "strip_size_kb": 0, 00:29:36.303 "state": "online", 00:29:36.303 "raid_level": "raid1", 00:29:36.303 "superblock": true, 00:29:36.303 "num_base_bdevs": 2, 00:29:36.303 "num_base_bdevs_discovered": 2, 00:29:36.303 "num_base_bdevs_operational": 2, 00:29:36.303 "base_bdevs_list": [ 00:29:36.303 { 00:29:36.303 "name": "spare", 00:29:36.303 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:36.303 "is_configured": true, 00:29:36.303 "data_offset": 2048, 00:29:36.303 "data_size": 63488 00:29:36.303 }, 00:29:36.303 { 00:29:36.303 "name": "BaseBdev2", 00:29:36.303 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:36.303 "is_configured": true, 00:29:36.303 "data_offset": 2048, 00:29:36.303 "data_size": 63488 00:29:36.303 } 00:29:36.303 ] 00:29:36.303 }' 00:29:36.303 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:36.303 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:36.303 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
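[Editor's note] From here the test repeats the degrade-and-rebuild cycle through the RPC interface directly: bdev_raid_remove_base_bdev drops the spare, bdev_raid_add_base_bdev (or re-creating the passthru so examine finds the superblock) puts it back, and the script then polls bdev_raid_get_bdevs until the rebuild process disappears. A compact sketch of that wait, assuming the same socket as above; wait_rebuild_done and its default deadline are illustrative, the real script uses its SECONDS-based loop at bdev_raid.sh@705-@710.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

wait_rebuild_done() {
	local name=$1 deadline=$((SECONDS + ${2:-60}))
	while (( SECONDS < deadline )); do
		# ".process" disappears from bdev_raid_get_bdevs output once the
		# rebuild has finished, so "none" means the process is gone.
		[[ $("$rpc" -s "$sock" bdev_raid_get_bdevs all |
			jq -r ".[] | select(.name == \"$name\") | .process.type // \"none\"") == none ]] && return 0
		sleep 1
	done
	return 1   # rebuild did not finish before the deadline
}

"$rpc" -s "$sock" bdev_raid_remove_base_bdev spare         # degrade the array
"$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare # trigger the rebuild
wait_rebuild_done raid_bdev1

The real test first verifies that .process.type has flipped to "rebuild" (bdev_raid.sh@189) before waiting for it to drop back to "none", which avoids the short window right after the re-add where no process is reported yet.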
00:29:36.561 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:36.561 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.561 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:36.561 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:36.561 21:43:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:36.820 [2024-07-15 21:43:10.117090] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.820 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.079 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:37.079 "name": "raid_bdev1", 00:29:37.079 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:37.079 "strip_size_kb": 0, 00:29:37.079 "state": "online", 00:29:37.079 "raid_level": "raid1", 00:29:37.079 "superblock": true, 00:29:37.079 "num_base_bdevs": 2, 00:29:37.079 "num_base_bdevs_discovered": 1, 00:29:37.079 "num_base_bdevs_operational": 1, 00:29:37.079 "base_bdevs_list": [ 00:29:37.079 { 00:29:37.079 "name": null, 00:29:37.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.079 "is_configured": false, 00:29:37.079 "data_offset": 2048, 00:29:37.079 "data_size": 63488 00:29:37.079 }, 00:29:37.079 { 00:29:37.079 "name": "BaseBdev2", 00:29:37.079 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:37.079 "is_configured": true, 00:29:37.079 "data_offset": 2048, 00:29:37.079 "data_size": 63488 00:29:37.079 } 00:29:37.079 ] 00:29:37.079 }' 00:29:37.079 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:37.079 21:43:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.014 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:38.014 [2024-07-15 21:43:11.231431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.014 [2024-07-15 21:43:11.231889] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:38.014 [2024-07-15 21:43:11.231965] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:38.014 [2024-07-15 21:43:11.232082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.014 [2024-07-15 21:43:11.251811] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d2f0 00:29:38.014 [2024-07-15 21:43:11.254204] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:38.014 21:43:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:38.954 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:38.954 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:38.954 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:38.954 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:38.954 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:38.954 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.954 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.212 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.212 "name": "raid_bdev1", 00:29:39.212 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:39.212 "strip_size_kb": 0, 00:29:39.212 "state": "online", 00:29:39.212 "raid_level": "raid1", 00:29:39.212 "superblock": true, 00:29:39.212 "num_base_bdevs": 2, 00:29:39.212 "num_base_bdevs_discovered": 2, 00:29:39.212 "num_base_bdevs_operational": 2, 00:29:39.212 "process": { 00:29:39.212 "type": "rebuild", 00:29:39.212 "target": "spare", 00:29:39.212 "progress": { 00:29:39.212 "blocks": 24576, 00:29:39.212 "percent": 38 00:29:39.212 } 00:29:39.212 }, 00:29:39.212 "base_bdevs_list": [ 00:29:39.212 { 00:29:39.212 "name": "spare", 00:29:39.212 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:39.212 "is_configured": true, 00:29:39.212 "data_offset": 2048, 00:29:39.212 "data_size": 63488 00:29:39.212 }, 00:29:39.212 { 00:29:39.212 "name": "BaseBdev2", 00:29:39.212 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:39.212 "is_configured": true, 00:29:39.212 "data_offset": 2048, 00:29:39.212 "data_size": 63488 00:29:39.212 } 00:29:39.212 ] 00:29:39.212 }' 00:29:39.212 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:39.212 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:39.212 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:39.469 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare 
== \s\p\a\r\e ]] 00:29:39.469 21:43:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:39.727 [2024-07-15 21:43:12.865802] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:39.727 [2024-07-15 21:43:12.965717] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:39.727 [2024-07-15 21:43:12.965965] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.727 [2024-07-15 21:43:12.966005] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:39.727 [2024-07-15 21:43:12.966057] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.727 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.986 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:39.986 "name": "raid_bdev1", 00:29:39.986 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:39.986 "strip_size_kb": 0, 00:29:39.986 "state": "online", 00:29:39.986 "raid_level": "raid1", 00:29:39.986 "superblock": true, 00:29:39.986 "num_base_bdevs": 2, 00:29:39.986 "num_base_bdevs_discovered": 1, 00:29:39.986 "num_base_bdevs_operational": 1, 00:29:39.986 "base_bdevs_list": [ 00:29:39.986 { 00:29:39.986 "name": null, 00:29:39.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:39.986 "is_configured": false, 00:29:39.986 "data_offset": 2048, 00:29:39.986 "data_size": 63488 00:29:39.986 }, 00:29:39.986 { 00:29:39.986 "name": "BaseBdev2", 00:29:39.986 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:39.986 "is_configured": true, 00:29:39.986 "data_offset": 2048, 00:29:39.986 "data_size": 63488 00:29:39.986 } 00:29:39.986 ] 00:29:39.986 }' 00:29:39.986 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:39.986 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:40.553 21:43:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:40.812 [2024-07-15 21:43:14.048005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:40.812 [2024-07-15 21:43:14.048176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.812 [2024-07-15 21:43:14.048228] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:40.812 [2024-07-15 21:43:14.048271] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.812 [2024-07-15 21:43:14.048833] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.812 [2024-07-15 21:43:14.048905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:40.812 [2024-07-15 21:43:14.049051] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:40.812 [2024-07-15 21:43:14.049083] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:40.812 [2024-07-15 21:43:14.049106] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:40.812 [2024-07-15 21:43:14.049154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:40.812 [2024-07-15 21:43:14.065506] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d630 00:29:40.812 spare 00:29:40.812 [2024-07-15 21:43:14.067611] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:40.812 21:43:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:41.749 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:41.749 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.749 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:41.749 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:41.749 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.749 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.749 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.012 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.012 "name": "raid_bdev1", 00:29:42.012 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:42.012 "strip_size_kb": 0, 00:29:42.012 "state": "online", 00:29:42.012 "raid_level": "raid1", 00:29:42.012 "superblock": true, 00:29:42.012 "num_base_bdevs": 2, 00:29:42.012 "num_base_bdevs_discovered": 2, 00:29:42.012 "num_base_bdevs_operational": 2, 00:29:42.012 "process": { 00:29:42.012 "type": "rebuild", 00:29:42.012 "target": "spare", 00:29:42.012 "progress": { 00:29:42.012 "blocks": 22528, 00:29:42.012 "percent": 35 00:29:42.012 } 00:29:42.012 }, 00:29:42.012 "base_bdevs_list": [ 00:29:42.012 { 00:29:42.012 "name": "spare", 00:29:42.012 "uuid": "0f834c66-1f86-581c-9a29-32108a511134", 00:29:42.012 "is_configured": true, 00:29:42.012 "data_offset": 
2048, 00:29:42.012 "data_size": 63488 00:29:42.012 }, 00:29:42.012 { 00:29:42.012 "name": "BaseBdev2", 00:29:42.012 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:42.012 "is_configured": true, 00:29:42.012 "data_offset": 2048, 00:29:42.012 "data_size": 63488 00:29:42.012 } 00:29:42.012 ] 00:29:42.012 }' 00:29:42.012 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:42.012 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:42.012 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:42.282 [2024-07-15 21:43:15.583807] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:42.282 [2024-07-15 21:43:15.586215] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:42.282 [2024-07-15 21:43:15.586316] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.282 [2024-07-15 21:43:15.586346] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:42.282 [2024-07-15 21:43:15.586370] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.282 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.543 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:42.543 "name": "raid_bdev1", 00:29:42.543 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:42.543 "strip_size_kb": 0, 00:29:42.543 "state": "online", 00:29:42.543 "raid_level": "raid1", 00:29:42.543 "superblock": true, 00:29:42.543 "num_base_bdevs": 2, 00:29:42.543 "num_base_bdevs_discovered": 1, 00:29:42.543 "num_base_bdevs_operational": 1, 00:29:42.543 "base_bdevs_list": [ 00:29:42.543 { 
00:29:42.543 "name": null, 00:29:42.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:42.543 "is_configured": false, 00:29:42.543 "data_offset": 2048, 00:29:42.543 "data_size": 63488 00:29:42.543 }, 00:29:42.543 { 00:29:42.543 "name": "BaseBdev2", 00:29:42.543 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:42.543 "is_configured": true, 00:29:42.543 "data_offset": 2048, 00:29:42.543 "data_size": 63488 00:29:42.543 } 00:29:42.543 ] 00:29:42.543 }' 00:29:42.543 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:42.543 21:43:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.481 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:43.481 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.481 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:43.481 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:43.481 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.481 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.481 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.481 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:43.481 "name": "raid_bdev1", 00:29:43.481 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:43.481 "strip_size_kb": 0, 00:29:43.481 "state": "online", 00:29:43.481 "raid_level": "raid1", 00:29:43.481 "superblock": true, 00:29:43.481 "num_base_bdevs": 2, 00:29:43.481 "num_base_bdevs_discovered": 1, 00:29:43.481 "num_base_bdevs_operational": 1, 00:29:43.481 "base_bdevs_list": [ 00:29:43.482 { 00:29:43.482 "name": null, 00:29:43.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.482 "is_configured": false, 00:29:43.482 "data_offset": 2048, 00:29:43.482 "data_size": 63488 00:29:43.482 }, 00:29:43.482 { 00:29:43.482 "name": "BaseBdev2", 00:29:43.482 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:43.482 "is_configured": true, 00:29:43.482 "data_offset": 2048, 00:29:43.482 "data_size": 63488 00:29:43.482 } 00:29:43.482 ] 00:29:43.482 }' 00:29:43.482 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:43.482 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:43.482 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.482 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:43.482 21:43:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:43.741 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:44.001 [2024-07-15 21:43:17.203301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:44.001 [2024-07-15 21:43:17.203504] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:44.001 [2024-07-15 21:43:17.203563] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:44.001 [2024-07-15 21:43:17.203600] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:44.001 [2024-07-15 21:43:17.204186] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:44.001 [2024-07-15 21:43:17.204261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:44.001 [2024-07-15 21:43:17.204454] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:44.001 [2024-07-15 21:43:17.204507] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:44.001 [2024-07-15 21:43:17.204531] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:44.001 BaseBdev1 00:29:44.001 21:43:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.022 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.293 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:45.293 "name": "raid_bdev1", 00:29:45.293 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:45.293 "strip_size_kb": 0, 00:29:45.293 "state": "online", 00:29:45.293 "raid_level": "raid1", 00:29:45.293 "superblock": true, 00:29:45.293 "num_base_bdevs": 2, 00:29:45.293 "num_base_bdevs_discovered": 1, 00:29:45.293 "num_base_bdevs_operational": 1, 00:29:45.293 "base_bdevs_list": [ 00:29:45.293 { 00:29:45.293 "name": null, 00:29:45.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.293 "is_configured": false, 00:29:45.293 "data_offset": 2048, 00:29:45.293 "data_size": 63488 00:29:45.293 }, 00:29:45.293 { 00:29:45.293 "name": "BaseBdev2", 00:29:45.293 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:45.293 "is_configured": true, 00:29:45.293 "data_offset": 2048, 00:29:45.293 "data_size": 63488 00:29:45.293 } 00:29:45.293 ] 
00:29:45.293 }' 00:29:45.293 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:45.293 21:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:45.862 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:45.862 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.862 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:45.862 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:45.862 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.862 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.862 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.121 "name": "raid_bdev1", 00:29:46.121 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:46.121 "strip_size_kb": 0, 00:29:46.121 "state": "online", 00:29:46.121 "raid_level": "raid1", 00:29:46.121 "superblock": true, 00:29:46.121 "num_base_bdevs": 2, 00:29:46.121 "num_base_bdevs_discovered": 1, 00:29:46.121 "num_base_bdevs_operational": 1, 00:29:46.121 "base_bdevs_list": [ 00:29:46.121 { 00:29:46.121 "name": null, 00:29:46.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.121 "is_configured": false, 00:29:46.121 "data_offset": 2048, 00:29:46.121 "data_size": 63488 00:29:46.121 }, 00:29:46.121 { 00:29:46.121 "name": "BaseBdev2", 00:29:46.121 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:46.121 "is_configured": true, 00:29:46.121 "data_offset": 2048, 00:29:46.121 "data_size": 63488 00:29:46.121 } 00:29:46.121 ] 00:29:46.121 }' 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:46.121 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:46.380 [2024-07-15 21:43:19.593371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:46.380 [2024-07-15 21:43:19.593755] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:46.380 [2024-07-15 21:43:19.593804] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:46.380 request: 00:29:46.380 { 00:29:46.380 "base_bdev": "BaseBdev1", 00:29:46.381 "raid_bdev": "raid_bdev1", 00:29:46.381 "method": "bdev_raid_add_base_bdev", 00:29:46.381 "req_id": 1 00:29:46.381 } 00:29:46.381 Got JSON-RPC error response 00:29:46.381 response: 00:29:46.381 { 00:29:46.381 "code": -22, 00:29:46.381 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:46.381 } 00:29:46.381 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:29:46.381 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:46.381 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:46.381 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:46.381 21:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:47.318 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.318 21:43:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.576 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:47.576 "name": "raid_bdev1", 00:29:47.576 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:47.576 "strip_size_kb": 0, 00:29:47.576 "state": "online", 00:29:47.576 "raid_level": "raid1", 00:29:47.576 "superblock": true, 00:29:47.576 "num_base_bdevs": 2, 00:29:47.576 "num_base_bdevs_discovered": 1, 00:29:47.576 "num_base_bdevs_operational": 1, 00:29:47.576 "base_bdevs_list": [ 00:29:47.576 { 00:29:47.576 "name": null, 00:29:47.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.576 "is_configured": false, 00:29:47.576 "data_offset": 2048, 00:29:47.576 "data_size": 63488 00:29:47.576 }, 00:29:47.576 { 00:29:47.576 "name": "BaseBdev2", 00:29:47.576 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:47.576 "is_configured": true, 00:29:47.576 "data_offset": 2048, 00:29:47.576 "data_size": 63488 00:29:47.576 } 00:29:47.576 ] 00:29:47.576 }' 00:29:47.576 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:47.576 21:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:48.143 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:48.143 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:48.143 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:48.143 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:48.143 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:48.143 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.143 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.402 "name": "raid_bdev1", 00:29:48.402 "uuid": "040d6414-0ad2-4882-a04c-0f805ba55e6a", 00:29:48.402 "strip_size_kb": 0, 00:29:48.402 "state": "online", 00:29:48.402 "raid_level": "raid1", 00:29:48.402 "superblock": true, 00:29:48.402 "num_base_bdevs": 2, 00:29:48.402 "num_base_bdevs_discovered": 1, 00:29:48.402 "num_base_bdevs_operational": 1, 00:29:48.402 "base_bdevs_list": [ 00:29:48.402 { 00:29:48.402 "name": null, 00:29:48.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.402 "is_configured": false, 00:29:48.402 "data_offset": 2048, 00:29:48.402 "data_size": 63488 00:29:48.402 }, 00:29:48.402 { 00:29:48.402 "name": "BaseBdev2", 00:29:48.402 "uuid": "317b8cd1-2a2c-515c-b452-92b849982d8b", 00:29:48.402 "is_configured": true, 00:29:48.402 "data_offset": 2048, 00:29:48.402 "data_size": 63488 00:29:48.402 } 00:29:48.402 ] 00:29:48.402 }' 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 147380 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 147380 ']' 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 147380 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:48.402 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147380 00:29:48.661 killing process with pid 147380 00:29:48.661 Received shutdown signal, test time was about 25.070774 seconds 00:29:48.661 00:29:48.661 Latency(us) 00:29:48.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.661 =================================================================================================================== 00:29:48.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.661 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:48.661 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:48.661 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147380' 00:29:48.661 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 147380 00:29:48.661 21:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 147380 00:29:48.661 [2024-07-15 21:43:21.782348] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:48.661 [2024-07-15 21:43:21.782540] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:48.661 [2024-07-15 21:43:21.782676] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:48.661 [2024-07-15 21:43:21.782707] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:29:48.661 [2024-07-15 21:43:22.028039] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:50.562 ************************************ 00:29:50.563 END TEST raid_rebuild_test_sb_io 00:29:50.563 ************************************ 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:50.563 00:29:50.563 real 0m30.604s 00:29:50.563 user 0m48.437s 00:29:50.563 sys 0m3.285s 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 21:43:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:50.563 21:43:23 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:29:50.563 21:43:23 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:50.563 21:43:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:50.563 21:43:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.563 21:43:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 ************************************ 00:29:50.563 START TEST raid_rebuild_test 00:29:50.563 
************************************ 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=148278 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 148278 /var/tmp/spdk-raid.sock 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 
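The xtrace above shows how raid_rebuild_test brings up its fixture: it launches bdevperf on a private RPC socket, waits for it to listen, builds malloc-backed passthru bdevs, and only then assembles the raid1 bdev that the later JSON dumps describe. A minimal bash sketch of that sequence, condensed from the commands traced in this log (the socket-polling loop is a simplification and an assumption; the real script uses the waitforlisten helper shown in the trace):

    #!/usr/bin/env bash
    # Sketch only: condensed from the xtrace in this log, not the canonical bdev_raid.sh.
    rpc_sock=/var/tmp/spdk-raid.sock
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" "$@"; }

    # Start bdevperf with the workload knobs seen above (60 s randrw, 3M IO size, queue depth 2; -z waits for RPC).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Assumption: poll for the UNIX socket instead of calling waitforlisten as the real test does.
    until [ -S "$rpc_sock" ]; do sleep 0.1; done

    # Four malloc bdevs wrapped in passthru bdevs, then assembled into the raid1 bdev under test.
    for i in 1 2 3 4; do
        rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

    # The verify_raid_bdev_state checks later in the log boil down to this query filtered with jq:
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'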
00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 148278 ']' 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:50.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:50.563 21:43:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 [2024-07-15 21:43:23.628325] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:29:50.563 [2024-07-15 21:43:23.629045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148278 ] 00:29:50.563 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:50.563 Zero copy mechanism will not be used. 00:29:50.563 [2024-07-15 21:43:23.784655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.822 [2024-07-15 21:43:23.989447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.822 [2024-07-15 21:43:24.184233] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:51.391 21:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:51.391 21:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:29:51.391 21:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:51.391 21:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:51.391 BaseBdev1_malloc 00:29:51.391 21:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:51.650 [2024-07-15 21:43:24.899689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:51.650 [2024-07-15 21:43:24.899942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.650 [2024-07-15 21:43:24.900016] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:51.650 [2024-07-15 21:43:24.900059] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.651 [2024-07-15 21:43:24.902555] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.651 [2024-07-15 21:43:24.902649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:51.651 BaseBdev1 00:29:51.651 21:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:51.651 21:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:51.910 
BaseBdev2_malloc 00:29:51.910 21:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:52.170 [2024-07-15 21:43:25.320270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:52.170 [2024-07-15 21:43:25.320502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.170 [2024-07-15 21:43:25.320563] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:29:52.170 [2024-07-15 21:43:25.320617] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.170 [2024-07-15 21:43:25.323018] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.170 [2024-07-15 21:43:25.323102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:52.170 BaseBdev2 00:29:52.170 21:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:52.170 21:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:52.170 BaseBdev3_malloc 00:29:52.170 21:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:52.431 [2024-07-15 21:43:25.689130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:52.431 [2024-07-15 21:43:25.689342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.431 [2024-07-15 21:43:25.689397] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:29:52.431 [2024-07-15 21:43:25.689441] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.431 [2024-07-15 21:43:25.691803] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.431 [2024-07-15 21:43:25.691892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:52.431 BaseBdev3 00:29:52.431 21:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:52.431 21:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:52.691 BaseBdev4_malloc 00:29:52.691 21:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:52.951 [2024-07-15 21:43:26.100575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:52.951 [2024-07-15 21:43:26.100796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.951 [2024-07-15 21:43:26.100854] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:52.951 [2024-07-15 21:43:26.100899] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.952 [2024-07-15 21:43:26.103347] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.952 [2024-07-15 21:43:26.103432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev4 00:29:52.952 BaseBdev4 00:29:52.952 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:52.952 spare_malloc 00:29:53.211 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:53.211 spare_delay 00:29:53.211 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:53.470 [2024-07-15 21:43:26.718075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:53.470 [2024-07-15 21:43:26.718308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:53.470 [2024-07-15 21:43:26.718364] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:53.470 [2024-07-15 21:43:26.718437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:53.470 [2024-07-15 21:43:26.721192] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:53.470 [2024-07-15 21:43:26.721296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:53.470 spare 00:29:53.470 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:53.729 [2024-07-15 21:43:26.917928] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:53.729 [2024-07-15 21:43:26.920144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:53.729 [2024-07-15 21:43:26.920285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:53.729 [2024-07-15 21:43:26.920358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:53.729 [2024-07-15 21:43:26.920500] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:29:53.729 [2024-07-15 21:43:26.920539] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:53.729 [2024-07-15 21:43:26.920749] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:53.729 [2024-07-15 21:43:26.921161] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:29:53.729 [2024-07-15 21:43:26.921205] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:29:53.729 [2024-07-15 21:43:26.921459] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.729 21:43:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.729 21:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.001 21:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:54.001 "name": "raid_bdev1", 00:29:54.001 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:29:54.001 "strip_size_kb": 0, 00:29:54.001 "state": "online", 00:29:54.001 "raid_level": "raid1", 00:29:54.001 "superblock": false, 00:29:54.001 "num_base_bdevs": 4, 00:29:54.001 "num_base_bdevs_discovered": 4, 00:29:54.001 "num_base_bdevs_operational": 4, 00:29:54.001 "base_bdevs_list": [ 00:29:54.001 { 00:29:54.001 "name": "BaseBdev1", 00:29:54.001 "uuid": "038ac67b-9b3b-54d9-9f62-3bf9b523ba18", 00:29:54.001 "is_configured": true, 00:29:54.001 "data_offset": 0, 00:29:54.001 "data_size": 65536 00:29:54.001 }, 00:29:54.001 { 00:29:54.001 "name": "BaseBdev2", 00:29:54.001 "uuid": "5308c70e-7116-535e-b368-b3b9d1df20e4", 00:29:54.001 "is_configured": true, 00:29:54.001 "data_offset": 0, 00:29:54.001 "data_size": 65536 00:29:54.001 }, 00:29:54.001 { 00:29:54.001 "name": "BaseBdev3", 00:29:54.001 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:29:54.001 "is_configured": true, 00:29:54.001 "data_offset": 0, 00:29:54.001 "data_size": 65536 00:29:54.001 }, 00:29:54.001 { 00:29:54.001 "name": "BaseBdev4", 00:29:54.001 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:29:54.001 "is_configured": true, 00:29:54.001 "data_offset": 0, 00:29:54.001 "data_size": 65536 00:29:54.001 } 00:29:54.001 ] 00:29:54.001 }' 00:29:54.001 21:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:54.001 21:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.569 21:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:54.569 21:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:54.569 [2024-07-15 21:43:27.880574] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:54.569 21:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:54.569 21:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.569 21:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:54.829 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:55.089 [2024-07-15 21:43:28.251729] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:55.089 /dev/nbd0 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.089 1+0 records in 00:29:55.089 1+0 records out 00:29:55.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389329 s, 10.5 MB/s 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:29:55.089 21:43:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:30:01.698 65536+0 records in 00:30:01.698 65536+0 records out 00:30:01.698 33554432 bytes (34 MB, 32 MiB) copied, 5.92869 s, 5.7 MB/s 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:01.698 [2024-07-15 21:43:34.468937] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:01.698 [2024-07-15 21:43:34.704148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:01.698 "name": "raid_bdev1", 00:30:01.698 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:01.698 "strip_size_kb": 0, 00:30:01.698 "state": "online", 00:30:01.698 "raid_level": "raid1", 00:30:01.698 "superblock": false, 00:30:01.698 "num_base_bdevs": 4, 00:30:01.698 "num_base_bdevs_discovered": 3, 00:30:01.698 "num_base_bdevs_operational": 3, 00:30:01.698 "base_bdevs_list": [ 00:30:01.698 { 00:30:01.698 "name": null, 00:30:01.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.698 "is_configured": false, 00:30:01.698 "data_offset": 0, 00:30:01.698 "data_size": 65536 00:30:01.698 }, 00:30:01.698 { 00:30:01.698 "name": "BaseBdev2", 00:30:01.698 "uuid": "5308c70e-7116-535e-b368-b3b9d1df20e4", 00:30:01.698 "is_configured": true, 00:30:01.698 "data_offset": 0, 00:30:01.698 "data_size": 65536 00:30:01.698 }, 00:30:01.698 { 00:30:01.698 "name": "BaseBdev3", 00:30:01.698 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:01.698 "is_configured": true, 00:30:01.698 "data_offset": 0, 00:30:01.698 "data_size": 65536 00:30:01.698 }, 00:30:01.698 { 00:30:01.698 "name": "BaseBdev4", 00:30:01.698 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:01.698 "is_configured": true, 00:30:01.698 "data_offset": 0, 00:30:01.698 "data_size": 65536 00:30:01.698 } 00:30:01.698 ] 00:30:01.698 }' 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:01.698 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.267 21:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:02.529 [2024-07-15 21:43:35.814280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:02.529 [2024-07-15 21:43:35.832009] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0bc50 00:30:02.529 [2024-07-15 21:43:35.834060] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:02.529 21:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:03.898 21:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:03.898 21:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:03.898 21:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:03.898 21:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:03.898 21:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:03.898 21:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.898 21:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.898 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:03.898 "name": "raid_bdev1", 00:30:03.898 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:03.898 "strip_size_kb": 0, 00:30:03.898 "state": "online", 00:30:03.898 "raid_level": 
"raid1", 00:30:03.898 "superblock": false, 00:30:03.898 "num_base_bdevs": 4, 00:30:03.898 "num_base_bdevs_discovered": 4, 00:30:03.898 "num_base_bdevs_operational": 4, 00:30:03.898 "process": { 00:30:03.898 "type": "rebuild", 00:30:03.898 "target": "spare", 00:30:03.898 "progress": { 00:30:03.899 "blocks": 24576, 00:30:03.899 "percent": 37 00:30:03.899 } 00:30:03.899 }, 00:30:03.899 "base_bdevs_list": [ 00:30:03.899 { 00:30:03.899 "name": "spare", 00:30:03.899 "uuid": "59fd62c3-2871-5310-84ca-5beff2eb99bb", 00:30:03.899 "is_configured": true, 00:30:03.899 "data_offset": 0, 00:30:03.899 "data_size": 65536 00:30:03.899 }, 00:30:03.899 { 00:30:03.899 "name": "BaseBdev2", 00:30:03.899 "uuid": "5308c70e-7116-535e-b368-b3b9d1df20e4", 00:30:03.899 "is_configured": true, 00:30:03.899 "data_offset": 0, 00:30:03.899 "data_size": 65536 00:30:03.899 }, 00:30:03.899 { 00:30:03.899 "name": "BaseBdev3", 00:30:03.899 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:03.899 "is_configured": true, 00:30:03.899 "data_offset": 0, 00:30:03.899 "data_size": 65536 00:30:03.899 }, 00:30:03.899 { 00:30:03.899 "name": "BaseBdev4", 00:30:03.899 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:03.899 "is_configured": true, 00:30:03.899 "data_offset": 0, 00:30:03.899 "data_size": 65536 00:30:03.899 } 00:30:03.899 ] 00:30:03.899 }' 00:30:03.899 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:03.899 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:03.899 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:03.899 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:03.899 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:04.156 [2024-07-15 21:43:37.441044] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:04.156 [2024-07-15 21:43:37.442932] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:04.156 [2024-07-15 21:43:37.443087] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:04.156 [2024-07-15 21:43:37.443145] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:04.156 [2024-07-15 21:43:37.443177] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.156 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.414 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:04.414 "name": "raid_bdev1", 00:30:04.414 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:04.414 "strip_size_kb": 0, 00:30:04.414 "state": "online", 00:30:04.414 "raid_level": "raid1", 00:30:04.414 "superblock": false, 00:30:04.414 "num_base_bdevs": 4, 00:30:04.414 "num_base_bdevs_discovered": 3, 00:30:04.414 "num_base_bdevs_operational": 3, 00:30:04.414 "base_bdevs_list": [ 00:30:04.414 { 00:30:04.414 "name": null, 00:30:04.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.414 "is_configured": false, 00:30:04.414 "data_offset": 0, 00:30:04.414 "data_size": 65536 00:30:04.414 }, 00:30:04.414 { 00:30:04.414 "name": "BaseBdev2", 00:30:04.414 "uuid": "5308c70e-7116-535e-b368-b3b9d1df20e4", 00:30:04.414 "is_configured": true, 00:30:04.414 "data_offset": 0, 00:30:04.414 "data_size": 65536 00:30:04.414 }, 00:30:04.414 { 00:30:04.414 "name": "BaseBdev3", 00:30:04.414 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:04.414 "is_configured": true, 00:30:04.414 "data_offset": 0, 00:30:04.414 "data_size": 65536 00:30:04.414 }, 00:30:04.414 { 00:30:04.414 "name": "BaseBdev4", 00:30:04.414 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:04.414 "is_configured": true, 00:30:04.414 "data_offset": 0, 00:30:04.414 "data_size": 65536 00:30:04.414 } 00:30:04.414 ] 00:30:04.414 }' 00:30:04.414 21:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:04.414 21:43:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:05.348 "name": "raid_bdev1", 00:30:05.348 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:05.348 "strip_size_kb": 0, 00:30:05.348 "state": "online", 00:30:05.348 "raid_level": "raid1", 00:30:05.348 "superblock": false, 00:30:05.348 "num_base_bdevs": 4, 00:30:05.348 "num_base_bdevs_discovered": 3, 00:30:05.348 "num_base_bdevs_operational": 3, 00:30:05.348 "base_bdevs_list": [ 00:30:05.348 { 00:30:05.348 "name": null, 00:30:05.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.348 "is_configured": false, 00:30:05.348 "data_offset": 0, 00:30:05.348 "data_size": 65536 
00:30:05.348 }, 00:30:05.348 { 00:30:05.348 "name": "BaseBdev2", 00:30:05.348 "uuid": "5308c70e-7116-535e-b368-b3b9d1df20e4", 00:30:05.348 "is_configured": true, 00:30:05.348 "data_offset": 0, 00:30:05.348 "data_size": 65536 00:30:05.348 }, 00:30:05.348 { 00:30:05.348 "name": "BaseBdev3", 00:30:05.348 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:05.348 "is_configured": true, 00:30:05.348 "data_offset": 0, 00:30:05.348 "data_size": 65536 00:30:05.348 }, 00:30:05.348 { 00:30:05.348 "name": "BaseBdev4", 00:30:05.348 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:05.348 "is_configured": true, 00:30:05.348 "data_offset": 0, 00:30:05.348 "data_size": 65536 00:30:05.348 } 00:30:05.348 ] 00:30:05.348 }' 00:30:05.348 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:05.609 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:05.609 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:05.609 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:05.609 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:05.609 [2024-07-15 21:43:38.965627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:05.609 [2024-07-15 21:43:38.982042] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0bdf0 00:30:05.867 [2024-07-15 21:43:38.984479] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:05.867 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:06.804 21:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.804 21:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:06.804 21:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:06.804 21:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:06.804 21:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:06.804 21:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.805 21:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:07.062 "name": "raid_bdev1", 00:30:07.062 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:07.062 "strip_size_kb": 0, 00:30:07.062 "state": "online", 00:30:07.062 "raid_level": "raid1", 00:30:07.062 "superblock": false, 00:30:07.062 "num_base_bdevs": 4, 00:30:07.062 "num_base_bdevs_discovered": 4, 00:30:07.062 "num_base_bdevs_operational": 4, 00:30:07.062 "process": { 00:30:07.062 "type": "rebuild", 00:30:07.062 "target": "spare", 00:30:07.062 "progress": { 00:30:07.062 "blocks": 22528, 00:30:07.062 "percent": 34 00:30:07.062 } 00:30:07.062 }, 00:30:07.062 "base_bdevs_list": [ 00:30:07.062 { 00:30:07.062 "name": "spare", 00:30:07.062 "uuid": "59fd62c3-2871-5310-84ca-5beff2eb99bb", 00:30:07.062 "is_configured": true, 00:30:07.062 "data_offset": 0, 00:30:07.062 
"data_size": 65536 00:30:07.062 }, 00:30:07.062 { 00:30:07.062 "name": "BaseBdev2", 00:30:07.062 "uuid": "5308c70e-7116-535e-b368-b3b9d1df20e4", 00:30:07.062 "is_configured": true, 00:30:07.062 "data_offset": 0, 00:30:07.062 "data_size": 65536 00:30:07.062 }, 00:30:07.062 { 00:30:07.062 "name": "BaseBdev3", 00:30:07.062 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:07.062 "is_configured": true, 00:30:07.062 "data_offset": 0, 00:30:07.062 "data_size": 65536 00:30:07.062 }, 00:30:07.062 { 00:30:07.062 "name": "BaseBdev4", 00:30:07.062 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:07.062 "is_configured": true, 00:30:07.062 "data_offset": 0, 00:30:07.062 "data_size": 65536 00:30:07.062 } 00:30:07.062 ] 00:30:07.062 }' 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:07.062 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:07.319 [2024-07-15 21:43:40.508134] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:07.319 [2024-07-15 21:43:40.595096] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0bdf0 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.319 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.578 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:07.578 "name": "raid_bdev1", 00:30:07.578 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:07.578 "strip_size_kb": 0, 00:30:07.578 "state": "online", 00:30:07.578 "raid_level": "raid1", 00:30:07.578 "superblock": false, 00:30:07.578 "num_base_bdevs": 4, 00:30:07.578 "num_base_bdevs_discovered": 3, 00:30:07.578 "num_base_bdevs_operational": 3, 00:30:07.578 
"process": { 00:30:07.578 "type": "rebuild", 00:30:07.578 "target": "spare", 00:30:07.578 "progress": { 00:30:07.578 "blocks": 36864, 00:30:07.578 "percent": 56 00:30:07.578 } 00:30:07.578 }, 00:30:07.578 "base_bdevs_list": [ 00:30:07.578 { 00:30:07.578 "name": "spare", 00:30:07.578 "uuid": "59fd62c3-2871-5310-84ca-5beff2eb99bb", 00:30:07.578 "is_configured": true, 00:30:07.578 "data_offset": 0, 00:30:07.578 "data_size": 65536 00:30:07.578 }, 00:30:07.578 { 00:30:07.578 "name": null, 00:30:07.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.578 "is_configured": false, 00:30:07.578 "data_offset": 0, 00:30:07.578 "data_size": 65536 00:30:07.578 }, 00:30:07.578 { 00:30:07.578 "name": "BaseBdev3", 00:30:07.578 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:07.578 "is_configured": true, 00:30:07.578 "data_offset": 0, 00:30:07.578 "data_size": 65536 00:30:07.578 }, 00:30:07.578 { 00:30:07.578 "name": "BaseBdev4", 00:30:07.578 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:07.578 "is_configured": true, 00:30:07.578 "data_offset": 0, 00:30:07.578 "data_size": 65536 00:30:07.578 } 00:30:07.578 ] 00:30:07.578 }' 00:30:07.578 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:07.578 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:07.578 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:07.835 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:07.835 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=873 00:30:07.835 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:07.835 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:07.836 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:07.836 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:07.836 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:07.836 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:07.836 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.836 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.836 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:07.836 "name": "raid_bdev1", 00:30:07.836 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:07.836 "strip_size_kb": 0, 00:30:07.836 "state": "online", 00:30:07.836 "raid_level": "raid1", 00:30:07.836 "superblock": false, 00:30:07.836 "num_base_bdevs": 4, 00:30:07.836 "num_base_bdevs_discovered": 3, 00:30:07.836 "num_base_bdevs_operational": 3, 00:30:07.836 "process": { 00:30:07.836 "type": "rebuild", 00:30:07.836 "target": "spare", 00:30:07.836 "progress": { 00:30:07.836 "blocks": 43008, 00:30:07.836 "percent": 65 00:30:07.836 } 00:30:07.836 }, 00:30:07.836 "base_bdevs_list": [ 00:30:07.836 { 00:30:07.836 "name": "spare", 00:30:07.836 "uuid": "59fd62c3-2871-5310-84ca-5beff2eb99bb", 00:30:07.836 "is_configured": true, 00:30:07.836 "data_offset": 0, 00:30:07.836 "data_size": 65536 00:30:07.836 }, 
00:30:07.836 { 00:30:07.836 "name": null, 00:30:07.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.836 "is_configured": false, 00:30:07.836 "data_offset": 0, 00:30:07.836 "data_size": 65536 00:30:07.836 }, 00:30:07.836 { 00:30:07.836 "name": "BaseBdev3", 00:30:07.836 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:07.836 "is_configured": true, 00:30:07.836 "data_offset": 0, 00:30:07.836 "data_size": 65536 00:30:07.836 }, 00:30:07.836 { 00:30:07.836 "name": "BaseBdev4", 00:30:07.836 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:07.836 "is_configured": true, 00:30:07.836 "data_offset": 0, 00:30:07.836 "data_size": 65536 00:30:07.836 } 00:30:07.836 ] 00:30:07.836 }' 00:30:07.836 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:07.836 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:07.836 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:08.094 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:08.094 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:09.031 [2024-07-15 21:43:42.206427] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:09.031 [2024-07-15 21:43:42.206623] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:09.031 [2024-07-15 21:43:42.206716] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:09.031 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:09.031 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.031 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:09.031 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:09.031 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:09.031 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:09.032 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.032 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.290 "name": "raid_bdev1", 00:30:09.290 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:09.290 "strip_size_kb": 0, 00:30:09.290 "state": "online", 00:30:09.290 "raid_level": "raid1", 00:30:09.290 "superblock": false, 00:30:09.290 "num_base_bdevs": 4, 00:30:09.290 "num_base_bdevs_discovered": 3, 00:30:09.290 "num_base_bdevs_operational": 3, 00:30:09.290 "base_bdevs_list": [ 00:30:09.290 { 00:30:09.290 "name": "spare", 00:30:09.290 "uuid": "59fd62c3-2871-5310-84ca-5beff2eb99bb", 00:30:09.290 "is_configured": true, 00:30:09.290 "data_offset": 0, 00:30:09.290 "data_size": 65536 00:30:09.290 }, 00:30:09.290 { 00:30:09.290 "name": null, 00:30:09.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.290 "is_configured": false, 00:30:09.290 "data_offset": 0, 00:30:09.290 "data_size": 65536 00:30:09.290 }, 00:30:09.290 { 00:30:09.290 "name": "BaseBdev3", 00:30:09.290 
"uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:09.290 "is_configured": true, 00:30:09.290 "data_offset": 0, 00:30:09.290 "data_size": 65536 00:30:09.290 }, 00:30:09.290 { 00:30:09.290 "name": "BaseBdev4", 00:30:09.290 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:09.290 "is_configured": true, 00:30:09.290 "data_offset": 0, 00:30:09.290 "data_size": 65536 00:30:09.290 } 00:30:09.290 ] 00:30:09.290 }' 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.290 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.549 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.549 "name": "raid_bdev1", 00:30:09.549 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:09.549 "strip_size_kb": 0, 00:30:09.549 "state": "online", 00:30:09.549 "raid_level": "raid1", 00:30:09.549 "superblock": false, 00:30:09.549 "num_base_bdevs": 4, 00:30:09.549 "num_base_bdevs_discovered": 3, 00:30:09.549 "num_base_bdevs_operational": 3, 00:30:09.549 "base_bdevs_list": [ 00:30:09.549 { 00:30:09.549 "name": "spare", 00:30:09.549 "uuid": "59fd62c3-2871-5310-84ca-5beff2eb99bb", 00:30:09.549 "is_configured": true, 00:30:09.549 "data_offset": 0, 00:30:09.549 "data_size": 65536 00:30:09.549 }, 00:30:09.549 { 00:30:09.549 "name": null, 00:30:09.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.549 "is_configured": false, 00:30:09.549 "data_offset": 0, 00:30:09.549 "data_size": 65536 00:30:09.549 }, 00:30:09.549 { 00:30:09.549 "name": "BaseBdev3", 00:30:09.549 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:09.549 "is_configured": true, 00:30:09.549 "data_offset": 0, 00:30:09.549 "data_size": 65536 00:30:09.549 }, 00:30:09.549 { 00:30:09.550 "name": "BaseBdev4", 00:30:09.550 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:09.550 "is_configured": true, 00:30:09.550 "data_offset": 0, 00:30:09.550 "data_size": 65536 00:30:09.550 } 00:30:09.550 ] 00:30:09.550 }' 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:09.550 21:43:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.550 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.809 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.809 "name": "raid_bdev1", 00:30:09.809 "uuid": "7f9d2007-2981-47f9-b414-e30fc8e31f68", 00:30:09.809 "strip_size_kb": 0, 00:30:09.809 "state": "online", 00:30:09.809 "raid_level": "raid1", 00:30:09.809 "superblock": false, 00:30:09.809 "num_base_bdevs": 4, 00:30:09.809 "num_base_bdevs_discovered": 3, 00:30:09.809 "num_base_bdevs_operational": 3, 00:30:09.809 "base_bdevs_list": [ 00:30:09.809 { 00:30:09.809 "name": "spare", 00:30:09.809 "uuid": "59fd62c3-2871-5310-84ca-5beff2eb99bb", 00:30:09.809 "is_configured": true, 00:30:09.809 "data_offset": 0, 00:30:09.809 "data_size": 65536 00:30:09.809 }, 00:30:09.809 { 00:30:09.809 "name": null, 00:30:09.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.809 "is_configured": false, 00:30:09.809 "data_offset": 0, 00:30:09.809 "data_size": 65536 00:30:09.809 }, 00:30:09.809 { 00:30:09.809 "name": "BaseBdev3", 00:30:09.809 "uuid": "e296d438-f613-55e1-9c36-b8b13bf64393", 00:30:09.809 "is_configured": true, 00:30:09.809 "data_offset": 0, 00:30:09.809 "data_size": 65536 00:30:09.809 }, 00:30:09.809 { 00:30:09.809 "name": "BaseBdev4", 00:30:09.809 "uuid": "f864b99c-5f55-50d4-9065-ee6d7a09d359", 00:30:09.809 "is_configured": true, 00:30:09.809 "data_offset": 0, 00:30:09.809 "data_size": 65536 00:30:09.809 } 00:30:09.809 ] 00:30:09.809 }' 00:30:09.809 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.809 21:43:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.744 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:10.744 [2024-07-15 21:43:43.929650] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:10.744 [2024-07-15 21:43:43.929731] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:10.744 [2024-07-15 21:43:43.929862] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:10.744 [2024-07-15 21:43:43.929963] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:10.744 [2024-07-15 21:43:43.930031] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:30:10.744 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.744 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.004 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:11.263 /dev/nbd0 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:11.263 1+0 records in 00:30:11.263 1+0 records out 00:30:11.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304713 s, 13.4 MB/s 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.263 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:11.263 /dev/nbd1 00:30:11.521 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:11.521 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:11.521 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:11.522 1+0 records in 00:30:11.522 1+0 records out 00:30:11.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397806 s, 10.3 MB/s 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@51 -- # local i 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:11.522 21:43:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:11.781 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:11.781 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:11.781 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:11.781 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:11.781 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:11.781 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:11.781 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 148278 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 148278 ']' 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 148278 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148278 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 148278' 00:30:12.041 killing process with pid 148278 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 148278 00:30:12.041 Received shutdown signal, test time was about 60.000000 seconds 00:30:12.041 00:30:12.041 Latency(us) 00:30:12.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.041 =================================================================================================================== 00:30:12.041 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:12.041 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 148278 00:30:12.041 [2024-07-15 21:43:45.411314] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:12.611 [2024-07-15 21:43:45.944081] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:13.989 ************************************ 00:30:13.989 END TEST raid_rebuild_test 00:30:13.989 ************************************ 00:30:13.989 21:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:30:13.989 00:30:13.989 real 0m23.791s 00:30:13.989 user 0m32.657s 00:30:13.989 sys 0m3.989s 00:30:13.989 21:43:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:13.989 21:43:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.248 21:43:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:14.248 21:43:47 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:30:14.248 21:43:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:14.248 21:43:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.248 21:43:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:14.248 ************************************ 00:30:14.248 START TEST raid_rebuild_test_sb 00:30:14.248 ************************************ 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:14.248 21:43:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=148904 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 148904 /var/tmp/spdk-raid.sock 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 148904 ']' 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:14.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:14.249 21:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.249 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:14.249 Zero copy mechanism will not be used. 00:30:14.249 [2024-07-15 21:43:47.500168] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:30:14.249 [2024-07-15 21:43:47.500325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148904 ] 00:30:14.507 [2024-07-15 21:43:47.666164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.766 [2024-07-15 21:43:47.923184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.025 [2024-07-15 21:43:48.184237] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:15.025 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:15.025 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:30:15.025 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:15.025 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:15.284 BaseBdev1_malloc 00:30:15.284 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:15.541 [2024-07-15 21:43:48.760457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:15.541 [2024-07-15 21:43:48.760613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:15.541 [2024-07-15 21:43:48.760655] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:30:15.541 [2024-07-15 21:43:48.760674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:15.541 [2024-07-15 21:43:48.763307] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:15.541 [2024-07-15 21:43:48.763358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:15.541 BaseBdev1 00:30:15.541 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:15.541 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:15.797 BaseBdev2_malloc 00:30:15.797 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:16.053 [2024-07-15 21:43:49.276141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:16.053 [2024-07-15 21:43:49.276272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.053 [2024-07-15 21:43:49.276308] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:30:16.053 [2024-07-15 21:43:49.276325] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.053 [2024-07-15 21:43:49.278653] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.053 [2024-07-15 21:43:49.278699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:16.053 BaseBdev2 00:30:16.053 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # 
for bdev in "${base_bdevs[@]}" 00:30:16.053 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:16.310 BaseBdev3_malloc 00:30:16.310 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:16.568 [2024-07-15 21:43:49.708299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:16.568 [2024-07-15 21:43:49.708431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.568 [2024-07-15 21:43:49.708470] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:30:16.568 [2024-07-15 21:43:49.708495] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.568 [2024-07-15 21:43:49.710960] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.568 [2024-07-15 21:43:49.711014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:16.568 BaseBdev3 00:30:16.568 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:16.568 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:16.825 BaseBdev4_malloc 00:30:16.825 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:16.825 [2024-07-15 21:43:50.124673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:16.825 [2024-07-15 21:43:50.124786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.825 [2024-07-15 21:43:50.124819] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:16.825 [2024-07-15 21:43:50.124842] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.825 [2024-07-15 21:43:50.127120] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.825 [2024-07-15 21:43:50.127169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:16.825 BaseBdev4 00:30:16.825 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:17.083 spare_malloc 00:30:17.083 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:17.342 spare_delay 00:30:17.342 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:17.599 [2024-07-15 21:43:50.811630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:17.599 [2024-07-15 21:43:50.811798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.599 [2024-07-15 21:43:50.811829] vbdev_passthru.c: 680:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000a580 00:30:17.599 [2024-07-15 21:43:50.811856] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.599 [2024-07-15 21:43:50.813986] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.599 [2024-07-15 21:43:50.814038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:17.599 spare 00:30:17.599 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:17.856 [2024-07-15 21:43:51.019333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:17.856 [2024-07-15 21:43:51.021094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:17.856 [2024-07-15 21:43:51.021165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:17.856 [2024-07-15 21:43:51.021211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:17.856 [2024-07-15 21:43:51.021444] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:30:17.856 [2024-07-15 21:43:51.021464] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:17.856 [2024-07-15 21:43:51.021607] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:17.856 [2024-07-15 21:43:51.021937] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:30:17.856 [2024-07-15 21:43:51.021957] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:30:17.856 [2024-07-15 21:43:51.022123] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.856 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.113 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:18.113 "name": "raid_bdev1", 00:30:18.113 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 
00:30:18.113 "strip_size_kb": 0, 00:30:18.113 "state": "online", 00:30:18.113 "raid_level": "raid1", 00:30:18.113 "superblock": true, 00:30:18.113 "num_base_bdevs": 4, 00:30:18.113 "num_base_bdevs_discovered": 4, 00:30:18.113 "num_base_bdevs_operational": 4, 00:30:18.113 "base_bdevs_list": [ 00:30:18.113 { 00:30:18.113 "name": "BaseBdev1", 00:30:18.113 "uuid": "63e7ef6e-3b90-58ab-90da-07f7e107b0b3", 00:30:18.113 "is_configured": true, 00:30:18.113 "data_offset": 2048, 00:30:18.113 "data_size": 63488 00:30:18.113 }, 00:30:18.113 { 00:30:18.113 "name": "BaseBdev2", 00:30:18.113 "uuid": "2b335eb5-7d51-5963-95e5-f94bb5fe7631", 00:30:18.113 "is_configured": true, 00:30:18.113 "data_offset": 2048, 00:30:18.113 "data_size": 63488 00:30:18.114 }, 00:30:18.114 { 00:30:18.114 "name": "BaseBdev3", 00:30:18.114 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:18.114 "is_configured": true, 00:30:18.114 "data_offset": 2048, 00:30:18.114 "data_size": 63488 00:30:18.114 }, 00:30:18.114 { 00:30:18.114 "name": "BaseBdev4", 00:30:18.114 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:18.114 "is_configured": true, 00:30:18.114 "data_offset": 2048, 00:30:18.114 "data_size": 63488 00:30:18.114 } 00:30:18.114 ] 00:30:18.114 }' 00:30:18.114 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:18.114 21:43:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.676 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:18.676 21:43:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:18.676 [2024-07-15 21:43:52.021911] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:18.676 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:30:18.676 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.676 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:18.933 21:43:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:18.933 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:19.192 [2024-07-15 21:43:52.392970] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:19.192 /dev/nbd0 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:19.192 1+0 records in 00:30:19.192 1+0 records out 00:30:19.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340421 s, 12.0 MB/s 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:30:19.192 21:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:25.752 63488+0 records in 00:30:25.752 63488+0 records out 00:30:25.752 32505856 bytes (33 MB, 31 MiB) copied, 5.44368 s, 6.0 MB/s 00:30:25.752 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:25.752 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:25.752 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:25.752 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:25.752 21:43:57 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:25.752 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:25.752 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:25.752 [2024-07-15 21:43:58.096082] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:25.752 [2024-07-15 21:43:58.283330] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.752 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.752 "name": "raid_bdev1", 00:30:25.752 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:25.752 "strip_size_kb": 0, 00:30:25.752 "state": "online", 00:30:25.752 "raid_level": "raid1", 00:30:25.752 "superblock": true, 00:30:25.752 "num_base_bdevs": 4, 00:30:25.752 "num_base_bdevs_discovered": 3, 00:30:25.752 "num_base_bdevs_operational": 3, 00:30:25.752 "base_bdevs_list": [ 00:30:25.752 { 00:30:25.752 "name": null, 00:30:25.752 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:25.752 "is_configured": false, 00:30:25.752 "data_offset": 2048, 00:30:25.752 "data_size": 63488 00:30:25.752 }, 00:30:25.752 { 00:30:25.752 "name": "BaseBdev2", 00:30:25.752 "uuid": "2b335eb5-7d51-5963-95e5-f94bb5fe7631", 00:30:25.752 "is_configured": true, 00:30:25.752 "data_offset": 2048, 00:30:25.752 "data_size": 63488 00:30:25.752 }, 00:30:25.752 { 00:30:25.753 "name": "BaseBdev3", 00:30:25.753 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:25.753 "is_configured": true, 00:30:25.753 "data_offset": 2048, 00:30:25.753 "data_size": 63488 00:30:25.753 }, 00:30:25.753 { 00:30:25.753 "name": "BaseBdev4", 00:30:25.753 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:25.753 "is_configured": true, 00:30:25.753 "data_offset": 2048, 00:30:25.753 "data_size": 63488 00:30:25.753 } 00:30:25.753 ] 00:30:25.753 }' 00:30:25.753 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.753 21:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:25.753 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:26.011 [2024-07-15 21:43:59.261676] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:26.011 [2024-07-15 21:43:59.276220] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca53e0 00:30:26.011 [2024-07-15 21:43:59.278390] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:26.011 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:26.947 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:26.947 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:26.947 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:26.947 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:26.947 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:26.947 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.947 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.206 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:27.206 "name": "raid_bdev1", 00:30:27.206 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:27.206 "strip_size_kb": 0, 00:30:27.206 "state": "online", 00:30:27.206 "raid_level": "raid1", 00:30:27.206 "superblock": true, 00:30:27.206 "num_base_bdevs": 4, 00:30:27.206 "num_base_bdevs_discovered": 4, 00:30:27.206 "num_base_bdevs_operational": 4, 00:30:27.206 "process": { 00:30:27.206 "type": "rebuild", 00:30:27.206 "target": "spare", 00:30:27.206 "progress": { 00:30:27.206 "blocks": 24576, 00:30:27.206 "percent": 38 00:30:27.206 } 00:30:27.206 }, 00:30:27.206 "base_bdevs_list": [ 00:30:27.206 { 00:30:27.206 "name": "spare", 00:30:27.206 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:27.206 "is_configured": true, 00:30:27.206 "data_offset": 2048, 00:30:27.206 "data_size": 63488 00:30:27.206 }, 00:30:27.206 { 
00:30:27.206 "name": "BaseBdev2", 00:30:27.206 "uuid": "2b335eb5-7d51-5963-95e5-f94bb5fe7631", 00:30:27.206 "is_configured": true, 00:30:27.206 "data_offset": 2048, 00:30:27.206 "data_size": 63488 00:30:27.206 }, 00:30:27.206 { 00:30:27.206 "name": "BaseBdev3", 00:30:27.206 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:27.206 "is_configured": true, 00:30:27.206 "data_offset": 2048, 00:30:27.206 "data_size": 63488 00:30:27.206 }, 00:30:27.206 { 00:30:27.206 "name": "BaseBdev4", 00:30:27.206 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:27.206 "is_configured": true, 00:30:27.206 "data_offset": 2048, 00:30:27.206 "data_size": 63488 00:30:27.206 } 00:30:27.206 ] 00:30:27.206 }' 00:30:27.206 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:27.206 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:27.206 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:27.465 [2024-07-15 21:44:00.774872] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:27.465 [2024-07-15 21:44:00.788828] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:27.465 [2024-07-15 21:44:00.788930] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:27.465 [2024-07-15 21:44:00.788957] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:27.465 [2024-07-15 21:44:00.788965] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.465 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.723 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:27.723 "name": "raid_bdev1", 00:30:27.723 "uuid": 
"8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:27.723 "strip_size_kb": 0, 00:30:27.723 "state": "online", 00:30:27.723 "raid_level": "raid1", 00:30:27.723 "superblock": true, 00:30:27.723 "num_base_bdevs": 4, 00:30:27.723 "num_base_bdevs_discovered": 3, 00:30:27.723 "num_base_bdevs_operational": 3, 00:30:27.723 "base_bdevs_list": [ 00:30:27.723 { 00:30:27.723 "name": null, 00:30:27.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.723 "is_configured": false, 00:30:27.723 "data_offset": 2048, 00:30:27.723 "data_size": 63488 00:30:27.723 }, 00:30:27.723 { 00:30:27.723 "name": "BaseBdev2", 00:30:27.723 "uuid": "2b335eb5-7d51-5963-95e5-f94bb5fe7631", 00:30:27.723 "is_configured": true, 00:30:27.723 "data_offset": 2048, 00:30:27.723 "data_size": 63488 00:30:27.723 }, 00:30:27.723 { 00:30:27.723 "name": "BaseBdev3", 00:30:27.723 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:27.723 "is_configured": true, 00:30:27.723 "data_offset": 2048, 00:30:27.723 "data_size": 63488 00:30:27.723 }, 00:30:27.723 { 00:30:27.723 "name": "BaseBdev4", 00:30:27.723 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:27.723 "is_configured": true, 00:30:27.723 "data_offset": 2048, 00:30:27.723 "data_size": 63488 00:30:27.723 } 00:30:27.723 ] 00:30:27.723 }' 00:30:27.723 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:27.723 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:28.290 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:28.290 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:28.290 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:28.290 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:28.291 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:28.291 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.291 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.549 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:28.549 "name": "raid_bdev1", 00:30:28.549 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:28.549 "strip_size_kb": 0, 00:30:28.549 "state": "online", 00:30:28.549 "raid_level": "raid1", 00:30:28.549 "superblock": true, 00:30:28.549 "num_base_bdevs": 4, 00:30:28.549 "num_base_bdevs_discovered": 3, 00:30:28.549 "num_base_bdevs_operational": 3, 00:30:28.549 "base_bdevs_list": [ 00:30:28.549 { 00:30:28.549 "name": null, 00:30:28.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.549 "is_configured": false, 00:30:28.549 "data_offset": 2048, 00:30:28.549 "data_size": 63488 00:30:28.549 }, 00:30:28.549 { 00:30:28.549 "name": "BaseBdev2", 00:30:28.549 "uuid": "2b335eb5-7d51-5963-95e5-f94bb5fe7631", 00:30:28.549 "is_configured": true, 00:30:28.549 "data_offset": 2048, 00:30:28.549 "data_size": 63488 00:30:28.549 }, 00:30:28.549 { 00:30:28.549 "name": "BaseBdev3", 00:30:28.549 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:28.549 "is_configured": true, 00:30:28.549 "data_offset": 2048, 00:30:28.549 "data_size": 63488 00:30:28.549 }, 00:30:28.549 { 00:30:28.549 "name": "BaseBdev4", 00:30:28.549 
"uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:28.549 "is_configured": true, 00:30:28.549 "data_offset": 2048, 00:30:28.549 "data_size": 63488 00:30:28.549 } 00:30:28.549 ] 00:30:28.549 }' 00:30:28.549 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:28.549 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:28.549 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:28.808 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:28.808 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:28.808 [2024-07-15 21:44:02.135745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:28.808 [2024-07-15 21:44:02.153275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5580 00:30:28.808 [2024-07-15 21:44:02.155553] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:28.808 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:30.184 "name": "raid_bdev1", 00:30:30.184 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:30.184 "strip_size_kb": 0, 00:30:30.184 "state": "online", 00:30:30.184 "raid_level": "raid1", 00:30:30.184 "superblock": true, 00:30:30.184 "num_base_bdevs": 4, 00:30:30.184 "num_base_bdevs_discovered": 4, 00:30:30.184 "num_base_bdevs_operational": 4, 00:30:30.184 "process": { 00:30:30.184 "type": "rebuild", 00:30:30.184 "target": "spare", 00:30:30.184 "progress": { 00:30:30.184 "blocks": 22528, 00:30:30.184 "percent": 35 00:30:30.184 } 00:30:30.184 }, 00:30:30.184 "base_bdevs_list": [ 00:30:30.184 { 00:30:30.184 "name": "spare", 00:30:30.184 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:30.184 "is_configured": true, 00:30:30.184 "data_offset": 2048, 00:30:30.184 "data_size": 63488 00:30:30.184 }, 00:30:30.184 { 00:30:30.184 "name": "BaseBdev2", 00:30:30.184 "uuid": "2b335eb5-7d51-5963-95e5-f94bb5fe7631", 00:30:30.184 "is_configured": true, 00:30:30.184 "data_offset": 2048, 00:30:30.184 "data_size": 63488 00:30:30.184 }, 00:30:30.184 { 00:30:30.184 "name": "BaseBdev3", 00:30:30.184 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:30.184 "is_configured": true, 00:30:30.184 "data_offset": 2048, 00:30:30.184 "data_size": 63488 00:30:30.184 
}, 00:30:30.184 { 00:30:30.184 "name": "BaseBdev4", 00:30:30.184 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:30.184 "is_configured": true, 00:30:30.184 "data_offset": 2048, 00:30:30.184 "data_size": 63488 00:30:30.184 } 00:30:30.184 ] 00:30:30.184 }' 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:30.184 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:30.184 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:30.442 [2024-07-15 21:44:03.667222] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:30.701 [2024-07-15 21:44:03.866098] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5580 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.701 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.959 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:30.960 "name": "raid_bdev1", 00:30:30.960 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:30.960 "strip_size_kb": 0, 00:30:30.960 "state": "online", 00:30:30.960 "raid_level": "raid1", 00:30:30.960 "superblock": true, 00:30:30.960 "num_base_bdevs": 4, 00:30:30.960 "num_base_bdevs_discovered": 3, 00:30:30.960 "num_base_bdevs_operational": 3, 00:30:30.960 "process": { 00:30:30.960 "type": "rebuild", 00:30:30.960 "target": "spare", 00:30:30.960 "progress": { 00:30:30.960 "blocks": 36864, 00:30:30.960 "percent": 58 00:30:30.960 } 00:30:30.960 }, 00:30:30.960 
"base_bdevs_list": [ 00:30:30.960 { 00:30:30.960 "name": "spare", 00:30:30.960 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:30.960 "is_configured": true, 00:30:30.960 "data_offset": 2048, 00:30:30.960 "data_size": 63488 00:30:30.960 }, 00:30:30.960 { 00:30:30.960 "name": null, 00:30:30.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.960 "is_configured": false, 00:30:30.960 "data_offset": 2048, 00:30:30.960 "data_size": 63488 00:30:30.960 }, 00:30:30.960 { 00:30:30.960 "name": "BaseBdev3", 00:30:30.960 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:30.960 "is_configured": true, 00:30:30.960 "data_offset": 2048, 00:30:30.960 "data_size": 63488 00:30:30.960 }, 00:30:30.960 { 00:30:30.960 "name": "BaseBdev4", 00:30:30.960 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:30.960 "is_configured": true, 00:30:30.960 "data_offset": 2048, 00:30:30.960 "data_size": 63488 00:30:30.960 } 00:30:30.960 ] 00:30:30.960 }' 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=897 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.960 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.218 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:31.218 "name": "raid_bdev1", 00:30:31.218 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:31.218 "strip_size_kb": 0, 00:30:31.218 "state": "online", 00:30:31.218 "raid_level": "raid1", 00:30:31.218 "superblock": true, 00:30:31.218 "num_base_bdevs": 4, 00:30:31.218 "num_base_bdevs_discovered": 3, 00:30:31.218 "num_base_bdevs_operational": 3, 00:30:31.218 "process": { 00:30:31.218 "type": "rebuild", 00:30:31.218 "target": "spare", 00:30:31.218 "progress": { 00:30:31.218 "blocks": 40960, 00:30:31.218 "percent": 64 00:30:31.218 } 00:30:31.218 }, 00:30:31.218 "base_bdevs_list": [ 00:30:31.218 { 00:30:31.218 "name": "spare", 00:30:31.219 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:31.219 "is_configured": true, 00:30:31.219 "data_offset": 2048, 00:30:31.219 "data_size": 63488 00:30:31.219 }, 00:30:31.219 { 00:30:31.219 "name": null, 00:30:31.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.219 "is_configured": false, 
00:30:31.219 "data_offset": 2048, 00:30:31.219 "data_size": 63488 00:30:31.219 }, 00:30:31.219 { 00:30:31.219 "name": "BaseBdev3", 00:30:31.219 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:31.219 "is_configured": true, 00:30:31.219 "data_offset": 2048, 00:30:31.219 "data_size": 63488 00:30:31.219 }, 00:30:31.219 { 00:30:31.219 "name": "BaseBdev4", 00:30:31.219 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:31.219 "is_configured": true, 00:30:31.219 "data_offset": 2048, 00:30:31.219 "data_size": 63488 00:30:31.219 } 00:30:31.219 ] 00:30:31.219 }' 00:30:31.219 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:31.219 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:31.219 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:31.219 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:31.219 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:32.154 [2024-07-15 21:44:05.375947] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:32.154 [2024-07-15 21:44:05.376056] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:32.154 [2024-07-15 21:44:05.376225] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:32.154 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:32.154 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.154 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.154 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:32.154 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:32.154 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.154 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.154 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.414 "name": "raid_bdev1", 00:30:32.414 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:32.414 "strip_size_kb": 0, 00:30:32.414 "state": "online", 00:30:32.414 "raid_level": "raid1", 00:30:32.414 "superblock": true, 00:30:32.414 "num_base_bdevs": 4, 00:30:32.414 "num_base_bdevs_discovered": 3, 00:30:32.414 "num_base_bdevs_operational": 3, 00:30:32.414 "base_bdevs_list": [ 00:30:32.414 { 00:30:32.414 "name": "spare", 00:30:32.414 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:32.414 "is_configured": true, 00:30:32.414 "data_offset": 2048, 00:30:32.414 "data_size": 63488 00:30:32.414 }, 00:30:32.414 { 00:30:32.414 "name": null, 00:30:32.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.414 "is_configured": false, 00:30:32.414 "data_offset": 2048, 00:30:32.414 "data_size": 63488 00:30:32.414 }, 00:30:32.414 { 00:30:32.414 "name": "BaseBdev3", 00:30:32.414 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:32.414 "is_configured": true, 
00:30:32.414 "data_offset": 2048, 00:30:32.414 "data_size": 63488 00:30:32.414 }, 00:30:32.414 { 00:30:32.414 "name": "BaseBdev4", 00:30:32.414 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:32.414 "is_configured": true, 00:30:32.414 "data_offset": 2048, 00:30:32.414 "data_size": 63488 00:30:32.414 } 00:30:32.414 ] 00:30:32.414 }' 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.414 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.672 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.672 "name": "raid_bdev1", 00:30:32.672 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:32.672 "strip_size_kb": 0, 00:30:32.672 "state": "online", 00:30:32.672 "raid_level": "raid1", 00:30:32.672 "superblock": true, 00:30:32.672 "num_base_bdevs": 4, 00:30:32.672 "num_base_bdevs_discovered": 3, 00:30:32.672 "num_base_bdevs_operational": 3, 00:30:32.672 "base_bdevs_list": [ 00:30:32.672 { 00:30:32.672 "name": "spare", 00:30:32.672 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:32.672 "is_configured": true, 00:30:32.672 "data_offset": 2048, 00:30:32.672 "data_size": 63488 00:30:32.672 }, 00:30:32.672 { 00:30:32.672 "name": null, 00:30:32.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.672 "is_configured": false, 00:30:32.672 "data_offset": 2048, 00:30:32.672 "data_size": 63488 00:30:32.672 }, 00:30:32.672 { 00:30:32.672 "name": "BaseBdev3", 00:30:32.672 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:32.672 "is_configured": true, 00:30:32.672 "data_offset": 2048, 00:30:32.672 "data_size": 63488 00:30:32.672 }, 00:30:32.672 { 00:30:32.672 "name": "BaseBdev4", 00:30:32.672 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:32.672 "is_configured": true, 00:30:32.672 "data_offset": 2048, 00:30:32.672 "data_size": 63488 00:30:32.672 } 00:30:32.672 ] 00:30:32.672 }' 00:30:32.672 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:32.672 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:32.672 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:32.929 21:44:06 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:32.929 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:32.929 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:32.929 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:32.929 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:32.929 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:32.929 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:32.930 "name": "raid_bdev1", 00:30:32.930 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:32.930 "strip_size_kb": 0, 00:30:32.930 "state": "online", 00:30:32.930 "raid_level": "raid1", 00:30:32.930 "superblock": true, 00:30:32.930 "num_base_bdevs": 4, 00:30:32.930 "num_base_bdevs_discovered": 3, 00:30:32.930 "num_base_bdevs_operational": 3, 00:30:32.930 "base_bdevs_list": [ 00:30:32.930 { 00:30:32.930 "name": "spare", 00:30:32.930 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:32.930 "is_configured": true, 00:30:32.930 "data_offset": 2048, 00:30:32.930 "data_size": 63488 00:30:32.930 }, 00:30:32.930 { 00:30:32.930 "name": null, 00:30:32.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.930 "is_configured": false, 00:30:32.930 "data_offset": 2048, 00:30:32.930 "data_size": 63488 00:30:32.930 }, 00:30:32.930 { 00:30:32.930 "name": "BaseBdev3", 00:30:32.930 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:32.930 "is_configured": true, 00:30:32.930 "data_offset": 2048, 00:30:32.930 "data_size": 63488 00:30:32.930 }, 00:30:32.930 { 00:30:32.930 "name": "BaseBdev4", 00:30:32.930 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:32.930 "is_configured": true, 00:30:32.930 "data_offset": 2048, 00:30:32.930 "data_size": 63488 00:30:32.930 } 00:30:32.930 ] 00:30:32.930 }' 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:32.930 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:33.864 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:33.864 [2024-07-15 21:44:07.094631] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:33.864 [2024-07-15 21:44:07.094698] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:33.864 [2024-07-15 21:44:07.094832] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:33.864 [2024-07-15 21:44:07.094927] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:33.864 [2024-07-15 21:44:07.094938] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:30:33.864 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:30:33.864 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:34.123 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:34.382 /dev/nbd0 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:34.382 1+0 records in 00:30:34.382 1+0 records out 00:30:34.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449866 s, 9.1 MB/s 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:34.382 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:34.641 /dev/nbd1 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:34.641 1+0 records in 00:30:34.641 1+0 records out 00:30:34.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521675 s, 7.9 MB/s 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:34.641 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:34.900 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:34.900 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:34.900 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:34.901 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:34.901 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:34.901 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:34.901 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:34.901 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:34.901 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:34.901 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:35.159 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:35.159 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:35.159 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:35.159 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.159 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.159 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:35.159 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:35.418 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:35.418 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.418 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:35.418 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:35.418 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.418 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:35.418 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:35.418 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:35.678 [2024-07-15 21:44:08.915968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:35.678 [2024-07-15 21:44:08.916065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.678 [2024-07-15 
21:44:08.916105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:35.678 [2024-07-15 21:44:08.916129] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.678 [2024-07-15 21:44:08.918192] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.678 [2024-07-15 21:44:08.918249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:35.678 [2024-07-15 21:44:08.918357] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:35.678 [2024-07-15 21:44:08.918424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:35.678 [2024-07-15 21:44:08.918581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:35.678 [2024-07-15 21:44:08.918723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:35.678 spare 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.678 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.678 [2024-07-15 21:44:09.018647] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:30:35.678 [2024-07-15 21:44:09.018695] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:35.678 [2024-07-15 21:44:09.018954] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc53c0 00:30:35.678 [2024-07-15 21:44:09.019491] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:30:35.678 [2024-07-15 21:44:09.019518] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:30:35.678 [2024-07-15 21:44:09.019692] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.936 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:35.936 "name": "raid_bdev1", 00:30:35.936 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:35.936 "strip_size_kb": 0, 00:30:35.936 "state": "online", 00:30:35.936 "raid_level": "raid1", 00:30:35.936 "superblock": true, 00:30:35.936 "num_base_bdevs": 4, 00:30:35.936 
"num_base_bdevs_discovered": 3, 00:30:35.936 "num_base_bdevs_operational": 3, 00:30:35.936 "base_bdevs_list": [ 00:30:35.936 { 00:30:35.936 "name": "spare", 00:30:35.936 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:35.936 "is_configured": true, 00:30:35.936 "data_offset": 2048, 00:30:35.936 "data_size": 63488 00:30:35.936 }, 00:30:35.936 { 00:30:35.936 "name": null, 00:30:35.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.936 "is_configured": false, 00:30:35.936 "data_offset": 2048, 00:30:35.936 "data_size": 63488 00:30:35.936 }, 00:30:35.936 { 00:30:35.936 "name": "BaseBdev3", 00:30:35.936 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:35.936 "is_configured": true, 00:30:35.936 "data_offset": 2048, 00:30:35.936 "data_size": 63488 00:30:35.936 }, 00:30:35.936 { 00:30:35.936 "name": "BaseBdev4", 00:30:35.936 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:35.936 "is_configured": true, 00:30:35.936 "data_offset": 2048, 00:30:35.936 "data_size": 63488 00:30:35.936 } 00:30:35.936 ] 00:30:35.936 }' 00:30:35.936 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:35.937 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.504 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:36.504 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:36.504 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:36.504 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:36.504 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:36.504 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.504 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.762 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:36.762 "name": "raid_bdev1", 00:30:36.762 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:36.762 "strip_size_kb": 0, 00:30:36.762 "state": "online", 00:30:36.762 "raid_level": "raid1", 00:30:36.762 "superblock": true, 00:30:36.762 "num_base_bdevs": 4, 00:30:36.762 "num_base_bdevs_discovered": 3, 00:30:36.762 "num_base_bdevs_operational": 3, 00:30:36.762 "base_bdevs_list": [ 00:30:36.762 { 00:30:36.762 "name": "spare", 00:30:36.762 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:36.762 "is_configured": true, 00:30:36.762 "data_offset": 2048, 00:30:36.762 "data_size": 63488 00:30:36.762 }, 00:30:36.762 { 00:30:36.762 "name": null, 00:30:36.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.762 "is_configured": false, 00:30:36.762 "data_offset": 2048, 00:30:36.762 "data_size": 63488 00:30:36.762 }, 00:30:36.762 { 00:30:36.762 "name": "BaseBdev3", 00:30:36.762 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:36.762 "is_configured": true, 00:30:36.762 "data_offset": 2048, 00:30:36.762 "data_size": 63488 00:30:36.762 }, 00:30:36.762 { 00:30:36.762 "name": "BaseBdev4", 00:30:36.762 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:36.762 "is_configured": true, 00:30:36.762 "data_offset": 2048, 00:30:36.762 "data_size": 63488 00:30:36.762 } 00:30:36.762 ] 00:30:36.762 }' 00:30:36.762 21:44:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:36.762 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:36.762 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:36.762 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:36.762 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.762 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:37.056 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:37.056 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:37.056 [2024-07-15 21:44:10.429184] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:37.313 "name": "raid_bdev1", 00:30:37.313 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:37.313 "strip_size_kb": 0, 00:30:37.313 "state": "online", 00:30:37.313 "raid_level": "raid1", 00:30:37.313 "superblock": true, 00:30:37.313 "num_base_bdevs": 4, 00:30:37.313 "num_base_bdevs_discovered": 2, 00:30:37.313 "num_base_bdevs_operational": 2, 00:30:37.313 "base_bdevs_list": [ 00:30:37.313 { 00:30:37.313 "name": null, 00:30:37.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.313 "is_configured": false, 00:30:37.313 "data_offset": 2048, 00:30:37.313 "data_size": 63488 00:30:37.313 }, 00:30:37.313 { 00:30:37.313 "name": null, 00:30:37.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.313 "is_configured": false, 00:30:37.313 "data_offset": 2048, 00:30:37.313 "data_size": 63488 00:30:37.313 }, 00:30:37.313 { 00:30:37.313 "name": "BaseBdev3", 00:30:37.313 "uuid": 
"62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:37.313 "is_configured": true, 00:30:37.313 "data_offset": 2048, 00:30:37.313 "data_size": 63488 00:30:37.313 }, 00:30:37.313 { 00:30:37.313 "name": "BaseBdev4", 00:30:37.313 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:37.313 "is_configured": true, 00:30:37.313 "data_offset": 2048, 00:30:37.313 "data_size": 63488 00:30:37.313 } 00:30:37.313 ] 00:30:37.313 }' 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:37.313 21:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.244 21:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:38.244 [2024-07-15 21:44:11.435485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:38.244 [2024-07-15 21:44:11.435750] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:38.244 [2024-07-15 21:44:11.435770] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:38.244 [2024-07-15 21:44:11.435841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:38.244 [2024-07-15 21:44:11.451397] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5560 00:30:38.244 [2024-07-15 21:44:11.453498] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:38.244 21:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:39.176 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:39.176 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:39.176 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:39.176 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:39.176 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:39.176 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.176 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.435 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:39.435 "name": "raid_bdev1", 00:30:39.435 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:39.435 "strip_size_kb": 0, 00:30:39.435 "state": "online", 00:30:39.435 "raid_level": "raid1", 00:30:39.435 "superblock": true, 00:30:39.435 "num_base_bdevs": 4, 00:30:39.435 "num_base_bdevs_discovered": 3, 00:30:39.435 "num_base_bdevs_operational": 3, 00:30:39.435 "process": { 00:30:39.435 "type": "rebuild", 00:30:39.435 "target": "spare", 00:30:39.435 "progress": { 00:30:39.435 "blocks": 22528, 00:30:39.435 "percent": 35 00:30:39.435 } 00:30:39.435 }, 00:30:39.435 "base_bdevs_list": [ 00:30:39.435 { 00:30:39.435 "name": "spare", 00:30:39.435 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:39.435 "is_configured": true, 00:30:39.435 "data_offset": 2048, 00:30:39.435 "data_size": 63488 00:30:39.435 }, 00:30:39.435 { 00:30:39.435 "name": null, 
00:30:39.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.435 "is_configured": false, 00:30:39.435 "data_offset": 2048, 00:30:39.435 "data_size": 63488 00:30:39.435 }, 00:30:39.435 { 00:30:39.435 "name": "BaseBdev3", 00:30:39.435 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:39.435 "is_configured": true, 00:30:39.435 "data_offset": 2048, 00:30:39.435 "data_size": 63488 00:30:39.435 }, 00:30:39.435 { 00:30:39.435 "name": "BaseBdev4", 00:30:39.435 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:39.435 "is_configured": true, 00:30:39.435 "data_offset": 2048, 00:30:39.435 "data_size": 63488 00:30:39.435 } 00:30:39.435 ] 00:30:39.435 }' 00:30:39.435 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:39.435 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:39.435 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:39.435 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:39.435 21:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:39.693 [2024-07-15 21:44:12.949380] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:39.693 [2024-07-15 21:44:12.967265] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:39.693 [2024-07-15 21:44:12.967346] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:39.693 [2024-07-15 21:44:12.967365] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:39.693 [2024-07-15 21:44:12.967372] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.693 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.950 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.950 "name": "raid_bdev1", 00:30:39.950 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:39.950 
"strip_size_kb": 0, 00:30:39.950 "state": "online", 00:30:39.950 "raid_level": "raid1", 00:30:39.950 "superblock": true, 00:30:39.950 "num_base_bdevs": 4, 00:30:39.950 "num_base_bdevs_discovered": 2, 00:30:39.950 "num_base_bdevs_operational": 2, 00:30:39.950 "base_bdevs_list": [ 00:30:39.950 { 00:30:39.950 "name": null, 00:30:39.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.950 "is_configured": false, 00:30:39.950 "data_offset": 2048, 00:30:39.950 "data_size": 63488 00:30:39.950 }, 00:30:39.950 { 00:30:39.950 "name": null, 00:30:39.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.950 "is_configured": false, 00:30:39.950 "data_offset": 2048, 00:30:39.950 "data_size": 63488 00:30:39.950 }, 00:30:39.950 { 00:30:39.950 "name": "BaseBdev3", 00:30:39.950 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:39.950 "is_configured": true, 00:30:39.950 "data_offset": 2048, 00:30:39.950 "data_size": 63488 00:30:39.950 }, 00:30:39.951 { 00:30:39.951 "name": "BaseBdev4", 00:30:39.951 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:39.951 "is_configured": true, 00:30:39.951 "data_offset": 2048, 00:30:39.951 "data_size": 63488 00:30:39.951 } 00:30:39.951 ] 00:30:39.951 }' 00:30:39.951 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.951 21:44:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.515 21:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:40.773 [2024-07-15 21:44:14.051334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:40.773 [2024-07-15 21:44:14.051455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:40.773 [2024-07-15 21:44:14.051495] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:30:40.773 [2024-07-15 21:44:14.051514] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:40.773 [2024-07-15 21:44:14.051997] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:40.773 [2024-07-15 21:44:14.052035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:40.773 [2024-07-15 21:44:14.052161] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:40.773 [2024-07-15 21:44:14.052181] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:40.773 [2024-07-15 21:44:14.052189] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:40.773 [2024-07-15 21:44:14.052217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:40.773 [2024-07-15 21:44:14.066683] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc58a0 00:30:40.773 spare 00:30:40.773 [2024-07-15 21:44:14.068544] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:40.773 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:42.149 "name": "raid_bdev1", 00:30:42.149 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:42.149 "strip_size_kb": 0, 00:30:42.149 "state": "online", 00:30:42.149 "raid_level": "raid1", 00:30:42.149 "superblock": true, 00:30:42.149 "num_base_bdevs": 4, 00:30:42.149 "num_base_bdevs_discovered": 3, 00:30:42.149 "num_base_bdevs_operational": 3, 00:30:42.149 "process": { 00:30:42.149 "type": "rebuild", 00:30:42.149 "target": "spare", 00:30:42.149 "progress": { 00:30:42.149 "blocks": 22528, 00:30:42.149 "percent": 35 00:30:42.149 } 00:30:42.149 }, 00:30:42.149 "base_bdevs_list": [ 00:30:42.149 { 00:30:42.149 "name": "spare", 00:30:42.149 "uuid": "72a07576-0301-537d-8cfa-aa3f300f50df", 00:30:42.149 "is_configured": true, 00:30:42.149 "data_offset": 2048, 00:30:42.149 "data_size": 63488 00:30:42.149 }, 00:30:42.149 { 00:30:42.149 "name": null, 00:30:42.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.149 "is_configured": false, 00:30:42.149 "data_offset": 2048, 00:30:42.149 "data_size": 63488 00:30:42.149 }, 00:30:42.149 { 00:30:42.149 "name": "BaseBdev3", 00:30:42.149 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:42.149 "is_configured": true, 00:30:42.149 "data_offset": 2048, 00:30:42.149 "data_size": 63488 00:30:42.149 }, 00:30:42.149 { 00:30:42.149 "name": "BaseBdev4", 00:30:42.149 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:42.149 "is_configured": true, 00:30:42.149 "data_offset": 2048, 00:30:42.149 "data_size": 63488 00:30:42.149 } 00:30:42.149 ] 00:30:42.149 }' 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:42.149 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:42.407 [2024-07-15 21:44:15.636875] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:42.407 [2024-07-15 21:44:15.676671] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:42.407 [2024-07-15 21:44:15.676733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:42.407 [2024-07-15 21:44:15.676765] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:42.407 [2024-07-15 21:44:15.676771] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.407 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.665 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:42.665 "name": "raid_bdev1", 00:30:42.665 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:42.665 "strip_size_kb": 0, 00:30:42.665 "state": "online", 00:30:42.665 "raid_level": "raid1", 00:30:42.665 "superblock": true, 00:30:42.665 "num_base_bdevs": 4, 00:30:42.665 "num_base_bdevs_discovered": 2, 00:30:42.665 "num_base_bdevs_operational": 2, 00:30:42.665 "base_bdevs_list": [ 00:30:42.665 { 00:30:42.665 "name": null, 00:30:42.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.665 "is_configured": false, 00:30:42.665 "data_offset": 2048, 00:30:42.665 "data_size": 63488 00:30:42.665 }, 00:30:42.665 { 00:30:42.665 "name": null, 00:30:42.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.665 "is_configured": false, 00:30:42.665 "data_offset": 2048, 00:30:42.665 "data_size": 63488 00:30:42.665 }, 00:30:42.665 { 00:30:42.665 "name": "BaseBdev3", 00:30:42.665 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:42.665 "is_configured": true, 00:30:42.665 "data_offset": 2048, 00:30:42.665 "data_size": 63488 00:30:42.665 }, 00:30:42.665 { 00:30:42.665 "name": "BaseBdev4", 00:30:42.665 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:42.665 "is_configured": true, 00:30:42.665 "data_offset": 2048, 00:30:42.665 "data_size": 63488 00:30:42.665 } 00:30:42.665 ] 00:30:42.665 }' 
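Deleting the spare a second time at @766, again while it is the rebuild target, produces the same "Finished rebuild ... No such device" and "Failed to remove target bdev" diagnostics as before, and verify_raid_bdev_state raid_bdev1 online raid1 0 2 at @767 confirms the outcome captured in the dump above: the raid1 bdev stays online with only two of its four base bdevs configured (the two all-zero uuid slots) and strip size 0, since raid1 does not stripe. The same fields can be read straight off the RPC output with a jq filter modelled on the ones in the trace; this one-liner is illustrative only and is not part of the test script.

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'

At this point in the run it would print "online raid1 2/2".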
00:30:42.665 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:42.665 21:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.231 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:43.231 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:43.231 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:43.231 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:43.231 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:43.231 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.231 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.490 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:43.490 "name": "raid_bdev1", 00:30:43.490 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:43.490 "strip_size_kb": 0, 00:30:43.490 "state": "online", 00:30:43.490 "raid_level": "raid1", 00:30:43.490 "superblock": true, 00:30:43.490 "num_base_bdevs": 4, 00:30:43.490 "num_base_bdevs_discovered": 2, 00:30:43.490 "num_base_bdevs_operational": 2, 00:30:43.490 "base_bdevs_list": [ 00:30:43.490 { 00:30:43.490 "name": null, 00:30:43.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.490 "is_configured": false, 00:30:43.490 "data_offset": 2048, 00:30:43.490 "data_size": 63488 00:30:43.490 }, 00:30:43.490 { 00:30:43.490 "name": null, 00:30:43.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.490 "is_configured": false, 00:30:43.490 "data_offset": 2048, 00:30:43.490 "data_size": 63488 00:30:43.490 }, 00:30:43.490 { 00:30:43.490 "name": "BaseBdev3", 00:30:43.490 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:43.490 "is_configured": true, 00:30:43.490 "data_offset": 2048, 00:30:43.490 "data_size": 63488 00:30:43.490 }, 00:30:43.490 { 00:30:43.490 "name": "BaseBdev4", 00:30:43.490 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:43.490 "is_configured": true, 00:30:43.490 "data_offset": 2048, 00:30:43.490 "data_size": 63488 00:30:43.490 } 00:30:43.490 ] 00:30:43.490 }' 00:30:43.490 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:43.490 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:43.490 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:43.490 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:43.490 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:43.748 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:44.007 [2024-07-15 21:44:17.170288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:44.007 [2024-07-15 21:44:17.170372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
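The bdev_passthru_delete / bdev_passthru_create pair traced just above is how the test hands a removed base device back: BaseBdev1 is torn down as a passthru vbdev and then re-created on top of the surviving BaseBdev1_malloc, which is what triggers the vbdev_passthru_register notices and the raid superblock examine messages that follow. A stripped-down sketch of those two RPCs, using the same socket and bdev names as the trace (the RPC shell variable is shorthand introduced here, not part of the script):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Remove the passthru vbdev; the malloc bdev underneath stays intact.
  $RPC bdev_passthru_delete BaseBdev1
  # Re-create it over the same malloc bdev; examine then spots the stale raid
  # superblock (seq_number 1 vs 6 in the debug lines below), so the bdev is
  # not automatically re-added to raid_bdev1.
  $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1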
00:30:44.007 [2024-07-15 21:44:17.170419] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:30:44.007 [2024-07-15 21:44:17.170442] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.007 [2024-07-15 21:44:17.170923] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.007 [2024-07-15 21:44:17.170957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:44.007 [2024-07-15 21:44:17.171069] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:44.007 [2024-07-15 21:44:17.171088] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:44.007 [2024-07-15 21:44:17.171094] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:44.007 BaseBdev1 00:30:44.007 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:44.944 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:44.945 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:44.945 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.204 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:45.204 "name": "raid_bdev1", 00:30:45.204 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:45.204 "strip_size_kb": 0, 00:30:45.204 "state": "online", 00:30:45.204 "raid_level": "raid1", 00:30:45.204 "superblock": true, 00:30:45.204 "num_base_bdevs": 4, 00:30:45.204 "num_base_bdevs_discovered": 2, 00:30:45.204 "num_base_bdevs_operational": 2, 00:30:45.204 "base_bdevs_list": [ 00:30:45.204 { 00:30:45.204 "name": null, 00:30:45.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.204 "is_configured": false, 00:30:45.204 "data_offset": 2048, 00:30:45.204 "data_size": 63488 00:30:45.204 }, 00:30:45.204 { 00:30:45.204 "name": null, 00:30:45.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.204 "is_configured": false, 00:30:45.204 "data_offset": 2048, 00:30:45.204 "data_size": 63488 00:30:45.204 }, 00:30:45.204 { 00:30:45.204 "name": "BaseBdev3", 00:30:45.204 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:45.204 "is_configured": 
true, 00:30:45.204 "data_offset": 2048, 00:30:45.204 "data_size": 63488 00:30:45.204 }, 00:30:45.204 { 00:30:45.204 "name": "BaseBdev4", 00:30:45.204 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:45.204 "is_configured": true, 00:30:45.204 "data_offset": 2048, 00:30:45.204 "data_size": 63488 00:30:45.204 } 00:30:45.204 ] 00:30:45.204 }' 00:30:45.204 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:45.204 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.772 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:45.772 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:45.772 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:45.772 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:45.772 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:45.772 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.772 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.031 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:46.031 "name": "raid_bdev1", 00:30:46.031 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:46.031 "strip_size_kb": 0, 00:30:46.031 "state": "online", 00:30:46.031 "raid_level": "raid1", 00:30:46.031 "superblock": true, 00:30:46.031 "num_base_bdevs": 4, 00:30:46.031 "num_base_bdevs_discovered": 2, 00:30:46.031 "num_base_bdevs_operational": 2, 00:30:46.031 "base_bdevs_list": [ 00:30:46.031 { 00:30:46.031 "name": null, 00:30:46.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.031 "is_configured": false, 00:30:46.031 "data_offset": 2048, 00:30:46.031 "data_size": 63488 00:30:46.031 }, 00:30:46.031 { 00:30:46.031 "name": null, 00:30:46.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.031 "is_configured": false, 00:30:46.031 "data_offset": 2048, 00:30:46.031 "data_size": 63488 00:30:46.032 }, 00:30:46.032 { 00:30:46.032 "name": "BaseBdev3", 00:30:46.032 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:46.032 "is_configured": true, 00:30:46.032 "data_offset": 2048, 00:30:46.032 "data_size": 63488 00:30:46.032 }, 00:30:46.032 { 00:30:46.032 "name": "BaseBdev4", 00:30:46.032 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:46.032 "is_configured": true, 00:30:46.032 "data_offset": 2048, 00:30:46.032 "data_size": 63488 00:30:46.032 } 00:30:46.032 ] 00:30:46.032 }' 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:46.032 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:46.291 [2024-07-15 21:44:19.490346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:46.291 [2024-07-15 21:44:19.490493] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:46.291 [2024-07-15 21:44:19.490504] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:46.291 request: 00:30:46.291 { 00:30:46.291 "base_bdev": "BaseBdev1", 00:30:46.291 "raid_bdev": "raid_bdev1", 00:30:46.291 "method": "bdev_raid_add_base_bdev", 00:30:46.291 "req_id": 1 00:30:46.291 } 00:30:46.291 Got JSON-RPC error response 00:30:46.291 response: 00:30:46.291 { 00:30:46.291 "code": -22, 00:30:46.291 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:46.291 } 00:30:46.291 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:30:46.291 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:46.291 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:46.291 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:46.291 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.229 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.488 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:47.488 "name": "raid_bdev1", 00:30:47.488 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:47.488 "strip_size_kb": 0, 00:30:47.488 "state": "online", 00:30:47.488 "raid_level": "raid1", 00:30:47.488 "superblock": true, 00:30:47.488 "num_base_bdevs": 4, 00:30:47.488 "num_base_bdevs_discovered": 2, 00:30:47.488 "num_base_bdevs_operational": 2, 00:30:47.488 "base_bdevs_list": [ 00:30:47.488 { 00:30:47.488 "name": null, 00:30:47.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.488 "is_configured": false, 00:30:47.488 "data_offset": 2048, 00:30:47.488 "data_size": 63488 00:30:47.488 }, 00:30:47.488 { 00:30:47.488 "name": null, 00:30:47.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.488 "is_configured": false, 00:30:47.488 "data_offset": 2048, 00:30:47.488 "data_size": 63488 00:30:47.488 }, 00:30:47.488 { 00:30:47.488 "name": "BaseBdev3", 00:30:47.488 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:47.488 "is_configured": true, 00:30:47.488 "data_offset": 2048, 00:30:47.488 "data_size": 63488 00:30:47.488 }, 00:30:47.488 { 00:30:47.488 "name": "BaseBdev4", 00:30:47.488 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:47.488 "is_configured": true, 00:30:47.488 "data_offset": 2048, 00:30:47.488 "data_size": 63488 00:30:47.488 } 00:30:47.488 ] 00:30:47.488 }' 00:30:47.488 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:47.488 21:44:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.056 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:48.056 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:48.056 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:48.056 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:48.056 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:48.056 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.056 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.315 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.315 "name": "raid_bdev1", 00:30:48.315 "uuid": "8bd853e4-f6a9-4ea6-8f27-91ee698c8903", 00:30:48.315 "strip_size_kb": 0, 00:30:48.315 "state": "online", 00:30:48.315 "raid_level": "raid1", 00:30:48.316 "superblock": 
true, 00:30:48.316 "num_base_bdevs": 4, 00:30:48.316 "num_base_bdevs_discovered": 2, 00:30:48.316 "num_base_bdevs_operational": 2, 00:30:48.316 "base_bdevs_list": [ 00:30:48.316 { 00:30:48.316 "name": null, 00:30:48.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.316 "is_configured": false, 00:30:48.316 "data_offset": 2048, 00:30:48.316 "data_size": 63488 00:30:48.316 }, 00:30:48.316 { 00:30:48.316 "name": null, 00:30:48.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.316 "is_configured": false, 00:30:48.316 "data_offset": 2048, 00:30:48.316 "data_size": 63488 00:30:48.316 }, 00:30:48.316 { 00:30:48.316 "name": "BaseBdev3", 00:30:48.316 "uuid": "62b30b43-e76a-5b56-adf9-3668bd10e275", 00:30:48.316 "is_configured": true, 00:30:48.316 "data_offset": 2048, 00:30:48.316 "data_size": 63488 00:30:48.316 }, 00:30:48.316 { 00:30:48.316 "name": "BaseBdev4", 00:30:48.316 "uuid": "b46f5947-26e6-578d-9cb4-9f27f555f99b", 00:30:48.316 "is_configured": true, 00:30:48.316 "data_offset": 2048, 00:30:48.316 "data_size": 63488 00:30:48.316 } 00:30:48.316 ] 00:30:48.316 }' 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 148904 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 148904 ']' 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 148904 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:48.316 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148904 00:30:48.575 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:48.575 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:48.575 killing process with pid 148904 00:30:48.575 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148904' 00:30:48.575 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 148904 00:30:48.575 Received shutdown signal, test time was about 60.000000 seconds 00:30:48.575 00:30:48.575 Latency(us) 00:30:48.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.575 =================================================================================================================== 00:30:48.575 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:48.575 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 148904 00:30:48.575 [2024-07-15 21:44:21.693849] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:48.575 [2024-07-15 21:44:21.693970] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:48.575 [2024-07-15 21:44:21.694039] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:30:48.575 [2024-07-15 21:44:21.694050] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:30:48.833 [2024-07-15 21:44:22.181893] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:30:50.211 ************************************ 00:30:50.211 END TEST raid_rebuild_test_sb 00:30:50.211 ************************************ 00:30:50.211 00:30:50.211 real 0m35.985s 00:30:50.211 user 0m53.065s 00:30:50.211 sys 0m4.665s 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.211 21:44:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:50.211 21:44:23 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:30:50.211 21:44:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:50.211 21:44:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:50.211 21:44:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:50.211 ************************************ 00:30:50.211 START TEST raid_rebuild_test_io 00:30:50.211 ************************************ 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=149881 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 149881 /var/tmp/spdk-raid.sock 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 149881 ']' 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:50.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:50.211 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:50.211 [2024-07-15 21:44:23.541088] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:30:50.211 [2024-07-15 21:44:23.541233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149881 ] 00:30:50.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:50.211 Zero copy mechanism will not be used. 
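The bdevperf command expanded above supplies the background I/O for raid_rebuild_test_io. Read together with the notices that follow (flag readings inferred from this log, not from bdevperf's help text), its options amount to a 60-second random read/write run (-t 60, -w randrw, -M 50 for a 50/50 mix) issuing 3 MiB I/Os at queue depth 2 (-o 3M -q 2, exactly the 3145728-byte size the zero-copy notice reports), pointed at raid_bdev1 over the /var/tmp/spdk-raid.sock RPC socket, with bdev_raid debug logging enabled via -L bdev_raid. Because it is started with -z it sits idle until the later bdevperf.py perform_tests call. A trimmed launch sketch under those assumptions (capturing the pid with $! is illustrative; the trace only shows the already-expanded raid_pid=149881):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # Start bdevperf idle (-z); the workload is kicked off later via
  # examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests.
  $BDEVPERF -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
            -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!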
00:30:50.469 [2024-07-15 21:44:23.684919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.727 [2024-07-15 21:44:23.882293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.727 [2024-07-15 21:44:24.074829] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:51.293 21:44:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:51.293 21:44:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:30:51.293 21:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:51.293 21:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:51.293 BaseBdev1_malloc 00:30:51.293 21:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:51.551 [2024-07-15 21:44:24.788426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:51.551 [2024-07-15 21:44:24.788541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:51.551 [2024-07-15 21:44:24.788612] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:30:51.551 [2024-07-15 21:44:24.788640] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:51.551 [2024-07-15 21:44:24.790714] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:51.551 [2024-07-15 21:44:24.790764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:51.551 BaseBdev1 00:30:51.551 21:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:51.551 21:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:51.809 BaseBdev2_malloc 00:30:51.809 21:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:52.067 [2024-07-15 21:44:25.230285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:52.067 [2024-07-15 21:44:25.230391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.067 [2024-07-15 21:44:25.230426] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:30:52.067 [2024-07-15 21:44:25.230448] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.067 [2024-07-15 21:44:25.232635] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.067 [2024-07-15 21:44:25.232683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:52.067 BaseBdev2 00:30:52.067 21:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:52.067 21:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:52.067 BaseBdev3_malloc 00:30:52.325 21:44:25 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:52.325 [2024-07-15 21:44:25.635983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:52.326 [2024-07-15 21:44:25.636067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.326 [2024-07-15 21:44:25.636096] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:30:52.326 [2024-07-15 21:44:25.636116] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.326 [2024-07-15 21:44:25.638217] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.326 [2024-07-15 21:44:25.638270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:52.326 BaseBdev3 00:30:52.326 21:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:52.326 21:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:52.583 BaseBdev4_malloc 00:30:52.583 21:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:52.842 [2024-07-15 21:44:26.068749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:52.842 [2024-07-15 21:44:26.068843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.842 [2024-07-15 21:44:26.068874] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:52.842 [2024-07-15 21:44:26.068896] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.842 [2024-07-15 21:44:26.071004] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.842 [2024-07-15 21:44:26.071055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:52.842 BaseBdev4 00:30:52.842 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:53.100 spare_malloc 00:30:53.100 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:53.357 spare_delay 00:30:53.357 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:53.614 [2024-07-15 21:44:26.738337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:53.614 [2024-07-15 21:44:26.738433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:53.614 [2024-07-15 21:44:26.738462] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:30:53.614 [2024-07-15 21:44:26.738489] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:53.614 [2024-07-15 21:44:26.740561] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:53.614 [2024-07-15 
21:44:26.740612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:53.614 spare 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:53.614 [2024-07-15 21:44:26.930047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:53.614 [2024-07-15 21:44:26.931688] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:53.614 [2024-07-15 21:44:26.931769] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:53.614 [2024-07-15 21:44:26.931812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:53.614 [2024-07-15 21:44:26.931895] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:30:53.614 [2024-07-15 21:44:26.931920] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:53.614 [2024-07-15 21:44:26.932086] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:53.614 [2024-07-15 21:44:26.932374] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:30:53.614 [2024-07-15 21:44:26.932392] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:30:53.614 [2024-07-15 21:44:26.932544] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.614 21:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.872 21:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:53.872 "name": "raid_bdev1", 00:30:53.872 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:30:53.872 "strip_size_kb": 0, 00:30:53.872 "state": "online", 00:30:53.872 "raid_level": "raid1", 00:30:53.872 "superblock": false, 00:30:53.872 "num_base_bdevs": 4, 00:30:53.872 "num_base_bdevs_discovered": 4, 00:30:53.872 "num_base_bdevs_operational": 4, 00:30:53.872 "base_bdevs_list": [ 00:30:53.872 { 
00:30:53.872 "name": "BaseBdev1", 00:30:53.872 "uuid": "d66ba697-0c22-5e2d-aad9-a6c55fbb7a6a", 00:30:53.872 "is_configured": true, 00:30:53.872 "data_offset": 0, 00:30:53.872 "data_size": 65536 00:30:53.872 }, 00:30:53.872 { 00:30:53.872 "name": "BaseBdev2", 00:30:53.872 "uuid": "444a7e20-652a-5f69-aa20-58d1d5d87ad7", 00:30:53.872 "is_configured": true, 00:30:53.872 "data_offset": 0, 00:30:53.872 "data_size": 65536 00:30:53.872 }, 00:30:53.872 { 00:30:53.872 "name": "BaseBdev3", 00:30:53.872 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:30:53.872 "is_configured": true, 00:30:53.872 "data_offset": 0, 00:30:53.872 "data_size": 65536 00:30:53.872 }, 00:30:53.872 { 00:30:53.872 "name": "BaseBdev4", 00:30:53.872 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:30:53.872 "is_configured": true, 00:30:53.872 "data_offset": 0, 00:30:53.872 "data_size": 65536 00:30:53.872 } 00:30:53.872 ] 00:30:53.872 }' 00:30:53.872 21:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:53.872 21:44:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:54.438 21:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:54.438 21:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:54.696 [2024-07-15 21:44:27.912530] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:54.696 21:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:30:54.696 21:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.696 21:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:54.955 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:30:54.955 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:30:54.955 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:54.955 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:54.955 [2024-07-15 21:44:28.202098] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:54.955 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:54.955 Zero copy mechanism will not be used. 00:30:54.955 Running I/O for 60 seconds... 
00:30:54.955 [2024-07-15 21:44:28.306405] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:54.955 [2024-07-15 21:44:28.306633] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.213 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.471 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:55.471 "name": "raid_bdev1", 00:30:55.471 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:30:55.471 "strip_size_kb": 0, 00:30:55.471 "state": "online", 00:30:55.471 "raid_level": "raid1", 00:30:55.471 "superblock": false, 00:30:55.471 "num_base_bdevs": 4, 00:30:55.471 "num_base_bdevs_discovered": 3, 00:30:55.471 "num_base_bdevs_operational": 3, 00:30:55.471 "base_bdevs_list": [ 00:30:55.471 { 00:30:55.471 "name": null, 00:30:55.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.471 "is_configured": false, 00:30:55.471 "data_offset": 0, 00:30:55.471 "data_size": 65536 00:30:55.471 }, 00:30:55.471 { 00:30:55.471 "name": "BaseBdev2", 00:30:55.471 "uuid": "444a7e20-652a-5f69-aa20-58d1d5d87ad7", 00:30:55.471 "is_configured": true, 00:30:55.471 "data_offset": 0, 00:30:55.471 "data_size": 65536 00:30:55.471 }, 00:30:55.471 { 00:30:55.471 "name": "BaseBdev3", 00:30:55.471 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:30:55.471 "is_configured": true, 00:30:55.471 "data_offset": 0, 00:30:55.471 "data_size": 65536 00:30:55.471 }, 00:30:55.471 { 00:30:55.471 "name": "BaseBdev4", 00:30:55.471 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:30:55.471 "is_configured": true, 00:30:55.471 "data_offset": 0, 00:30:55.471 "data_size": 65536 00:30:55.471 } 00:30:55.471 ] 00:30:55.471 }' 00:30:55.471 21:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:55.471 21:44:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:56.037 21:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:56.296 [2024-07-15 21:44:29.440975] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:30:56.296 21:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:56.296 [2024-07-15 21:44:29.517365] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:56.296 [2024-07-15 21:44:29.519238] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:56.296 [2024-07-15 21:44:29.644327] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:56.296 [2024-07-15 21:44:29.644888] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:56.555 [2024-07-15 21:44:29.772242] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:56.555 [2024-07-15 21:44:29.772560] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:56.813 [2024-07-15 21:44:30.104758] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:56.813 [2024-07-15 21:44:30.105189] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:57.071 [2024-07-15 21:44:30.313665] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:57.071 [2024-07-15 21:44:30.314303] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.331 [2024-07-15 21:44:30.672258] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:57.331 "name": "raid_bdev1", 00:30:57.331 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:30:57.331 "strip_size_kb": 0, 00:30:57.331 "state": "online", 00:30:57.331 "raid_level": "raid1", 00:30:57.331 "superblock": false, 00:30:57.331 "num_base_bdevs": 4, 00:30:57.331 "num_base_bdevs_discovered": 4, 00:30:57.331 "num_base_bdevs_operational": 4, 00:30:57.331 "process": { 00:30:57.331 "type": "rebuild", 00:30:57.331 "target": "spare", 00:30:57.331 "progress": { 00:30:57.331 "blocks": 14336, 00:30:57.331 "percent": 21 00:30:57.331 } 00:30:57.331 }, 00:30:57.331 "base_bdevs_list": [ 00:30:57.331 { 00:30:57.331 "name": "spare", 00:30:57.331 "uuid": "f3027f1b-14a9-5015-9731-89ec89a1bea7", 00:30:57.331 "is_configured": true, 00:30:57.331 
"data_offset": 0, 00:30:57.331 "data_size": 65536 00:30:57.331 }, 00:30:57.331 { 00:30:57.331 "name": "BaseBdev2", 00:30:57.331 "uuid": "444a7e20-652a-5f69-aa20-58d1d5d87ad7", 00:30:57.331 "is_configured": true, 00:30:57.331 "data_offset": 0, 00:30:57.331 "data_size": 65536 00:30:57.331 }, 00:30:57.331 { 00:30:57.331 "name": "BaseBdev3", 00:30:57.331 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:30:57.331 "is_configured": true, 00:30:57.331 "data_offset": 0, 00:30:57.331 "data_size": 65536 00:30:57.331 }, 00:30:57.331 { 00:30:57.331 "name": "BaseBdev4", 00:30:57.331 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:30:57.331 "is_configured": true, 00:30:57.331 "data_offset": 0, 00:30:57.331 "data_size": 65536 00:30:57.331 } 00:30:57.331 ] 00:30:57.331 }' 00:30:57.331 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:57.590 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:57.591 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:57.591 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:57.591 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:57.591 [2024-07-15 21:44:30.891181] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:57.591 [2024-07-15 21:44:30.891930] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:57.851 [2024-07-15 21:44:30.985396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:57.851 [2024-07-15 21:44:31.000700] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:57.851 [2024-07-15 21:44:31.103121] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:57.851 [2024-07-15 21:44:31.106331] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:57.851 [2024-07-15 21:44:31.106374] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:57.851 [2024-07-15 21:44:31.106384] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:57.851 [2024-07-15 21:44:31.130157] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.851 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.110 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.110 "name": "raid_bdev1", 00:30:58.110 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:30:58.110 "strip_size_kb": 0, 00:30:58.110 "state": "online", 00:30:58.110 "raid_level": "raid1", 00:30:58.110 "superblock": false, 00:30:58.110 "num_base_bdevs": 4, 00:30:58.110 "num_base_bdevs_discovered": 3, 00:30:58.110 "num_base_bdevs_operational": 3, 00:30:58.110 "base_bdevs_list": [ 00:30:58.110 { 00:30:58.110 "name": null, 00:30:58.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.110 "is_configured": false, 00:30:58.110 "data_offset": 0, 00:30:58.110 "data_size": 65536 00:30:58.110 }, 00:30:58.110 { 00:30:58.110 "name": "BaseBdev2", 00:30:58.110 "uuid": "444a7e20-652a-5f69-aa20-58d1d5d87ad7", 00:30:58.110 "is_configured": true, 00:30:58.110 "data_offset": 0, 00:30:58.110 "data_size": 65536 00:30:58.110 }, 00:30:58.110 { 00:30:58.110 "name": "BaseBdev3", 00:30:58.110 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:30:58.110 "is_configured": true, 00:30:58.110 "data_offset": 0, 00:30:58.110 "data_size": 65536 00:30:58.110 }, 00:30:58.110 { 00:30:58.110 "name": "BaseBdev4", 00:30:58.110 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:30:58.110 "is_configured": true, 00:30:58.110 "data_offset": 0, 00:30:58.110 "data_size": 65536 00:30:58.110 } 00:30:58.110 ] 00:30:58.110 }' 00:30:58.110 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.110 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:58.678 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:58.678 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:58.678 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:58.678 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:58.678 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:58.678 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.678 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.938 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:58.938 "name": "raid_bdev1", 00:30:58.938 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:30:58.938 "strip_size_kb": 0, 00:30:58.938 "state": "online", 00:30:58.938 "raid_level": "raid1", 00:30:58.938 "superblock": false, 00:30:58.938 "num_base_bdevs": 4, 00:30:58.938 "num_base_bdevs_discovered": 3, 00:30:58.938 "num_base_bdevs_operational": 3, 00:30:58.938 "base_bdevs_list": [ 00:30:58.938 { 00:30:58.938 "name": null, 00:30:58.938 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:58.938 "is_configured": false, 00:30:58.938 "data_offset": 0, 00:30:58.938 "data_size": 65536 00:30:58.938 }, 00:30:58.938 { 00:30:58.938 "name": "BaseBdev2", 00:30:58.938 "uuid": "444a7e20-652a-5f69-aa20-58d1d5d87ad7", 00:30:58.938 "is_configured": true, 00:30:58.938 "data_offset": 0, 00:30:58.938 "data_size": 65536 00:30:58.938 }, 00:30:58.938 { 00:30:58.938 "name": "BaseBdev3", 00:30:58.938 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:30:58.938 "is_configured": true, 00:30:58.938 "data_offset": 0, 00:30:58.938 "data_size": 65536 00:30:58.938 }, 00:30:58.938 { 00:30:58.938 "name": "BaseBdev4", 00:30:58.938 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:30:58.938 "is_configured": true, 00:30:58.938 "data_offset": 0, 00:30:58.938 "data_size": 65536 00:30:58.938 } 00:30:58.938 ] 00:30:58.938 }' 00:30:58.938 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:58.938 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:58.938 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:59.198 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:59.198 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:59.458 [2024-07-15 21:44:32.586120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:59.458 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:59.458 [2024-07-15 21:44:32.675189] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:59.458 [2024-07-15 21:44:32.677019] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:59.458 [2024-07-15 21:44:32.807449] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:59.458 [2024-07-15 21:44:32.808099] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:59.717 [2024-07-15 21:44:32.918770] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:59.717 [2024-07-15 21:44:32.919117] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:59.976 [2024-07-15 21:44:33.255069] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:59.976 [2024-07-15 21:44:33.256523] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:00.235 [2024-07-15 21:44:33.490128] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:00.492 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:00.492 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:00.492 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:00.492 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:31:00.492 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:00.492 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.492 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.492 [2024-07-15 21:44:33.728908] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:00.492 [2024-07-15 21:44:33.840797] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:00.750 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:00.750 "name": "raid_bdev1", 00:31:00.750 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:31:00.750 "strip_size_kb": 0, 00:31:00.750 "state": "online", 00:31:00.750 "raid_level": "raid1", 00:31:00.750 "superblock": false, 00:31:00.750 "num_base_bdevs": 4, 00:31:00.750 "num_base_bdevs_discovered": 4, 00:31:00.750 "num_base_bdevs_operational": 4, 00:31:00.750 "process": { 00:31:00.750 "type": "rebuild", 00:31:00.750 "target": "spare", 00:31:00.750 "progress": { 00:31:00.750 "blocks": 16384, 00:31:00.750 "percent": 25 00:31:00.750 } 00:31:00.750 }, 00:31:00.750 "base_bdevs_list": [ 00:31:00.750 { 00:31:00.750 "name": "spare", 00:31:00.750 "uuid": "f3027f1b-14a9-5015-9731-89ec89a1bea7", 00:31:00.750 "is_configured": true, 00:31:00.750 "data_offset": 0, 00:31:00.750 "data_size": 65536 00:31:00.750 }, 00:31:00.750 { 00:31:00.750 "name": "BaseBdev2", 00:31:00.750 "uuid": "444a7e20-652a-5f69-aa20-58d1d5d87ad7", 00:31:00.750 "is_configured": true, 00:31:00.750 "data_offset": 0, 00:31:00.750 "data_size": 65536 00:31:00.750 }, 00:31:00.750 { 00:31:00.750 "name": "BaseBdev3", 00:31:00.750 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:31:00.750 "is_configured": true, 00:31:00.750 "data_offset": 0, 00:31:00.750 "data_size": 65536 00:31:00.750 }, 00:31:00.750 { 00:31:00.750 "name": "BaseBdev4", 00:31:00.750 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:31:00.750 "is_configured": true, 00:31:00.750 "data_offset": 0, 00:31:00.750 "data_size": 65536 00:31:00.750 } 00:31:00.750 ] 00:31:00.750 }' 00:31:00.750 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:00.750 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:00.750 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:00.750 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:00.750 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:31:00.750 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:31:00.750 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:00.750 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:31:00.750 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:00.750 [2024-07-15 21:44:34.103101] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:01.008 [2024-07-15 21:44:34.189823] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:01.008 [2024-07-15 21:44:34.342601] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:31:01.008 [2024-07-15 21:44:34.342711] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.008 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.264 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:01.264 "name": "raid_bdev1", 00:31:01.264 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:31:01.264 "strip_size_kb": 0, 00:31:01.264 "state": "online", 00:31:01.264 "raid_level": "raid1", 00:31:01.264 "superblock": false, 00:31:01.264 "num_base_bdevs": 4, 00:31:01.264 "num_base_bdevs_discovered": 3, 00:31:01.264 "num_base_bdevs_operational": 3, 00:31:01.264 "process": { 00:31:01.264 "type": "rebuild", 00:31:01.264 "target": "spare", 00:31:01.264 "progress": { 00:31:01.264 "blocks": 24576, 00:31:01.264 "percent": 37 00:31:01.264 } 00:31:01.264 }, 00:31:01.264 "base_bdevs_list": [ 00:31:01.264 { 00:31:01.264 "name": "spare", 00:31:01.264 "uuid": "f3027f1b-14a9-5015-9731-89ec89a1bea7", 00:31:01.264 "is_configured": true, 00:31:01.264 "data_offset": 0, 00:31:01.264 "data_size": 65536 00:31:01.264 }, 00:31:01.264 { 00:31:01.264 "name": null, 00:31:01.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.264 "is_configured": false, 00:31:01.264 "data_offset": 0, 00:31:01.264 "data_size": 65536 00:31:01.264 }, 00:31:01.264 { 00:31:01.264 "name": "BaseBdev3", 00:31:01.264 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:31:01.264 "is_configured": true, 00:31:01.264 "data_offset": 0, 00:31:01.264 "data_size": 65536 00:31:01.264 }, 00:31:01.264 { 00:31:01.264 "name": "BaseBdev4", 00:31:01.264 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:31:01.264 "is_configured": true, 00:31:01.264 "data_offset": 0, 00:31:01.264 "data_size": 65536 00:31:01.264 } 00:31:01.264 ] 00:31:01.264 }' 00:31:01.264 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:01.264 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:01.264 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=927 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:01.521 "name": "raid_bdev1", 00:31:01.521 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:31:01.521 "strip_size_kb": 0, 00:31:01.521 "state": "online", 00:31:01.521 "raid_level": "raid1", 00:31:01.521 "superblock": false, 00:31:01.521 "num_base_bdevs": 4, 00:31:01.521 "num_base_bdevs_discovered": 3, 00:31:01.521 "num_base_bdevs_operational": 3, 00:31:01.521 "process": { 00:31:01.521 "type": "rebuild", 00:31:01.521 "target": "spare", 00:31:01.521 "progress": { 00:31:01.521 "blocks": 30720, 00:31:01.521 "percent": 46 00:31:01.521 } 00:31:01.521 }, 00:31:01.521 "base_bdevs_list": [ 00:31:01.521 { 00:31:01.521 "name": "spare", 00:31:01.521 "uuid": "f3027f1b-14a9-5015-9731-89ec89a1bea7", 00:31:01.521 "is_configured": true, 00:31:01.521 "data_offset": 0, 00:31:01.521 "data_size": 65536 00:31:01.521 }, 00:31:01.521 { 00:31:01.521 "name": null, 00:31:01.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.521 "is_configured": false, 00:31:01.521 "data_offset": 0, 00:31:01.521 "data_size": 65536 00:31:01.521 }, 00:31:01.521 { 00:31:01.521 "name": "BaseBdev3", 00:31:01.521 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:31:01.521 "is_configured": true, 00:31:01.521 "data_offset": 0, 00:31:01.521 "data_size": 65536 00:31:01.521 }, 00:31:01.521 { 00:31:01.521 "name": "BaseBdev4", 00:31:01.521 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:31:01.521 "is_configured": true, 00:31:01.521 "data_offset": 0, 00:31:01.521 "data_size": 65536 00:31:01.521 } 00:31:01.521 ] 00:31:01.521 }' 00:31:01.521 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:01.777 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:01.777 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:01.777 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:01.777 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:02.341 [2024-07-15 21:44:35.664327] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:31:02.906 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < 
timeout )) 00:31:02.906 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:02.906 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:02.906 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:02.906 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:02.906 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:02.906 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:02.906 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.906 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:02.906 "name": "raid_bdev1", 00:31:02.906 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:31:02.906 "strip_size_kb": 0, 00:31:02.906 "state": "online", 00:31:02.906 "raid_level": "raid1", 00:31:02.906 "superblock": false, 00:31:02.906 "num_base_bdevs": 4, 00:31:02.906 "num_base_bdevs_discovered": 3, 00:31:02.906 "num_base_bdevs_operational": 3, 00:31:02.906 "process": { 00:31:02.906 "type": "rebuild", 00:31:02.906 "target": "spare", 00:31:02.906 "progress": { 00:31:02.906 "blocks": 55296, 00:31:02.906 "percent": 84 00:31:02.906 } 00:31:02.906 }, 00:31:02.906 "base_bdevs_list": [ 00:31:02.906 { 00:31:02.906 "name": "spare", 00:31:02.906 "uuid": "f3027f1b-14a9-5015-9731-89ec89a1bea7", 00:31:02.906 "is_configured": true, 00:31:02.906 "data_offset": 0, 00:31:02.906 "data_size": 65536 00:31:02.906 }, 00:31:02.906 { 00:31:02.906 "name": null, 00:31:02.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.906 "is_configured": false, 00:31:02.906 "data_offset": 0, 00:31:02.906 "data_size": 65536 00:31:02.906 }, 00:31:02.906 { 00:31:02.906 "name": "BaseBdev3", 00:31:02.906 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:31:02.906 "is_configured": true, 00:31:02.906 "data_offset": 0, 00:31:02.906 "data_size": 65536 00:31:02.906 }, 00:31:02.906 { 00:31:02.906 "name": "BaseBdev4", 00:31:02.906 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:31:02.906 "is_configured": true, 00:31:02.906 "data_offset": 0, 00:31:02.906 "data_size": 65536 00:31:02.906 } 00:31:02.906 ] 00:31:02.906 }' 00:31:02.906 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:02.906 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:02.906 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:03.162 [2024-07-15 21:44:36.322832] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:31:03.162 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:03.162 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:03.420 [2024-07-15 21:44:36.652504] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:03.420 [2024-07-15 21:44:36.758008] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:03.420 [2024-07-15 21:44:36.762293] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:03.985 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:03.985 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:03.985 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:03.985 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:03.985 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:03.985 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:03.985 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.985 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.243 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:04.243 "name": "raid_bdev1", 00:31:04.243 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:31:04.243 "strip_size_kb": 0, 00:31:04.243 "state": "online", 00:31:04.243 "raid_level": "raid1", 00:31:04.243 "superblock": false, 00:31:04.243 "num_base_bdevs": 4, 00:31:04.243 "num_base_bdevs_discovered": 3, 00:31:04.243 "num_base_bdevs_operational": 3, 00:31:04.243 "base_bdevs_list": [ 00:31:04.243 { 00:31:04.243 "name": "spare", 00:31:04.243 "uuid": "f3027f1b-14a9-5015-9731-89ec89a1bea7", 00:31:04.243 "is_configured": true, 00:31:04.243 "data_offset": 0, 00:31:04.243 "data_size": 65536 00:31:04.243 }, 00:31:04.243 { 00:31:04.243 "name": null, 00:31:04.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.243 "is_configured": false, 00:31:04.243 "data_offset": 0, 00:31:04.243 "data_size": 65536 00:31:04.243 }, 00:31:04.243 { 00:31:04.243 "name": "BaseBdev3", 00:31:04.243 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:31:04.243 "is_configured": true, 00:31:04.243 "data_offset": 0, 00:31:04.243 "data_size": 65536 00:31:04.243 }, 00:31:04.243 { 00:31:04.243 "name": "BaseBdev4", 00:31:04.243 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:31:04.243 "is_configured": true, 00:31:04.243 "data_offset": 0, 00:31:04.243 "data_size": 65536 00:31:04.243 } 00:31:04.243 ] 00:31:04.243 }' 00:31:04.243 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:04.243 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:04.243 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:04.501 
21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:04.501 "name": "raid_bdev1", 00:31:04.501 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:31:04.501 "strip_size_kb": 0, 00:31:04.501 "state": "online", 00:31:04.501 "raid_level": "raid1", 00:31:04.501 "superblock": false, 00:31:04.501 "num_base_bdevs": 4, 00:31:04.501 "num_base_bdevs_discovered": 3, 00:31:04.501 "num_base_bdevs_operational": 3, 00:31:04.501 "base_bdevs_list": [ 00:31:04.501 { 00:31:04.501 "name": "spare", 00:31:04.501 "uuid": "f3027f1b-14a9-5015-9731-89ec89a1bea7", 00:31:04.501 "is_configured": true, 00:31:04.501 "data_offset": 0, 00:31:04.501 "data_size": 65536 00:31:04.501 }, 00:31:04.501 { 00:31:04.501 "name": null, 00:31:04.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.501 "is_configured": false, 00:31:04.501 "data_offset": 0, 00:31:04.501 "data_size": 65536 00:31:04.501 }, 00:31:04.501 { 00:31:04.501 "name": "BaseBdev3", 00:31:04.501 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:31:04.501 "is_configured": true, 00:31:04.501 "data_offset": 0, 00:31:04.501 "data_size": 65536 00:31:04.501 }, 00:31:04.501 { 00:31:04.501 "name": "BaseBdev4", 00:31:04.501 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:31:04.501 "is_configured": true, 00:31:04.501 "data_offset": 0, 00:31:04.501 "data_size": 65536 00:31:04.501 } 00:31:04.501 ] 00:31:04.501 }' 00:31:04.501 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.760 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
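(Reading aid for the trace above: the verify_raid_bdev_state call being executed here reduces to one RPC query plus a handful of jq field checks. The comparisons themselves run under xtrace_disable, so they never show up in the log; the sketch below uses the rpc.py path and socket from the trace, while the variable names and the individual field assertions are inferred from the arguments "raid_bdev1 online raid1 0 3". It shows the shape of the check, not the harness's exact code.)
# Sketch only: fetch the raid bdev info once, then assert on individual fields.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.state' <<< "$info") == online ]]                 # expected_state
[[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]             # raid_level
[[ $(jq -r '.strip_size_kb' <<< "$info") == 0 ]]              # strip_size
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 3 ]]  # operational base bdevs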
00:31:05.020 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:05.020 "name": "raid_bdev1", 00:31:05.020 "uuid": "9c9da3cb-3d0b-4d39-9281-ae93339bb1a7", 00:31:05.020 "strip_size_kb": 0, 00:31:05.020 "state": "online", 00:31:05.020 "raid_level": "raid1", 00:31:05.020 "superblock": false, 00:31:05.020 "num_base_bdevs": 4, 00:31:05.020 "num_base_bdevs_discovered": 3, 00:31:05.020 "num_base_bdevs_operational": 3, 00:31:05.020 "base_bdevs_list": [ 00:31:05.020 { 00:31:05.020 "name": "spare", 00:31:05.020 "uuid": "f3027f1b-14a9-5015-9731-89ec89a1bea7", 00:31:05.020 "is_configured": true, 00:31:05.020 "data_offset": 0, 00:31:05.020 "data_size": 65536 00:31:05.020 }, 00:31:05.020 { 00:31:05.020 "name": null, 00:31:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.020 "is_configured": false, 00:31:05.020 "data_offset": 0, 00:31:05.020 "data_size": 65536 00:31:05.020 }, 00:31:05.020 { 00:31:05.020 "name": "BaseBdev3", 00:31:05.020 "uuid": "5b479fec-716d-5d8b-b185-64ea67436b4b", 00:31:05.020 "is_configured": true, 00:31:05.020 "data_offset": 0, 00:31:05.020 "data_size": 65536 00:31:05.020 }, 00:31:05.020 { 00:31:05.020 "name": "BaseBdev4", 00:31:05.020 "uuid": "3fc94b48-9a77-5c11-b0b4-5db5ac7559bc", 00:31:05.020 "is_configured": true, 00:31:05.020 "data_offset": 0, 00:31:05.020 "data_size": 65536 00:31:05.020 } 00:31:05.020 ] 00:31:05.020 }' 00:31:05.020 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:05.020 21:44:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:05.593 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:05.851 [2024-07-15 21:44:39.043798] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:05.851 [2024-07-15 21:44:39.043904] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:05.851 00:31:05.851 Latency(us) 00:31:05.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.851 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:05.851 raid_bdev1 : 10.96 107.75 323.25 0.00 0.00 13157.81 482.93 114931.26 00:31:05.851 =================================================================================================================== 00:31:05.851 Total : 107.75 323.25 0.00 0.00 13157.81 482.93 114931.26 00:31:05.851 [2024-07-15 21:44:39.164084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:05.851 [2024-07-15 21:44:39.164185] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:05.851 [2024-07-15 21:44:39.164289] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:05.851 [2024-07-15 21:44:39.164319] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:31:05.851 0 00:31:05.851 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.851 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' 
true = true ']' 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.109 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:06.367 /dev/nbd0 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:06.367 1+0 records in 00:31:06.367 1+0 records out 00:31:06.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502585 s, 8.1 MB/s 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:06.367 
21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.367 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:31:06.625 /dev/nbd1 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:06.625 1+0 records in 00:31:06.625 1+0 records out 00:31:06.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443275 s, 9.2 MB/s 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:31:06.625 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:06.626 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:06.626 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:31:06.626 21:44:39 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:06.626 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.626 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:31:06.883 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:31:07.141 /dev/nbd1 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:07.141 1+0 records in 00:31:07.141 1+0 records out 00:31:07.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436556 s, 9.4 MB/s 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:07.141 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:07.400 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:07.400 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:07.400 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:07.400 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:07.400 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:07.400 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:07.400 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 
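(The block above is the data-integrity check for this test: each surviving base bdev is exported over NBD next to the rebuilt spare and compared byte for byte. Stripped of the harness plumbing, the per-bdev check looks roughly like the sketch below; the socket, rpc.py path and the "-i 0" offset come straight from the trace, BaseBdev4 is simply the member being checked at this point, and in the real run /dev/nbd0 stays attached while the remaining members are cycled through /dev/nbd1.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" nbd_start_disk spare /dev/nbd0       # rebuilt member
"$rpc" -s "$sock" nbd_start_disk BaseBdev4 /dev/nbd1   # surviving member
cmp -i 0 /dev/nbd0 /dev/nbd1   # -i 0: no superblock in this test, data starts at offset 0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0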
00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:07.658 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 149881 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 149881 ']' 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 149881 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149881 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 149881' 00:31:07.916 killing process with pid 149881 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 149881 00:31:07.916 Received shutdown signal, test time was about 13.060575 seconds 00:31:07.916 00:31:07.916 Latency(us) 00:31:07.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.916 =================================================================================================================== 00:31:07.916 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:07.916 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 149881 00:31:07.916 [2024-07-15 21:44:41.239604] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:08.481 [2024-07-15 21:44:41.644903] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:09.853 ************************************ 00:31:09.853 END TEST raid_rebuild_test_io 00:31:09.853 ************************************ 00:31:09.853 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:09.853 00:31:09.853 real 0m19.523s 00:31:09.853 user 0m29.894s 00:31:09.853 sys 0m2.366s 00:31:09.853 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.853 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.853 21:44:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:09.853 21:44:43 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:31:09.853 21:44:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:09.853 21:44:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.853 21:44:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:09.853 ************************************ 00:31:09.853 START TEST raid_rebuild_test_sb_io 00:31:09.853 ************************************ 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:09.853 21:44:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=150437 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 150437 /var/tmp/spdk-raid.sock 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 150437 ']' 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:09.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:09.853 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.853 [2024-07-15 21:44:43.137034] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
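(The second test, raid_rebuild_test_sb_io, starts here. Its background I/O comes from the standalone bdevperf process launched just above; as orientation, the same invocation is restated below with the main flags annotated. The annotations are a reading aid, not part of the log, and -U is deliberately left unannotated.)
# -r: RPC socket shared with rpc.py        -T: exercise only raid_bdev1
# -t 60 -w randrw -M 50: 60 s of 50/50 random reads and writes
# -o 3M -q 2: 3 MiB I/Os at queue depth 2
# -z: start idle and wait for the perform_tests RPC (sent later via bdevperf.py)
# -L bdev_raid: enable the bdev_raid debug log flag
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid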
00:31:09.853 [2024-07-15 21:44:43.137234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150437 ] 00:31:09.853 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:09.853 Zero copy mechanism will not be used. 00:31:10.109 [2024-07-15 21:44:43.297187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.366 [2024-07-15 21:44:43.498015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.366 [2024-07-15 21:44:43.700730] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:10.932 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:10.932 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:31:10.932 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:10.932 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:10.932 BaseBdev1_malloc 00:31:10.932 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:11.191 [2024-07-15 21:44:44.464557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:11.191 [2024-07-15 21:44:44.464733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:11.192 [2024-07-15 21:44:44.464799] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:31:11.192 [2024-07-15 21:44:44.464840] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:11.192 [2024-07-15 21:44:44.466955] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:11.192 [2024-07-15 21:44:44.467031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:11.192 BaseBdev1 00:31:11.192 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:11.192 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:11.451 BaseBdev2_malloc 00:31:11.451 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:11.710 [2024-07-15 21:44:44.949657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:11.710 [2024-07-15 21:44:44.949830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:11.710 [2024-07-15 21:44:44.949884] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:31:11.710 [2024-07-15 21:44:44.949955] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:11.710 [2024-07-15 21:44:44.952077] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:11.710 [2024-07-15 21:44:44.952164] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:11.710 BaseBdev2 00:31:11.710 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:11.710 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:11.967 BaseBdev3_malloc 00:31:11.967 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:12.226 [2024-07-15 21:44:45.415458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:12.226 [2024-07-15 21:44:45.415611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:12.226 [2024-07-15 21:44:45.415661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:31:12.226 [2024-07-15 21:44:45.415707] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:12.226 [2024-07-15 21:44:45.417722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:12.226 [2024-07-15 21:44:45.417801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:12.226 BaseBdev3 00:31:12.226 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:12.226 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:12.484 BaseBdev4_malloc 00:31:12.484 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:31:12.743 [2024-07-15 21:44:45.867109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:31:12.743 [2024-07-15 21:44:45.867274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:12.743 [2024-07-15 21:44:45.867342] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:12.743 [2024-07-15 21:44:45.867405] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:12.743 [2024-07-15 21:44:45.869561] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:12.743 [2024-07-15 21:44:45.869647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:12.743 BaseBdev4 00:31:12.743 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:12.743 spare_malloc 00:31:13.003 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:13.003 spare_delay 00:31:13.003 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:13.262 [2024-07-15 21:44:46.567995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
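(Worth noting while reading the setup: the "spare" bdev, which serves as the rebuild target later in these tests, is not a plain malloc bdev. The trace above stacks malloc -> delay -> passthru; condensed from the RPCs in the log, with parameters exactly as shown there. Per rpc.py's bdev_delay_create, -r/-t set the average and tail read latency and -w/-n the write latencies, in microseconds, so this spare adds write latency, giving the test a way to slow the spare's write path.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc     # 32 MiB, 512 B blocks
"$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay \
        -r 0 -t 0 -w 100000 -n 100000                           # delayed writes on the spare
"$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare  # exposed to raid as "spare"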
00:31:13.262 [2024-07-15 21:44:46.568169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:13.262 [2024-07-15 21:44:46.568216] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:13.262 [2024-07-15 21:44:46.568273] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:13.262 [2024-07-15 21:44:46.570550] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:13.262 [2024-07-15 21:44:46.570642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:13.262 spare 00:31:13.262 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:31:13.522 [2024-07-15 21:44:46.779705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:13.522 [2024-07-15 21:44:46.781572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:13.522 [2024-07-15 21:44:46.781682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:13.522 [2024-07-15 21:44:46.781762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:13.522 [2024-07-15 21:44:46.782027] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:31:13.522 [2024-07-15 21:44:46.782070] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:13.522 [2024-07-15 21:44:46.782222] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:13.522 [2024-07-15 21:44:46.782594] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:31:13.522 [2024-07-15 21:44:46.782637] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:31:13.522 [2024-07-15 21:44:46.782816] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.522 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:31:13.781 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.781 "name": "raid_bdev1", 00:31:13.781 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:13.781 "strip_size_kb": 0, 00:31:13.781 "state": "online", 00:31:13.781 "raid_level": "raid1", 00:31:13.781 "superblock": true, 00:31:13.781 "num_base_bdevs": 4, 00:31:13.781 "num_base_bdevs_discovered": 4, 00:31:13.781 "num_base_bdevs_operational": 4, 00:31:13.781 "base_bdevs_list": [ 00:31:13.781 { 00:31:13.781 "name": "BaseBdev1", 00:31:13.781 "uuid": "6a29185c-3da9-58f3-8d64-4aacef010c03", 00:31:13.781 "is_configured": true, 00:31:13.781 "data_offset": 2048, 00:31:13.781 "data_size": 63488 00:31:13.781 }, 00:31:13.781 { 00:31:13.781 "name": "BaseBdev2", 00:31:13.782 "uuid": "2f17e063-992b-5686-97b6-f052b94f9613", 00:31:13.782 "is_configured": true, 00:31:13.782 "data_offset": 2048, 00:31:13.782 "data_size": 63488 00:31:13.782 }, 00:31:13.782 { 00:31:13.782 "name": "BaseBdev3", 00:31:13.782 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:13.782 "is_configured": true, 00:31:13.782 "data_offset": 2048, 00:31:13.782 "data_size": 63488 00:31:13.782 }, 00:31:13.782 { 00:31:13.782 "name": "BaseBdev4", 00:31:13.782 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:13.782 "is_configured": true, 00:31:13.782 "data_offset": 2048, 00:31:13.782 "data_size": 63488 00:31:13.782 } 00:31:13.782 ] 00:31:13.782 }' 00:31:13.782 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.782 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:14.349 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:14.349 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:14.608 [2024-07-15 21:44:47.834078] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:14.608 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:31:14.608 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.608 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:14.868 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:31:14.868 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:31:14.868 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:14.868 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:31:14.868 [2024-07-15 21:44:48.153077] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:14.868 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:14.868 Zero copy mechanism will not be used. 00:31:14.868 Running I/O for 60 seconds... 
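A minimal sketch of the state-check pattern this trace repeats: verify_raid_bdev_state boils down to one bdev_raid_get_bdevs RPC filtered with jq by name, followed by field comparisons. The socket path and bdev name below are the ones from this run; check_raid_state itself is an illustrative helper, not part of the SPDK tree.

    # Sketch only: query a raid bdev over the RPC socket and check a few fields,
    # mirroring the verify_raid_bdev_state checks traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    check_raid_state() {
        local name=$1 state=$2 operational=$3 info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<<"$info") == "$state" ]] &&
        [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") -eq $operational ]]
    }
    check_raid_state raid_bdev1 online 4   # matches the state dump above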
00:31:15.128 [2024-07-15 21:44:48.265297] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:15.128 [2024-07-15 21:44:48.278559] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.128 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.388 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:15.388 "name": "raid_bdev1", 00:31:15.388 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:15.388 "strip_size_kb": 0, 00:31:15.388 "state": "online", 00:31:15.388 "raid_level": "raid1", 00:31:15.388 "superblock": true, 00:31:15.388 "num_base_bdevs": 4, 00:31:15.388 "num_base_bdevs_discovered": 3, 00:31:15.388 "num_base_bdevs_operational": 3, 00:31:15.388 "base_bdevs_list": [ 00:31:15.388 { 00:31:15.388 "name": null, 00:31:15.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.388 "is_configured": false, 00:31:15.388 "data_offset": 2048, 00:31:15.388 "data_size": 63488 00:31:15.388 }, 00:31:15.388 { 00:31:15.388 "name": "BaseBdev2", 00:31:15.388 "uuid": "2f17e063-992b-5686-97b6-f052b94f9613", 00:31:15.388 "is_configured": true, 00:31:15.388 "data_offset": 2048, 00:31:15.388 "data_size": 63488 00:31:15.388 }, 00:31:15.388 { 00:31:15.388 "name": "BaseBdev3", 00:31:15.388 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:15.388 "is_configured": true, 00:31:15.388 "data_offset": 2048, 00:31:15.388 "data_size": 63488 00:31:15.388 }, 00:31:15.388 { 00:31:15.388 "name": "BaseBdev4", 00:31:15.388 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:15.388 "is_configured": true, 00:31:15.388 "data_offset": 2048, 00:31:15.388 "data_size": 63488 00:31:15.388 } 00:31:15.388 ] 00:31:15.388 }' 00:31:15.388 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:15.388 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:15.957 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:16.215 [2024-07-15 
21:44:49.350619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:16.215 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:16.216 [2024-07-15 21:44:49.407030] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:16.216 [2024-07-15 21:44:49.408914] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:16.216 [2024-07-15 21:44:49.539409] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:16.216 [2024-07-15 21:44:49.540950] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:16.474 [2024-07-15 21:44:49.761094] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:16.474 [2024-07-15 21:44:49.761524] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:16.734 [2024-07-15 21:44:50.008968] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:16.993 [2024-07-15 21:44:50.152201] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:16.993 [2024-07-15 21:44:50.153112] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.252 [2024-07-15 21:44:50.521596] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:17.252 "name": "raid_bdev1", 00:31:17.252 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:17.252 "strip_size_kb": 0, 00:31:17.252 "state": "online", 00:31:17.252 "raid_level": "raid1", 00:31:17.252 "superblock": true, 00:31:17.252 "num_base_bdevs": 4, 00:31:17.252 "num_base_bdevs_discovered": 4, 00:31:17.252 "num_base_bdevs_operational": 4, 00:31:17.252 "process": { 00:31:17.252 "type": "rebuild", 00:31:17.252 "target": "spare", 00:31:17.252 "progress": { 00:31:17.252 "blocks": 14336, 00:31:17.252 "percent": 22 00:31:17.252 } 00:31:17.252 }, 00:31:17.252 "base_bdevs_list": [ 00:31:17.252 { 00:31:17.252 "name": "spare", 00:31:17.252 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:17.252 "is_configured": true, 00:31:17.252 "data_offset": 2048, 00:31:17.252 "data_size": 63488 00:31:17.252 }, 
00:31:17.252 { 00:31:17.252 "name": "BaseBdev2", 00:31:17.252 "uuid": "2f17e063-992b-5686-97b6-f052b94f9613", 00:31:17.252 "is_configured": true, 00:31:17.252 "data_offset": 2048, 00:31:17.252 "data_size": 63488 00:31:17.252 }, 00:31:17.252 { 00:31:17.252 "name": "BaseBdev3", 00:31:17.252 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:17.252 "is_configured": true, 00:31:17.252 "data_offset": 2048, 00:31:17.252 "data_size": 63488 00:31:17.252 }, 00:31:17.252 { 00:31:17.252 "name": "BaseBdev4", 00:31:17.252 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:17.252 "is_configured": true, 00:31:17.252 "data_offset": 2048, 00:31:17.252 "data_size": 63488 00:31:17.252 } 00:31:17.252 ] 00:31:17.252 }' 00:31:17.252 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:17.511 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:17.511 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:17.511 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:17.511 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:17.511 [2024-07-15 21:44:50.751740] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:17.770 [2024-07-15 21:44:50.890839] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:17.770 [2024-07-15 21:44:51.006385] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:17.770 [2024-07-15 21:44:51.010473] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:17.770 [2024-07-15 21:44:51.010545] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:17.770 [2024-07-15 21:44:51.010569] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:17.770 [2024-07-15 21:44:51.035133] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.770 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.028 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:18.028 "name": "raid_bdev1", 00:31:18.028 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:18.028 "strip_size_kb": 0, 00:31:18.028 "state": "online", 00:31:18.028 "raid_level": "raid1", 00:31:18.028 "superblock": true, 00:31:18.028 "num_base_bdevs": 4, 00:31:18.028 "num_base_bdevs_discovered": 3, 00:31:18.028 "num_base_bdevs_operational": 3, 00:31:18.028 "base_bdevs_list": [ 00:31:18.028 { 00:31:18.028 "name": null, 00:31:18.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.028 "is_configured": false, 00:31:18.028 "data_offset": 2048, 00:31:18.028 "data_size": 63488 00:31:18.028 }, 00:31:18.028 { 00:31:18.028 "name": "BaseBdev2", 00:31:18.028 "uuid": "2f17e063-992b-5686-97b6-f052b94f9613", 00:31:18.028 "is_configured": true, 00:31:18.028 "data_offset": 2048, 00:31:18.028 "data_size": 63488 00:31:18.028 }, 00:31:18.028 { 00:31:18.028 "name": "BaseBdev3", 00:31:18.028 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:18.028 "is_configured": true, 00:31:18.028 "data_offset": 2048, 00:31:18.028 "data_size": 63488 00:31:18.028 }, 00:31:18.028 { 00:31:18.028 "name": "BaseBdev4", 00:31:18.028 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:18.028 "is_configured": true, 00:31:18.028 "data_offset": 2048, 00:31:18.028 "data_size": 63488 00:31:18.028 } 00:31:18.028 ] 00:31:18.028 }' 00:31:18.028 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:18.028 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:18.596 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:18.596 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:18.596 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:18.596 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:18.596 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:18.596 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.596 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.856 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:18.856 "name": "raid_bdev1", 00:31:18.856 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:18.856 "strip_size_kb": 0, 00:31:18.856 "state": "online", 00:31:18.856 "raid_level": "raid1", 00:31:18.856 "superblock": true, 00:31:18.856 "num_base_bdevs": 4, 00:31:18.856 "num_base_bdevs_discovered": 3, 00:31:18.856 "num_base_bdevs_operational": 3, 00:31:18.856 "base_bdevs_list": [ 00:31:18.856 { 00:31:18.856 "name": null, 00:31:18.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.856 "is_configured": false, 00:31:18.856 "data_offset": 2048, 00:31:18.856 "data_size": 63488 00:31:18.856 }, 00:31:18.856 { 00:31:18.856 "name": "BaseBdev2", 00:31:18.856 "uuid": 
"2f17e063-992b-5686-97b6-f052b94f9613", 00:31:18.856 "is_configured": true, 00:31:18.856 "data_offset": 2048, 00:31:18.856 "data_size": 63488 00:31:18.856 }, 00:31:18.856 { 00:31:18.856 "name": "BaseBdev3", 00:31:18.856 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:18.856 "is_configured": true, 00:31:18.856 "data_offset": 2048, 00:31:18.856 "data_size": 63488 00:31:18.856 }, 00:31:18.856 { 00:31:18.856 "name": "BaseBdev4", 00:31:18.856 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:18.856 "is_configured": true, 00:31:18.856 "data_offset": 2048, 00:31:18.856 "data_size": 63488 00:31:18.856 } 00:31:18.856 ] 00:31:18.856 }' 00:31:18.856 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:18.856 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:18.856 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:19.114 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:19.114 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:19.114 [2024-07-15 21:44:52.475504] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:19.372 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:19.372 [2024-07-15 21:44:52.535291] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:31:19.372 [2024-07-15 21:44:52.537137] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:19.372 [2024-07-15 21:44:52.653752] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:19.372 [2024-07-15 21:44:52.654379] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:19.629 [2024-07-15 21:44:52.792629] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:19.629 [2024-07-15 21:44:52.792997] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:19.886 [2024-07-15 21:44:53.175884] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:20.144 [2024-07-15 21:44:53.389718] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:20.144 [2024-07-15 21:44:53.390125] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:20.401 "name": "raid_bdev1", 00:31:20.401 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:20.401 "strip_size_kb": 0, 00:31:20.401 "state": "online", 00:31:20.401 "raid_level": "raid1", 00:31:20.401 "superblock": true, 00:31:20.401 "num_base_bdevs": 4, 00:31:20.401 "num_base_bdevs_discovered": 4, 00:31:20.401 "num_base_bdevs_operational": 4, 00:31:20.401 "process": { 00:31:20.401 "type": "rebuild", 00:31:20.401 "target": "spare", 00:31:20.401 "progress": { 00:31:20.401 "blocks": 14336, 00:31:20.401 "percent": 22 00:31:20.401 } 00:31:20.401 }, 00:31:20.401 "base_bdevs_list": [ 00:31:20.401 { 00:31:20.401 "name": "spare", 00:31:20.401 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:20.401 "is_configured": true, 00:31:20.401 "data_offset": 2048, 00:31:20.401 "data_size": 63488 00:31:20.401 }, 00:31:20.401 { 00:31:20.401 "name": "BaseBdev2", 00:31:20.401 "uuid": "2f17e063-992b-5686-97b6-f052b94f9613", 00:31:20.401 "is_configured": true, 00:31:20.401 "data_offset": 2048, 00:31:20.401 "data_size": 63488 00:31:20.401 }, 00:31:20.401 { 00:31:20.401 "name": "BaseBdev3", 00:31:20.401 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:20.401 "is_configured": true, 00:31:20.401 "data_offset": 2048, 00:31:20.401 "data_size": 63488 00:31:20.401 }, 00:31:20.401 { 00:31:20.401 "name": "BaseBdev4", 00:31:20.401 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:20.401 "is_configured": true, 00:31:20.401 "data_offset": 2048, 00:31:20.401 "data_size": 63488 00:31:20.401 } 00:31:20.401 ] 00:31:20.401 }' 00:31:20.401 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:20.659 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:31:20.659 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:20.659 [2024-07-15 21:44:54.004211] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:20.917 [2024-07-15 21:44:54.037196] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:21.174 [2024-07-15 21:44:54.450673] 
bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:31:21.174 [2024-07-15 21:44:54.450753] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.174 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.432 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:21.432 "name": "raid_bdev1", 00:31:21.432 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:21.432 "strip_size_kb": 0, 00:31:21.432 "state": "online", 00:31:21.432 "raid_level": "raid1", 00:31:21.432 "superblock": true, 00:31:21.432 "num_base_bdevs": 4, 00:31:21.432 "num_base_bdevs_discovered": 3, 00:31:21.432 "num_base_bdevs_operational": 3, 00:31:21.432 "process": { 00:31:21.432 "type": "rebuild", 00:31:21.432 "target": "spare", 00:31:21.432 "progress": { 00:31:21.432 "blocks": 24576, 00:31:21.432 "percent": 38 00:31:21.432 } 00:31:21.432 }, 00:31:21.432 "base_bdevs_list": [ 00:31:21.432 { 00:31:21.432 "name": "spare", 00:31:21.432 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:21.432 "is_configured": true, 00:31:21.432 "data_offset": 2048, 00:31:21.432 "data_size": 63488 00:31:21.432 }, 00:31:21.432 { 00:31:21.432 "name": null, 00:31:21.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:21.432 "is_configured": false, 00:31:21.432 "data_offset": 2048, 00:31:21.432 "data_size": 63488 00:31:21.432 }, 00:31:21.432 { 00:31:21.432 "name": "BaseBdev3", 00:31:21.432 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:21.432 "is_configured": true, 00:31:21.432 "data_offset": 2048, 00:31:21.432 "data_size": 63488 00:31:21.432 }, 00:31:21.432 { 00:31:21.432 "name": "BaseBdev4", 00:31:21.432 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:21.432 "is_configured": true, 00:31:21.432 "data_offset": 2048, 00:31:21.432 "data_size": 63488 00:31:21.432 } 00:31:21.432 ] 00:31:21.432 }' 00:31:21.432 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:21.432 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:21.432 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:21.432 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=947 
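The local timeout=947 / (( SECONDS < timeout )) pair above is the rebuild watchdog: the test re-reads the raid bdev roughly once per second while the process block still reports type "rebuild" and target "spare". A compact sketch of that polling loop, reusing the rpc and sock variables from the earlier sketch, is below; wait_for_rebuild and its 60-second deadline are illustrative. (The earlier "line 665: [: =: unary operator expected" message comes from '[' = false ']' where the left-hand operand expanded to nothing, i.e. an unquoted empty variable in a [ test; the run continues past it, as the following commands show.)

    # Sketch: poll until the rebuild process block disappears or a deadline
    # passes. The jq filter mirrors the '.process.type // "none"' checks above.
    wait_for_rebuild() {
        local name=$1 deadline=$(( SECONDS + 60 ))
        while (( SECONDS < deadline )); do
            local ptype
            ptype=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
                    jq -r ".[] | select(.name == \"$name\") | .process.type // \"none\"")
            [[ $ptype == none ]] && return 0   # process block gone: rebuild finished
            sleep 1
        done
        return 1
    }
    wait_for_rebuild raid_bdev1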
00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.433 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.433 [2024-07-15 21:44:54.800355] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:21.691 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:21.691 "name": "raid_bdev1", 00:31:21.691 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:21.691 "strip_size_kb": 0, 00:31:21.691 "state": "online", 00:31:21.691 "raid_level": "raid1", 00:31:21.691 "superblock": true, 00:31:21.691 "num_base_bdevs": 4, 00:31:21.691 "num_base_bdevs_discovered": 3, 00:31:21.691 "num_base_bdevs_operational": 3, 00:31:21.691 "process": { 00:31:21.691 "type": "rebuild", 00:31:21.691 "target": "spare", 00:31:21.691 "progress": { 00:31:21.691 "blocks": 28672, 00:31:21.691 "percent": 45 00:31:21.691 } 00:31:21.691 }, 00:31:21.691 "base_bdevs_list": [ 00:31:21.691 { 00:31:21.691 "name": "spare", 00:31:21.691 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:21.691 "is_configured": true, 00:31:21.691 "data_offset": 2048, 00:31:21.691 "data_size": 63488 00:31:21.691 }, 00:31:21.691 { 00:31:21.691 "name": null, 00:31:21.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:21.691 "is_configured": false, 00:31:21.691 "data_offset": 2048, 00:31:21.691 "data_size": 63488 00:31:21.691 }, 00:31:21.691 { 00:31:21.691 "name": "BaseBdev3", 00:31:21.691 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:21.691 "is_configured": true, 00:31:21.691 "data_offset": 2048, 00:31:21.691 "data_size": 63488 00:31:21.691 }, 00:31:21.691 { 00:31:21.691 "name": "BaseBdev4", 00:31:21.691 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:21.691 "is_configured": true, 00:31:21.691 "data_offset": 2048, 00:31:21.691 "data_size": 63488 00:31:21.691 } 00:31:21.691 ] 00:31:21.691 }' 00:31:21.691 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:21.691 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:21.691 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:21.949 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:21.949 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:21.949 [2024-07-15 21:44:55.212692] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:31:22.206 [2024-07-15 
21:44:55.437573] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:31:22.772 [2024-07-15 21:44:55.885209] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:31:22.772 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:22.772 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:22.772 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:22.772 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:22.772 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:22.772 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:22.772 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.772 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.031 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:23.031 "name": "raid_bdev1", 00:31:23.031 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:23.031 "strip_size_kb": 0, 00:31:23.031 "state": "online", 00:31:23.031 "raid_level": "raid1", 00:31:23.031 "superblock": true, 00:31:23.031 "num_base_bdevs": 4, 00:31:23.031 "num_base_bdevs_discovered": 3, 00:31:23.031 "num_base_bdevs_operational": 3, 00:31:23.031 "process": { 00:31:23.031 "type": "rebuild", 00:31:23.031 "target": "spare", 00:31:23.031 "progress": { 00:31:23.031 "blocks": 51200, 00:31:23.031 "percent": 80 00:31:23.031 } 00:31:23.031 }, 00:31:23.031 "base_bdevs_list": [ 00:31:23.031 { 00:31:23.031 "name": "spare", 00:31:23.031 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:23.031 "is_configured": true, 00:31:23.031 "data_offset": 2048, 00:31:23.031 "data_size": 63488 00:31:23.031 }, 00:31:23.031 { 00:31:23.031 "name": null, 00:31:23.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.031 "is_configured": false, 00:31:23.031 "data_offset": 2048, 00:31:23.031 "data_size": 63488 00:31:23.031 }, 00:31:23.031 { 00:31:23.031 "name": "BaseBdev3", 00:31:23.031 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:23.031 "is_configured": true, 00:31:23.031 "data_offset": 2048, 00:31:23.031 "data_size": 63488 00:31:23.031 }, 00:31:23.031 { 00:31:23.031 "name": "BaseBdev4", 00:31:23.031 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:23.031 "is_configured": true, 00:31:23.031 "data_offset": 2048, 00:31:23.031 "data_size": 63488 00:31:23.031 } 00:31:23.031 ] 00:31:23.031 }' 00:31:23.031 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:23.031 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:23.031 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:23.031 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:23.031 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:23.599 [2024-07-15 21:44:56.908466] 
bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:23.858 [2024-07-15 21:44:57.008306] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:23.858 [2024-07-15 21:44:57.011318] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:24.118 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:24.118 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:24.118 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:24.118 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:24.118 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:24.118 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:24.118 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.118 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:24.378 "name": "raid_bdev1", 00:31:24.378 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:24.378 "strip_size_kb": 0, 00:31:24.378 "state": "online", 00:31:24.378 "raid_level": "raid1", 00:31:24.378 "superblock": true, 00:31:24.378 "num_base_bdevs": 4, 00:31:24.378 "num_base_bdevs_discovered": 3, 00:31:24.378 "num_base_bdevs_operational": 3, 00:31:24.378 "base_bdevs_list": [ 00:31:24.378 { 00:31:24.378 "name": "spare", 00:31:24.378 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:24.378 "is_configured": true, 00:31:24.378 "data_offset": 2048, 00:31:24.378 "data_size": 63488 00:31:24.378 }, 00:31:24.378 { 00:31:24.378 "name": null, 00:31:24.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.378 "is_configured": false, 00:31:24.378 "data_offset": 2048, 00:31:24.378 "data_size": 63488 00:31:24.378 }, 00:31:24.378 { 00:31:24.378 "name": "BaseBdev3", 00:31:24.378 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:24.378 "is_configured": true, 00:31:24.378 "data_offset": 2048, 00:31:24.378 "data_size": 63488 00:31:24.378 }, 00:31:24.378 { 00:31:24.378 "name": "BaseBdev4", 00:31:24.378 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:24.378 "is_configured": true, 00:31:24.378 "data_offset": 2048, 00:31:24.378 "data_size": 63488 00:31:24.378 } 00:31:24.378 ] 00:31:24.378 }' 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.378 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:24.638 "name": "raid_bdev1", 00:31:24.638 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:24.638 "strip_size_kb": 0, 00:31:24.638 "state": "online", 00:31:24.638 "raid_level": "raid1", 00:31:24.638 "superblock": true, 00:31:24.638 "num_base_bdevs": 4, 00:31:24.638 "num_base_bdevs_discovered": 3, 00:31:24.638 "num_base_bdevs_operational": 3, 00:31:24.638 "base_bdevs_list": [ 00:31:24.638 { 00:31:24.638 "name": "spare", 00:31:24.638 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:24.638 "is_configured": true, 00:31:24.638 "data_offset": 2048, 00:31:24.638 "data_size": 63488 00:31:24.638 }, 00:31:24.638 { 00:31:24.638 "name": null, 00:31:24.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.638 "is_configured": false, 00:31:24.638 "data_offset": 2048, 00:31:24.638 "data_size": 63488 00:31:24.638 }, 00:31:24.638 { 00:31:24.638 "name": "BaseBdev3", 00:31:24.638 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:24.638 "is_configured": true, 00:31:24.638 "data_offset": 2048, 00:31:24.638 "data_size": 63488 00:31:24.638 }, 00:31:24.638 { 00:31:24.638 "name": "BaseBdev4", 00:31:24.638 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:24.638 "is_configured": true, 00:31:24.638 "data_offset": 2048, 00:31:24.638 "data_size": 63488 00:31:24.638 } 00:31:24.638 ] 00:31:24.638 }' 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.638 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.897 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:24.897 "name": "raid_bdev1", 00:31:24.897 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:24.897 "strip_size_kb": 0, 00:31:24.897 "state": "online", 00:31:24.897 "raid_level": "raid1", 00:31:24.897 "superblock": true, 00:31:24.897 "num_base_bdevs": 4, 00:31:24.897 "num_base_bdevs_discovered": 3, 00:31:24.897 "num_base_bdevs_operational": 3, 00:31:24.897 "base_bdevs_list": [ 00:31:24.897 { 00:31:24.897 "name": "spare", 00:31:24.897 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:24.897 "is_configured": true, 00:31:24.897 "data_offset": 2048, 00:31:24.897 "data_size": 63488 00:31:24.897 }, 00:31:24.897 { 00:31:24.897 "name": null, 00:31:24.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.897 "is_configured": false, 00:31:24.897 "data_offset": 2048, 00:31:24.897 "data_size": 63488 00:31:24.897 }, 00:31:24.897 { 00:31:24.897 "name": "BaseBdev3", 00:31:24.897 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:24.897 "is_configured": true, 00:31:24.897 "data_offset": 2048, 00:31:24.897 "data_size": 63488 00:31:24.897 }, 00:31:24.897 { 00:31:24.897 "name": "BaseBdev4", 00:31:24.897 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:24.897 "is_configured": true, 00:31:24.897 "data_offset": 2048, 00:31:24.897 "data_size": 63488 00:31:24.897 } 00:31:24.897 ] 00:31:24.897 }' 00:31:24.897 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:24.897 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:25.466 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:25.724 [2024-07-15 21:44:58.954890] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:25.724 [2024-07-15 21:44:58.954993] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:25.724 00:31:25.724 Latency(us) 00:31:25.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.724 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:25.724 raid_bdev1 : 10.92 108.06 324.17 0.00 0.00 13073.26 347.00 117220.72 00:31:25.724 =================================================================================================================== 00:31:25.724 Total : 108.06 324.17 0.00 0.00 13073.26 347.00 117220.72 00:31:25.724 [2024-07-15 21:44:59.080159] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:25.724 [2024-07-15 21:44:59.080272] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:25.724 [2024-07-15 21:44:59.080411] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:25.724 [2024-07-15 21:44:59.080450] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:31:25.724 0 
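With the rebuild and I/O run finished, the trace deletes raid_bdev1, the bdev goes offline, and bdev_raid_get_bdevs all | jq length confirms nothing is left. The lines that follow then expose the rebuilt members over NBD and byte-compare them past the 2048-block data_offset (2048 x 512 B = 1 MiB, hence cmp -i 1048576). A condensed sketch of that comparison step, with device paths and bdev names taken from this run and the same rpc/sock variables as in the earlier sketches:

    # Sketch of the NBD comparison seen below: export two bdevs, compare
    # everything past the 1 MiB data_offset/superblock region, then tear down.
    "$rpc" -s "$sock" nbd_start_disk spare     /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk BaseBdev3 /dev/nbd1
    cmp -i 1048576 /dev/nbd0 /dev/nbd1          # non-zero exit on any mismatch
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0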
00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:25.982 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:26.241 /dev/nbd0 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:26.241 1+0 records in 00:31:26.241 1+0 records out 00:31:26.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458011 s, 8.9 MB/s 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:26.241 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:31:26.500 /dev/nbd1 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:26.500 1+0 records in 00:31:26.500 1+0 records out 00:31:26.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000539229 s, 7.6 MB/s 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:26.500 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:26.760 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:26.760 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:26.760 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:26.760 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:26.760 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:26.760 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:26.760 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:27.023 
21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:27.023 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:31:27.282 /dev/nbd1 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:27.282 1+0 records in 00:31:27.282 1+0 records out 00:31:27.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362742 s, 11.3 MB/s 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:27.282 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:27.541 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:31:27.800 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:28.058 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:28.317 [2024-07-15 
21:45:01.453692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:28.317 [2024-07-15 21:45:01.453813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:28.317 [2024-07-15 21:45:01.453883] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:31:28.317 [2024-07-15 21:45:01.453925] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:28.317 [2024-07-15 21:45:01.456000] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:28.317 [2024-07-15 21:45:01.456085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:28.317 [2024-07-15 21:45:01.456246] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:28.317 [2024-07-15 21:45:01.456338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:28.317 [2024-07-15 21:45:01.456522] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:28.317 [2024-07-15 21:45:01.456663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:28.317 spare 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.317 [2024-07-15 21:45:01.556601] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:31:28.317 [2024-07-15 21:45:01.556692] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:28.317 [2024-07-15 21:45:01.556924] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a3c0 00:31:28.317 [2024-07-15 21:45:01.557384] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:31:28.317 [2024-07-15 21:45:01.557432] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:31:28.317 [2024-07-15 21:45:01.557659] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:31:28.317 "name": "raid_bdev1", 00:31:28.317 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:28.317 "strip_size_kb": 0, 00:31:28.317 "state": "online", 00:31:28.317 "raid_level": "raid1", 00:31:28.317 "superblock": true, 00:31:28.317 "num_base_bdevs": 4, 00:31:28.317 "num_base_bdevs_discovered": 3, 00:31:28.317 "num_base_bdevs_operational": 3, 00:31:28.317 "base_bdevs_list": [ 00:31:28.317 { 00:31:28.317 "name": "spare", 00:31:28.317 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:28.317 "is_configured": true, 00:31:28.317 "data_offset": 2048, 00:31:28.317 "data_size": 63488 00:31:28.317 }, 00:31:28.317 { 00:31:28.317 "name": null, 00:31:28.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.317 "is_configured": false, 00:31:28.317 "data_offset": 2048, 00:31:28.317 "data_size": 63488 00:31:28.317 }, 00:31:28.317 { 00:31:28.317 "name": "BaseBdev3", 00:31:28.317 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:28.317 "is_configured": true, 00:31:28.317 "data_offset": 2048, 00:31:28.317 "data_size": 63488 00:31:28.317 }, 00:31:28.317 { 00:31:28.317 "name": "BaseBdev4", 00:31:28.317 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:28.317 "is_configured": true, 00:31:28.317 "data_offset": 2048, 00:31:28.317 "data_size": 63488 00:31:28.317 } 00:31:28.317 ] 00:31:28.317 }' 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.317 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:29.253 "name": "raid_bdev1", 00:31:29.253 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:29.253 "strip_size_kb": 0, 00:31:29.253 "state": "online", 00:31:29.253 "raid_level": "raid1", 00:31:29.253 "superblock": true, 00:31:29.253 "num_base_bdevs": 4, 00:31:29.253 "num_base_bdevs_discovered": 3, 00:31:29.253 "num_base_bdevs_operational": 3, 00:31:29.253 "base_bdevs_list": [ 00:31:29.253 { 00:31:29.253 "name": "spare", 00:31:29.253 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:29.253 "is_configured": true, 00:31:29.253 "data_offset": 2048, 00:31:29.253 "data_size": 63488 00:31:29.253 }, 00:31:29.253 { 00:31:29.253 "name": null, 00:31:29.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.253 "is_configured": false, 00:31:29.253 "data_offset": 2048, 00:31:29.253 "data_size": 63488 00:31:29.253 }, 00:31:29.253 { 00:31:29.253 "name": "BaseBdev3", 00:31:29.253 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:29.253 "is_configured": true, 00:31:29.253 "data_offset": 2048, 00:31:29.253 "data_size": 63488 
00:31:29.253 }, 00:31:29.253 { 00:31:29.253 "name": "BaseBdev4", 00:31:29.253 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:29.253 "is_configured": true, 00:31:29.253 "data_offset": 2048, 00:31:29.253 "data_size": 63488 00:31:29.253 } 00:31:29.253 ] 00:31:29.253 }' 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:29.253 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:29.512 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:29.512 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.512 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:29.512 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:29.512 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:29.771 [2024-07-15 21:45:03.019500] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.771 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.030 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:30.030 "name": "raid_bdev1", 00:31:30.030 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:30.030 "strip_size_kb": 0, 00:31:30.030 "state": "online", 00:31:30.030 "raid_level": "raid1", 00:31:30.030 "superblock": true, 00:31:30.030 "num_base_bdevs": 4, 00:31:30.030 "num_base_bdevs_discovered": 2, 00:31:30.030 "num_base_bdevs_operational": 2, 00:31:30.030 "base_bdevs_list": [ 00:31:30.030 { 00:31:30.030 "name": null, 00:31:30.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.030 "is_configured": false, 00:31:30.030 "data_offset": 2048, 
00:31:30.030 "data_size": 63488 00:31:30.030 }, 00:31:30.030 { 00:31:30.030 "name": null, 00:31:30.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.030 "is_configured": false, 00:31:30.030 "data_offset": 2048, 00:31:30.030 "data_size": 63488 00:31:30.030 }, 00:31:30.030 { 00:31:30.030 "name": "BaseBdev3", 00:31:30.030 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:30.030 "is_configured": true, 00:31:30.030 "data_offset": 2048, 00:31:30.030 "data_size": 63488 00:31:30.030 }, 00:31:30.030 { 00:31:30.030 "name": "BaseBdev4", 00:31:30.030 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:30.030 "is_configured": true, 00:31:30.030 "data_offset": 2048, 00:31:30.030 "data_size": 63488 00:31:30.030 } 00:31:30.030 ] 00:31:30.030 }' 00:31:30.030 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:30.030 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:30.594 21:45:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:30.853 [2024-07-15 21:45:04.033901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:30.853 [2024-07-15 21:45:04.034143] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:30.853 [2024-07-15 21:45:04.034183] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:30.853 [2024-07-15 21:45:04.034276] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:30.853 [2024-07-15 21:45:04.047164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a560 00:31:30.853 [2024-07-15 21:45:04.048900] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:30.853 21:45:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:31.791 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:31.791 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:31.791 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:31.791 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:31.791 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:31.791 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.791 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.082 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:32.082 "name": "raid_bdev1", 00:31:32.082 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:32.082 "strip_size_kb": 0, 00:31:32.082 "state": "online", 00:31:32.082 "raid_level": "raid1", 00:31:32.082 "superblock": true, 00:31:32.082 "num_base_bdevs": 4, 00:31:32.082 "num_base_bdevs_discovered": 3, 00:31:32.082 "num_base_bdevs_operational": 3, 00:31:32.082 "process": { 00:31:32.082 "type": "rebuild", 00:31:32.082 "target": "spare", 00:31:32.082 "progress": { 00:31:32.082 
"blocks": 22528, 00:31:32.082 "percent": 35 00:31:32.082 } 00:31:32.082 }, 00:31:32.082 "base_bdevs_list": [ 00:31:32.082 { 00:31:32.082 "name": "spare", 00:31:32.082 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:32.082 "is_configured": true, 00:31:32.082 "data_offset": 2048, 00:31:32.082 "data_size": 63488 00:31:32.082 }, 00:31:32.082 { 00:31:32.082 "name": null, 00:31:32.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.082 "is_configured": false, 00:31:32.082 "data_offset": 2048, 00:31:32.082 "data_size": 63488 00:31:32.082 }, 00:31:32.082 { 00:31:32.082 "name": "BaseBdev3", 00:31:32.082 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:32.082 "is_configured": true, 00:31:32.082 "data_offset": 2048, 00:31:32.082 "data_size": 63488 00:31:32.082 }, 00:31:32.082 { 00:31:32.082 "name": "BaseBdev4", 00:31:32.082 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:32.082 "is_configured": true, 00:31:32.082 "data_offset": 2048, 00:31:32.082 "data_size": 63488 00:31:32.082 } 00:31:32.082 ] 00:31:32.082 }' 00:31:32.082 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:32.082 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:32.082 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:32.082 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:32.082 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:32.340 [2024-07-15 21:45:05.568735] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:32.340 [2024-07-15 21:45:05.655842] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:32.340 [2024-07-15 21:45:05.655910] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:32.340 [2024-07-15 21:45:05.655924] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:32.340 [2024-07-15 21:45:05.655931] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.340 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.599 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:32.599 "name": "raid_bdev1", 00:31:32.599 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:32.599 "strip_size_kb": 0, 00:31:32.599 "state": "online", 00:31:32.599 "raid_level": "raid1", 00:31:32.599 "superblock": true, 00:31:32.599 "num_base_bdevs": 4, 00:31:32.599 "num_base_bdevs_discovered": 2, 00:31:32.599 "num_base_bdevs_operational": 2, 00:31:32.599 "base_bdevs_list": [ 00:31:32.599 { 00:31:32.599 "name": null, 00:31:32.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.599 "is_configured": false, 00:31:32.599 "data_offset": 2048, 00:31:32.599 "data_size": 63488 00:31:32.599 }, 00:31:32.599 { 00:31:32.599 "name": null, 00:31:32.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.599 "is_configured": false, 00:31:32.599 "data_offset": 2048, 00:31:32.599 "data_size": 63488 00:31:32.599 }, 00:31:32.599 { 00:31:32.599 "name": "BaseBdev3", 00:31:32.599 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:32.599 "is_configured": true, 00:31:32.599 "data_offset": 2048, 00:31:32.599 "data_size": 63488 00:31:32.599 }, 00:31:32.599 { 00:31:32.599 "name": "BaseBdev4", 00:31:32.599 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:32.599 "is_configured": true, 00:31:32.599 "data_offset": 2048, 00:31:32.599 "data_size": 63488 00:31:32.599 } 00:31:32.599 ] 00:31:32.599 }' 00:31:32.599 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:32.599 21:45:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:33.168 21:45:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:33.427 [2024-07-15 21:45:06.701962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:33.427 [2024-07-15 21:45:06.702040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:33.427 [2024-07-15 21:45:06.702095] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:31:33.427 [2024-07-15 21:45:06.702113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:33.427 [2024-07-15 21:45:06.702619] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:33.427 [2024-07-15 21:45:06.702655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:33.427 [2024-07-15 21:45:06.702795] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:33.427 [2024-07-15 21:45:06.702815] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:33.427 [2024-07-15 21:45:06.702823] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
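For reference, the verify_raid_bdev_state checks traced above reduce to one RPC query plus jq filtering. A minimal standalone sketch of that query, assuming the same rpc.py path, /var/tmp/spdk-raid.sock socket, and raid_bdev1 name that appear in this trace (the harness itself goes through the verify_raid_bdev_state helper in bdev_raid.sh rather than this snippet):

  #!/usr/bin/env bash
  # Sketch only: fetch raid_bdev1 and pull out the fields the state checks assert on.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  state=$(jq -r '.state' <<< "$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  operational=$(jq -r '.num_base_bdevs_operational' <<< "$info")
  process=$(jq -r '.process.type // "none"' <<< "$info")
  echo "state=$state discovered=$discovered operational=$operational process=$process"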
00:31:33.427 [2024-07-15 21:45:06.702865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:33.427 [2024-07-15 21:45:06.716971] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a8a0 00:31:33.427 spare 00:31:33.427 [2024-07-15 21:45:06.718608] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:33.427 21:45:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:34.364 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:34.364 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:34.364 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:34.364 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:34.364 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:34.364 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.364 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.623 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:34.623 "name": "raid_bdev1", 00:31:34.623 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:34.623 "strip_size_kb": 0, 00:31:34.623 "state": "online", 00:31:34.623 "raid_level": "raid1", 00:31:34.623 "superblock": true, 00:31:34.623 "num_base_bdevs": 4, 00:31:34.623 "num_base_bdevs_discovered": 3, 00:31:34.623 "num_base_bdevs_operational": 3, 00:31:34.623 "process": { 00:31:34.623 "type": "rebuild", 00:31:34.623 "target": "spare", 00:31:34.623 "progress": { 00:31:34.623 "blocks": 24576, 00:31:34.623 "percent": 38 00:31:34.624 } 00:31:34.624 }, 00:31:34.624 "base_bdevs_list": [ 00:31:34.624 { 00:31:34.624 "name": "spare", 00:31:34.624 "uuid": "98d2e968-9780-5073-af39-0840800b0635", 00:31:34.624 "is_configured": true, 00:31:34.624 "data_offset": 2048, 00:31:34.624 "data_size": 63488 00:31:34.624 }, 00:31:34.624 { 00:31:34.624 "name": null, 00:31:34.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.624 "is_configured": false, 00:31:34.624 "data_offset": 2048, 00:31:34.624 "data_size": 63488 00:31:34.624 }, 00:31:34.624 { 00:31:34.624 "name": "BaseBdev3", 00:31:34.624 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:34.624 "is_configured": true, 00:31:34.624 "data_offset": 2048, 00:31:34.624 "data_size": 63488 00:31:34.624 }, 00:31:34.624 { 00:31:34.624 "name": "BaseBdev4", 00:31:34.624 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:34.624 "is_configured": true, 00:31:34.624 "data_offset": 2048, 00:31:34.624 "data_size": 63488 00:31:34.624 } 00:31:34.624 ] 00:31:34.624 }' 00:31:34.624 21:45:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:34.883 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:34.883 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:34.883 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:34.883 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:35.141 [2024-07-15 21:45:08.267327] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:35.141 [2024-07-15 21:45:08.325301] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:35.141 [2024-07-15 21:45:08.325370] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:35.141 [2024-07-15 21:45:08.325383] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:35.141 [2024-07-15 21:45:08.325389] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:35.141 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:35.141 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:35.141 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:35.141 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:35.141 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:35.141 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:35.141 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:35.141 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:35.142 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:35.142 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:35.142 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.142 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.400 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:35.400 "name": "raid_bdev1", 00:31:35.400 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:35.400 "strip_size_kb": 0, 00:31:35.400 "state": "online", 00:31:35.400 "raid_level": "raid1", 00:31:35.400 "superblock": true, 00:31:35.400 "num_base_bdevs": 4, 00:31:35.400 "num_base_bdevs_discovered": 2, 00:31:35.400 "num_base_bdevs_operational": 2, 00:31:35.400 "base_bdevs_list": [ 00:31:35.400 { 00:31:35.400 "name": null, 00:31:35.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.400 "is_configured": false, 00:31:35.400 "data_offset": 2048, 00:31:35.400 "data_size": 63488 00:31:35.400 }, 00:31:35.400 { 00:31:35.400 "name": null, 00:31:35.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.400 "is_configured": false, 00:31:35.400 "data_offset": 2048, 00:31:35.400 "data_size": 63488 00:31:35.400 }, 00:31:35.400 { 00:31:35.400 "name": "BaseBdev3", 00:31:35.400 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:35.400 "is_configured": true, 00:31:35.400 "data_offset": 2048, 00:31:35.400 "data_size": 63488 00:31:35.400 }, 00:31:35.400 { 00:31:35.400 "name": "BaseBdev4", 00:31:35.400 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:35.400 "is_configured": true, 00:31:35.400 "data_offset": 2048, 00:31:35.400 
"data_size": 63488 00:31:35.400 } 00:31:35.400 ] 00:31:35.400 }' 00:31:35.400 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:35.400 21:45:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:35.969 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:35.969 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:35.969 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:35.969 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:35.969 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:35.969 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.969 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.228 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:36.228 "name": "raid_bdev1", 00:31:36.228 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:36.228 "strip_size_kb": 0, 00:31:36.228 "state": "online", 00:31:36.228 "raid_level": "raid1", 00:31:36.228 "superblock": true, 00:31:36.228 "num_base_bdevs": 4, 00:31:36.228 "num_base_bdevs_discovered": 2, 00:31:36.229 "num_base_bdevs_operational": 2, 00:31:36.229 "base_bdevs_list": [ 00:31:36.229 { 00:31:36.229 "name": null, 00:31:36.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.229 "is_configured": false, 00:31:36.229 "data_offset": 2048, 00:31:36.229 "data_size": 63488 00:31:36.229 }, 00:31:36.229 { 00:31:36.229 "name": null, 00:31:36.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.229 "is_configured": false, 00:31:36.229 "data_offset": 2048, 00:31:36.229 "data_size": 63488 00:31:36.229 }, 00:31:36.229 { 00:31:36.229 "name": "BaseBdev3", 00:31:36.229 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:36.229 "is_configured": true, 00:31:36.229 "data_offset": 2048, 00:31:36.229 "data_size": 63488 00:31:36.229 }, 00:31:36.229 { 00:31:36.229 "name": "BaseBdev4", 00:31:36.229 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:36.229 "is_configured": true, 00:31:36.229 "data_offset": 2048, 00:31:36.229 "data_size": 63488 00:31:36.229 } 00:31:36.229 ] 00:31:36.229 }' 00:31:36.229 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.229 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:36.229 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.229 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:36.229 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:36.488 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:36.748 [2024-07-15 21:45:09.878644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:31:36.748 [2024-07-15 21:45:09.878761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:36.748 [2024-07-15 21:45:09.878805] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:31:36.748 [2024-07-15 21:45:09.878825] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:36.748 [2024-07-15 21:45:09.879400] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:36.748 [2024-07-15 21:45:09.879442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:36.748 [2024-07-15 21:45:09.879610] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:36.748 [2024-07-15 21:45:09.879630] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:36.748 [2024-07-15 21:45:09.879639] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:36.748 BaseBdev1 00:31:36.748 21:45:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.683 21:45:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.942 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.942 "name": "raid_bdev1", 00:31:37.942 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:37.942 "strip_size_kb": 0, 00:31:37.942 "state": "online", 00:31:37.942 "raid_level": "raid1", 00:31:37.942 "superblock": true, 00:31:37.942 "num_base_bdevs": 4, 00:31:37.942 "num_base_bdevs_discovered": 2, 00:31:37.942 "num_base_bdevs_operational": 2, 00:31:37.942 "base_bdevs_list": [ 00:31:37.942 { 00:31:37.942 "name": null, 00:31:37.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.942 "is_configured": false, 00:31:37.942 "data_offset": 2048, 00:31:37.942 "data_size": 63488 00:31:37.942 }, 00:31:37.942 { 00:31:37.942 "name": null, 00:31:37.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.942 "is_configured": false, 00:31:37.942 "data_offset": 2048, 00:31:37.942 "data_size": 63488 
00:31:37.942 }, 00:31:37.942 { 00:31:37.942 "name": "BaseBdev3", 00:31:37.942 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:37.942 "is_configured": true, 00:31:37.942 "data_offset": 2048, 00:31:37.942 "data_size": 63488 00:31:37.942 }, 00:31:37.942 { 00:31:37.942 "name": "BaseBdev4", 00:31:37.942 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:37.942 "is_configured": true, 00:31:37.942 "data_offset": 2048, 00:31:37.942 "data_size": 63488 00:31:37.942 } 00:31:37.942 ] 00:31:37.942 }' 00:31:37.942 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.942 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:38.543 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:38.543 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:38.543 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:38.543 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:38.543 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:38.543 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.543 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:38.802 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:38.802 "name": "raid_bdev1", 00:31:38.802 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:38.802 "strip_size_kb": 0, 00:31:38.802 "state": "online", 00:31:38.802 "raid_level": "raid1", 00:31:38.802 "superblock": true, 00:31:38.802 "num_base_bdevs": 4, 00:31:38.802 "num_base_bdevs_discovered": 2, 00:31:38.802 "num_base_bdevs_operational": 2, 00:31:38.802 "base_bdevs_list": [ 00:31:38.802 { 00:31:38.802 "name": null, 00:31:38.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:38.802 "is_configured": false, 00:31:38.802 "data_offset": 2048, 00:31:38.802 "data_size": 63488 00:31:38.802 }, 00:31:38.802 { 00:31:38.802 "name": null, 00:31:38.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:38.802 "is_configured": false, 00:31:38.802 "data_offset": 2048, 00:31:38.802 "data_size": 63488 00:31:38.802 }, 00:31:38.802 { 00:31:38.802 "name": "BaseBdev3", 00:31:38.802 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:38.802 "is_configured": true, 00:31:38.802 "data_offset": 2048, 00:31:38.802 "data_size": 63488 00:31:38.802 }, 00:31:38.802 { 00:31:38.802 "name": "BaseBdev4", 00:31:38.802 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:38.802 "is_configured": true, 00:31:38.802 "data_offset": 2048, 00:31:38.802 "data_size": 63488 00:31:38.802 } 00:31:38.802 ] 00:31:38.802 }' 00:31:38.802 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:38.802 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:38.802 21:45:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:38.802 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:39.060 [2024-07-15 21:45:12.270935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:39.060 [2024-07-15 21:45:12.271162] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:39.060 [2024-07-15 21:45:12.271177] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:39.060 request: 00:31:39.060 { 00:31:39.060 "base_bdev": "BaseBdev1", 00:31:39.060 "raid_bdev": "raid_bdev1", 00:31:39.060 "method": "bdev_raid_add_base_bdev", 00:31:39.060 "req_id": 1 00:31:39.060 } 00:31:39.060 Got JSON-RPC error response 00:31:39.060 response: 00:31:39.060 { 00:31:39.060 "code": -22, 00:31:39.060 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:39.060 } 00:31:39.060 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:31:39.060 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:39.060 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:39.060 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:39.060 21:45:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.011 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.276 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:40.276 "name": "raid_bdev1", 00:31:40.276 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:40.276 "strip_size_kb": 0, 00:31:40.276 "state": "online", 00:31:40.276 "raid_level": "raid1", 00:31:40.276 "superblock": true, 00:31:40.276 "num_base_bdevs": 4, 00:31:40.276 "num_base_bdevs_discovered": 2, 00:31:40.276 "num_base_bdevs_operational": 2, 00:31:40.276 "base_bdevs_list": [ 00:31:40.276 { 00:31:40.276 "name": null, 00:31:40.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.276 "is_configured": false, 00:31:40.276 "data_offset": 2048, 00:31:40.276 "data_size": 63488 00:31:40.276 }, 00:31:40.276 { 00:31:40.276 "name": null, 00:31:40.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.276 "is_configured": false, 00:31:40.276 "data_offset": 2048, 00:31:40.276 "data_size": 63488 00:31:40.276 }, 00:31:40.276 { 00:31:40.276 "name": "BaseBdev3", 00:31:40.276 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:40.276 "is_configured": true, 00:31:40.276 "data_offset": 2048, 00:31:40.276 "data_size": 63488 00:31:40.276 }, 00:31:40.276 { 00:31:40.276 "name": "BaseBdev4", 00:31:40.276 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:40.276 "is_configured": true, 00:31:40.276 "data_offset": 2048, 00:31:40.276 "data_size": 63488 00:31:40.276 } 00:31:40.276 ] 00:31:40.276 }' 00:31:40.276 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:40.276 21:45:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:40.852 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:40.852 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:40.852 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:40.852 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:40.852 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:40.852 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.852 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.109 21:45:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.109 "name": "raid_bdev1", 00:31:41.109 "uuid": "0a6dfe99-28e4-4dde-9bda-a41414877b07", 00:31:41.109 "strip_size_kb": 0, 00:31:41.109 "state": "online", 00:31:41.109 "raid_level": "raid1", 00:31:41.109 "superblock": true, 00:31:41.109 "num_base_bdevs": 4, 00:31:41.109 "num_base_bdevs_discovered": 2, 00:31:41.109 "num_base_bdevs_operational": 2, 00:31:41.109 "base_bdevs_list": [ 00:31:41.109 { 00:31:41.109 "name": null, 00:31:41.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:41.109 "is_configured": false, 00:31:41.109 "data_offset": 2048, 00:31:41.109 "data_size": 63488 00:31:41.109 }, 00:31:41.109 { 00:31:41.109 "name": null, 00:31:41.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:41.109 "is_configured": false, 00:31:41.109 "data_offset": 2048, 00:31:41.109 "data_size": 63488 00:31:41.109 }, 00:31:41.109 { 00:31:41.109 "name": "BaseBdev3", 00:31:41.109 "uuid": "c58bad3c-e4fa-52f3-8a3d-3f9fa9ae4295", 00:31:41.109 "is_configured": true, 00:31:41.109 "data_offset": 2048, 00:31:41.109 "data_size": 63488 00:31:41.109 }, 00:31:41.109 { 00:31:41.109 "name": "BaseBdev4", 00:31:41.109 "uuid": "a4493acd-4af1-5a86-b40b-701614ccb9f4", 00:31:41.109 "is_configured": true, 00:31:41.109 "data_offset": 2048, 00:31:41.109 "data_size": 63488 00:31:41.109 } 00:31:41.109 ] 00:31:41.109 }' 00:31:41.109 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:41.110 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:41.110 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 150437 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 150437 ']' 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 150437 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 150437 00:31:41.367 killing process with pid 150437 00:31:41.367 Received shutdown signal, test time was about 26.416136 seconds 00:31:41.367 00:31:41.367 Latency(us) 00:31:41.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.367 =================================================================================================================== 00:31:41.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 150437' 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 150437 00:31:41.367 21:45:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 150437 00:31:41.367 
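The NOT wrapper exercised above only asserts that re-adding BaseBdev1 is rejected, which is what the JSON-RPC error response (code -22) in the trace shows. A minimal way to reproduce that negative check outside the harness, assuming the same socket and bdev names from this run, is to invoke the RPC directly and branch on its exit status (rpc.py exits non-zero when the server returns an error):

  # Sketch only: the add must fail for the test to pass.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  if "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
      echo "unexpected success: BaseBdev1 should not be re-added" >&2
      exit 1
  else
      echo "bdev_raid_add_base_bdev failed as expected"
  fi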
[2024-07-15 21:45:14.520948] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:41.367 [2024-07-15 21:45:14.521127] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:41.367 [2024-07-15 21:45:14.521223] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:41.367 [2024-07-15 21:45:14.521238] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:31:41.932 [2024-07-15 21:45:15.040706] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:43.315 ************************************ 00:31:43.315 END TEST raid_rebuild_test_sb_io 00:31:43.315 ************************************ 00:31:43.315 21:45:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:43.315 00:31:43.315 real 0m33.466s 00:31:43.315 user 0m52.561s 00:31:43.315 sys 0m3.762s 00:31:43.315 21:45:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:43.315 21:45:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:43.315 21:45:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:43.315 21:45:16 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:31:43.315 21:45:16 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:31:43.315 21:45:16 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:31:43.315 21:45:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:43.315 21:45:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:43.315 21:45:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:43.315 ************************************ 00:31:43.315 START TEST raid5f_state_function_test 00:31:43.315 ************************************ 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 false 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:43.315 
21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=151416 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 151416' 00:31:43.315 Process raid pid: 151416 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 151416 /var/tmp/spdk-raid.sock 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 151416 ']' 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:43.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:43.315 21:45:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.315 [2024-07-15 21:45:16.673580] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:31:43.315 [2024-07-15 21:45:16.673741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.574 [2024-07-15 21:45:16.823668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.832 [2024-07-15 21:45:17.035142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.090 [2024-07-15 21:45:17.253636] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:44.349 [2024-07-15 21:45:17.693038] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:44.349 [2024-07-15 21:45:17.693125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:44.349 [2024-07-15 21:45:17.693135] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:44.349 [2024-07-15 21:45:17.693174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:44.349 [2024-07-15 21:45:17.693182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:44.349 [2024-07-15 21:45:17.693195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.349 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:44.607 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:44.607 "name": "Existed_Raid", 00:31:44.608 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:44.608 "strip_size_kb": 64, 00:31:44.608 "state": "configuring", 00:31:44.608 "raid_level": "raid5f", 00:31:44.608 "superblock": false, 00:31:44.608 "num_base_bdevs": 3, 00:31:44.608 "num_base_bdevs_discovered": 0, 00:31:44.608 "num_base_bdevs_operational": 3, 00:31:44.608 "base_bdevs_list": [ 00:31:44.608 { 00:31:44.608 "name": "BaseBdev1", 00:31:44.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.608 "is_configured": false, 00:31:44.608 "data_offset": 0, 00:31:44.608 "data_size": 0 00:31:44.608 }, 00:31:44.608 { 00:31:44.608 "name": "BaseBdev2", 00:31:44.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.608 "is_configured": false, 00:31:44.608 "data_offset": 0, 00:31:44.608 "data_size": 0 00:31:44.608 }, 00:31:44.608 { 00:31:44.608 "name": "BaseBdev3", 00:31:44.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.608 "is_configured": false, 00:31:44.608 "data_offset": 0, 00:31:44.608 "data_size": 0 00:31:44.608 } 00:31:44.608 ] 00:31:44.608 }' 00:31:44.608 21:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:44.608 21:45:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.569 21:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:45.569 [2024-07-15 21:45:18.799200] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:45.569 [2024-07-15 21:45:18.799261] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:31:45.569 21:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:45.829 [2024-07-15 21:45:19.002840] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:45.829 [2024-07-15 21:45:19.002910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:45.829 [2024-07-15 21:45:19.002921] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:45.829 [2024-07-15 21:45:19.002949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:45.829 [2024-07-15 21:45:19.002955] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:45.829 [2024-07-15 21:45:19.002972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:45.829 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:46.088 [2024-07-15 21:45:19.244305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:46.088 BaseBdev1 00:31:46.088 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:46.088 21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:46.088 21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:46.088 21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:46.088 
21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:46.088 21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:46.088 21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:46.348 [ 00:31:46.348 { 00:31:46.348 "name": "BaseBdev1", 00:31:46.348 "aliases": [ 00:31:46.348 "c6a9c587-ab7c-4d12-999e-55260669b70a" 00:31:46.348 ], 00:31:46.348 "product_name": "Malloc disk", 00:31:46.348 "block_size": 512, 00:31:46.348 "num_blocks": 65536, 00:31:46.348 "uuid": "c6a9c587-ab7c-4d12-999e-55260669b70a", 00:31:46.348 "assigned_rate_limits": { 00:31:46.348 "rw_ios_per_sec": 0, 00:31:46.348 "rw_mbytes_per_sec": 0, 00:31:46.348 "r_mbytes_per_sec": 0, 00:31:46.348 "w_mbytes_per_sec": 0 00:31:46.348 }, 00:31:46.348 "claimed": true, 00:31:46.348 "claim_type": "exclusive_write", 00:31:46.348 "zoned": false, 00:31:46.348 "supported_io_types": { 00:31:46.348 "read": true, 00:31:46.348 "write": true, 00:31:46.348 "unmap": true, 00:31:46.348 "flush": true, 00:31:46.348 "reset": true, 00:31:46.348 "nvme_admin": false, 00:31:46.348 "nvme_io": false, 00:31:46.348 "nvme_io_md": false, 00:31:46.348 "write_zeroes": true, 00:31:46.348 "zcopy": true, 00:31:46.348 "get_zone_info": false, 00:31:46.348 "zone_management": false, 00:31:46.348 "zone_append": false, 00:31:46.348 "compare": false, 00:31:46.348 "compare_and_write": false, 00:31:46.348 "abort": true, 00:31:46.348 "seek_hole": false, 00:31:46.348 "seek_data": false, 00:31:46.348 "copy": true, 00:31:46.348 "nvme_iov_md": false 00:31:46.348 }, 00:31:46.348 "memory_domains": [ 00:31:46.348 { 00:31:46.348 "dma_device_id": "system", 00:31:46.348 "dma_device_type": 1 00:31:46.348 }, 00:31:46.348 { 00:31:46.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.348 "dma_device_type": 2 00:31:46.348 } 00:31:46.348 ], 00:31:46.348 "driver_specific": {} 00:31:46.348 } 00:31:46.348 ] 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:46.348 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
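The verify_raid_bdev_state helper whose locals are set in the records above checks the array by fetching it over RPC and filtering with jq, as the JSON dump that follows shows. A condensed sketch of that check, using only the RPC call and jq filter that appear verbatim in this log; the individual field assertions are illustrative and do not reproduce the exact bdev_raid.sh logic:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Fetch every raid bdev and keep only Existed_Raid (same jq filter as in the log).
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

# Illustrative assertions: with only BaseBdev1 created and claimed, the array is still
# assembling, so the expected state is "configuring" with 1 of 3 base bdevs discovered.
[[ $(jq -r '.state' <<<"$info") == configuring ]]
[[ $(jq -r '.raid_level' <<<"$info") == raid5f ]]
(( $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 1 ))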
00:31:46.349 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.349 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:46.608 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:46.608 "name": "Existed_Raid", 00:31:46.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.608 "strip_size_kb": 64, 00:31:46.608 "state": "configuring", 00:31:46.608 "raid_level": "raid5f", 00:31:46.608 "superblock": false, 00:31:46.608 "num_base_bdevs": 3, 00:31:46.608 "num_base_bdevs_discovered": 1, 00:31:46.608 "num_base_bdevs_operational": 3, 00:31:46.608 "base_bdevs_list": [ 00:31:46.608 { 00:31:46.608 "name": "BaseBdev1", 00:31:46.608 "uuid": "c6a9c587-ab7c-4d12-999e-55260669b70a", 00:31:46.608 "is_configured": true, 00:31:46.608 "data_offset": 0, 00:31:46.608 "data_size": 65536 00:31:46.608 }, 00:31:46.608 { 00:31:46.608 "name": "BaseBdev2", 00:31:46.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.608 "is_configured": false, 00:31:46.608 "data_offset": 0, 00:31:46.608 "data_size": 0 00:31:46.608 }, 00:31:46.608 { 00:31:46.608 "name": "BaseBdev3", 00:31:46.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.608 "is_configured": false, 00:31:46.608 "data_offset": 0, 00:31:46.608 "data_size": 0 00:31:46.608 } 00:31:46.608 ] 00:31:46.608 }' 00:31:46.608 21:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:46.608 21:45:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.543 21:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:47.543 [2024-07-15 21:45:20.749814] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:47.543 [2024-07-15 21:45:20.749870] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:31:47.544 21:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:47.801 [2024-07-15 21:45:20.989459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:47.801 [2024-07-15 21:45:20.991184] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:47.801 [2024-07-15 21:45:20.991270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:47.801 [2024-07-15 21:45:20.991280] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:47.802 [2024-07-15 21:45:20.991309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.802 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:48.060 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:48.060 "name": "Existed_Raid", 00:31:48.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.060 "strip_size_kb": 64, 00:31:48.060 "state": "configuring", 00:31:48.060 "raid_level": "raid5f", 00:31:48.060 "superblock": false, 00:31:48.060 "num_base_bdevs": 3, 00:31:48.060 "num_base_bdevs_discovered": 1, 00:31:48.060 "num_base_bdevs_operational": 3, 00:31:48.060 "base_bdevs_list": [ 00:31:48.060 { 00:31:48.060 "name": "BaseBdev1", 00:31:48.060 "uuid": "c6a9c587-ab7c-4d12-999e-55260669b70a", 00:31:48.060 "is_configured": true, 00:31:48.060 "data_offset": 0, 00:31:48.060 "data_size": 65536 00:31:48.060 }, 00:31:48.060 { 00:31:48.060 "name": "BaseBdev2", 00:31:48.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.060 "is_configured": false, 00:31:48.060 "data_offset": 0, 00:31:48.060 "data_size": 0 00:31:48.060 }, 00:31:48.060 { 00:31:48.060 "name": "BaseBdev3", 00:31:48.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.060 "is_configured": false, 00:31:48.060 "data_offset": 0, 00:31:48.060 "data_size": 0 00:31:48.060 } 00:31:48.060 ] 00:31:48.060 }' 00:31:48.060 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:48.060 21:45:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.628 21:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:48.887 [2024-07-15 21:45:22.159493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:48.887 BaseBdev2 00:31:48.887 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:48.887 21:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:48.887 21:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:48.887 21:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:48.887 21:45:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:48.887 21:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:48.887 21:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:49.145 21:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:49.403 [ 00:31:49.403 { 00:31:49.403 "name": "BaseBdev2", 00:31:49.403 "aliases": [ 00:31:49.403 "7e5e886b-6314-406d-b64d-de6df50732b9" 00:31:49.403 ], 00:31:49.403 "product_name": "Malloc disk", 00:31:49.403 "block_size": 512, 00:31:49.403 "num_blocks": 65536, 00:31:49.403 "uuid": "7e5e886b-6314-406d-b64d-de6df50732b9", 00:31:49.403 "assigned_rate_limits": { 00:31:49.403 "rw_ios_per_sec": 0, 00:31:49.403 "rw_mbytes_per_sec": 0, 00:31:49.403 "r_mbytes_per_sec": 0, 00:31:49.403 "w_mbytes_per_sec": 0 00:31:49.403 }, 00:31:49.403 "claimed": true, 00:31:49.403 "claim_type": "exclusive_write", 00:31:49.403 "zoned": false, 00:31:49.403 "supported_io_types": { 00:31:49.403 "read": true, 00:31:49.403 "write": true, 00:31:49.403 "unmap": true, 00:31:49.403 "flush": true, 00:31:49.403 "reset": true, 00:31:49.403 "nvme_admin": false, 00:31:49.403 "nvme_io": false, 00:31:49.403 "nvme_io_md": false, 00:31:49.403 "write_zeroes": true, 00:31:49.403 "zcopy": true, 00:31:49.403 "get_zone_info": false, 00:31:49.403 "zone_management": false, 00:31:49.403 "zone_append": false, 00:31:49.403 "compare": false, 00:31:49.403 "compare_and_write": false, 00:31:49.403 "abort": true, 00:31:49.403 "seek_hole": false, 00:31:49.403 "seek_data": false, 00:31:49.403 "copy": true, 00:31:49.403 "nvme_iov_md": false 00:31:49.403 }, 00:31:49.403 "memory_domains": [ 00:31:49.403 { 00:31:49.403 "dma_device_id": "system", 00:31:49.403 "dma_device_type": 1 00:31:49.403 }, 00:31:49.403 { 00:31:49.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:49.403 "dma_device_type": 2 00:31:49.403 } 00:31:49.403 ], 00:31:49.403 "driver_specific": {} 00:31:49.403 } 00:31:49.403 ] 00:31:49.403 21:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:49.403 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.404 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:49.662 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:49.662 "name": "Existed_Raid", 00:31:49.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.662 "strip_size_kb": 64, 00:31:49.662 "state": "configuring", 00:31:49.662 "raid_level": "raid5f", 00:31:49.662 "superblock": false, 00:31:49.662 "num_base_bdevs": 3, 00:31:49.662 "num_base_bdevs_discovered": 2, 00:31:49.662 "num_base_bdevs_operational": 3, 00:31:49.662 "base_bdevs_list": [ 00:31:49.662 { 00:31:49.662 "name": "BaseBdev1", 00:31:49.662 "uuid": "c6a9c587-ab7c-4d12-999e-55260669b70a", 00:31:49.662 "is_configured": true, 00:31:49.662 "data_offset": 0, 00:31:49.662 "data_size": 65536 00:31:49.662 }, 00:31:49.662 { 00:31:49.662 "name": "BaseBdev2", 00:31:49.662 "uuid": "7e5e886b-6314-406d-b64d-de6df50732b9", 00:31:49.662 "is_configured": true, 00:31:49.662 "data_offset": 0, 00:31:49.662 "data_size": 65536 00:31:49.662 }, 00:31:49.662 { 00:31:49.662 "name": "BaseBdev3", 00:31:49.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.662 "is_configured": false, 00:31:49.662 "data_offset": 0, 00:31:49.662 "data_size": 0 00:31:49.662 } 00:31:49.662 ] 00:31:49.662 }' 00:31:49.662 21:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:49.662 21:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.228 21:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:50.486 [2024-07-15 21:45:23.780639] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:50.486 [2024-07-15 21:45:23.780716] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:31:50.486 [2024-07-15 21:45:23.780726] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:50.486 [2024-07-15 21:45:23.780873] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:31:50.486 [2024-07-15 21:45:23.787394] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:31:50.486 [2024-07-15 21:45:23.787423] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:31:50.486 [2024-07-15 21:45:23.787708] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:50.486 BaseBdev3 00:31:50.486 21:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:31:50.486 21:45:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:31:50.486 21:45:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:50.486 21:45:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:50.486 21:45:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:50.487 21:45:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:50.487 21:45:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:50.744 21:45:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:51.004 [ 00:31:51.004 { 00:31:51.004 "name": "BaseBdev3", 00:31:51.004 "aliases": [ 00:31:51.004 "1bfb77dd-5487-46f8-b404-77a9d0f6d38a" 00:31:51.004 ], 00:31:51.004 "product_name": "Malloc disk", 00:31:51.004 "block_size": 512, 00:31:51.004 "num_blocks": 65536, 00:31:51.004 "uuid": "1bfb77dd-5487-46f8-b404-77a9d0f6d38a", 00:31:51.004 "assigned_rate_limits": { 00:31:51.004 "rw_ios_per_sec": 0, 00:31:51.004 "rw_mbytes_per_sec": 0, 00:31:51.004 "r_mbytes_per_sec": 0, 00:31:51.004 "w_mbytes_per_sec": 0 00:31:51.004 }, 00:31:51.004 "claimed": true, 00:31:51.004 "claim_type": "exclusive_write", 00:31:51.004 "zoned": false, 00:31:51.004 "supported_io_types": { 00:31:51.004 "read": true, 00:31:51.004 "write": true, 00:31:51.004 "unmap": true, 00:31:51.004 "flush": true, 00:31:51.004 "reset": true, 00:31:51.004 "nvme_admin": false, 00:31:51.004 "nvme_io": false, 00:31:51.004 "nvme_io_md": false, 00:31:51.004 "write_zeroes": true, 00:31:51.004 "zcopy": true, 00:31:51.004 "get_zone_info": false, 00:31:51.004 "zone_management": false, 00:31:51.004 "zone_append": false, 00:31:51.004 "compare": false, 00:31:51.004 "compare_and_write": false, 00:31:51.004 "abort": true, 00:31:51.004 "seek_hole": false, 00:31:51.004 "seek_data": false, 00:31:51.004 "copy": true, 00:31:51.004 "nvme_iov_md": false 00:31:51.004 }, 00:31:51.004 "memory_domains": [ 00:31:51.004 { 00:31:51.004 "dma_device_id": "system", 00:31:51.004 "dma_device_type": 1 00:31:51.004 }, 00:31:51.004 { 00:31:51.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.004 "dma_device_type": 2 00:31:51.004 } 00:31:51.004 ], 00:31:51.004 "driver_specific": {} 00:31:51.004 } 00:31:51.004 ] 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.004 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.264 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:51.264 "name": "Existed_Raid", 00:31:51.264 "uuid": "4a9da2b2-d553-42d0-9846-f6ba7d6a32ac", 00:31:51.264 "strip_size_kb": 64, 00:31:51.264 "state": "online", 00:31:51.264 "raid_level": "raid5f", 00:31:51.264 "superblock": false, 00:31:51.264 "num_base_bdevs": 3, 00:31:51.264 "num_base_bdevs_discovered": 3, 00:31:51.264 "num_base_bdevs_operational": 3, 00:31:51.264 "base_bdevs_list": [ 00:31:51.264 { 00:31:51.264 "name": "BaseBdev1", 00:31:51.264 "uuid": "c6a9c587-ab7c-4d12-999e-55260669b70a", 00:31:51.264 "is_configured": true, 00:31:51.264 "data_offset": 0, 00:31:51.264 "data_size": 65536 00:31:51.264 }, 00:31:51.264 { 00:31:51.264 "name": "BaseBdev2", 00:31:51.264 "uuid": "7e5e886b-6314-406d-b64d-de6df50732b9", 00:31:51.264 "is_configured": true, 00:31:51.264 "data_offset": 0, 00:31:51.264 "data_size": 65536 00:31:51.264 }, 00:31:51.264 { 00:31:51.264 "name": "BaseBdev3", 00:31:51.264 "uuid": "1bfb77dd-5487-46f8-b404-77a9d0f6d38a", 00:31:51.264 "is_configured": true, 00:31:51.264 "data_offset": 0, 00:31:51.264 "data_size": 65536 00:31:51.264 } 00:31:51.264 ] 00:31:51.264 }' 00:31:51.264 21:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:51.264 21:45:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.857 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:51.857 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:51.857 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:51.857 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:51.857 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:51.857 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:51.857 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:51.857 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:52.115 [2024-07-15 21:45:25.344639] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:52.115 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:52.115 "name": "Existed_Raid", 00:31:52.115 "aliases": [ 00:31:52.115 "4a9da2b2-d553-42d0-9846-f6ba7d6a32ac" 00:31:52.115 ], 00:31:52.115 "product_name": "Raid Volume", 00:31:52.115 "block_size": 512, 00:31:52.115 "num_blocks": 131072, 00:31:52.115 "uuid": "4a9da2b2-d553-42d0-9846-f6ba7d6a32ac", 00:31:52.115 "assigned_rate_limits": { 00:31:52.115 "rw_ios_per_sec": 0, 00:31:52.115 "rw_mbytes_per_sec": 0, 00:31:52.115 "r_mbytes_per_sec": 0, 00:31:52.115 
"w_mbytes_per_sec": 0 00:31:52.115 }, 00:31:52.115 "claimed": false, 00:31:52.115 "zoned": false, 00:31:52.115 "supported_io_types": { 00:31:52.115 "read": true, 00:31:52.115 "write": true, 00:31:52.115 "unmap": false, 00:31:52.115 "flush": false, 00:31:52.115 "reset": true, 00:31:52.115 "nvme_admin": false, 00:31:52.115 "nvme_io": false, 00:31:52.115 "nvme_io_md": false, 00:31:52.115 "write_zeroes": true, 00:31:52.115 "zcopy": false, 00:31:52.115 "get_zone_info": false, 00:31:52.115 "zone_management": false, 00:31:52.115 "zone_append": false, 00:31:52.115 "compare": false, 00:31:52.115 "compare_and_write": false, 00:31:52.115 "abort": false, 00:31:52.115 "seek_hole": false, 00:31:52.115 "seek_data": false, 00:31:52.115 "copy": false, 00:31:52.115 "nvme_iov_md": false 00:31:52.115 }, 00:31:52.115 "driver_specific": { 00:31:52.115 "raid": { 00:31:52.115 "uuid": "4a9da2b2-d553-42d0-9846-f6ba7d6a32ac", 00:31:52.115 "strip_size_kb": 64, 00:31:52.115 "state": "online", 00:31:52.115 "raid_level": "raid5f", 00:31:52.115 "superblock": false, 00:31:52.115 "num_base_bdevs": 3, 00:31:52.115 "num_base_bdevs_discovered": 3, 00:31:52.115 "num_base_bdevs_operational": 3, 00:31:52.115 "base_bdevs_list": [ 00:31:52.115 { 00:31:52.115 "name": "BaseBdev1", 00:31:52.115 "uuid": "c6a9c587-ab7c-4d12-999e-55260669b70a", 00:31:52.115 "is_configured": true, 00:31:52.115 "data_offset": 0, 00:31:52.115 "data_size": 65536 00:31:52.115 }, 00:31:52.115 { 00:31:52.115 "name": "BaseBdev2", 00:31:52.115 "uuid": "7e5e886b-6314-406d-b64d-de6df50732b9", 00:31:52.115 "is_configured": true, 00:31:52.115 "data_offset": 0, 00:31:52.115 "data_size": 65536 00:31:52.115 }, 00:31:52.115 { 00:31:52.115 "name": "BaseBdev3", 00:31:52.115 "uuid": "1bfb77dd-5487-46f8-b404-77a9d0f6d38a", 00:31:52.115 "is_configured": true, 00:31:52.115 "data_offset": 0, 00:31:52.115 "data_size": 65536 00:31:52.115 } 00:31:52.115 ] 00:31:52.115 } 00:31:52.115 } 00:31:52.115 }' 00:31:52.115 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:52.115 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:52.115 BaseBdev2 00:31:52.115 BaseBdev3' 00:31:52.115 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:52.115 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:52.115 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:52.375 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:52.375 "name": "BaseBdev1", 00:31:52.375 "aliases": [ 00:31:52.375 "c6a9c587-ab7c-4d12-999e-55260669b70a" 00:31:52.375 ], 00:31:52.375 "product_name": "Malloc disk", 00:31:52.375 "block_size": 512, 00:31:52.375 "num_blocks": 65536, 00:31:52.375 "uuid": "c6a9c587-ab7c-4d12-999e-55260669b70a", 00:31:52.375 "assigned_rate_limits": { 00:31:52.375 "rw_ios_per_sec": 0, 00:31:52.375 "rw_mbytes_per_sec": 0, 00:31:52.375 "r_mbytes_per_sec": 0, 00:31:52.375 "w_mbytes_per_sec": 0 00:31:52.375 }, 00:31:52.375 "claimed": true, 00:31:52.375 "claim_type": "exclusive_write", 00:31:52.375 "zoned": false, 00:31:52.375 "supported_io_types": { 00:31:52.375 "read": true, 00:31:52.375 "write": true, 00:31:52.375 "unmap": true, 00:31:52.375 "flush": true, 00:31:52.375 
"reset": true, 00:31:52.375 "nvme_admin": false, 00:31:52.375 "nvme_io": false, 00:31:52.375 "nvme_io_md": false, 00:31:52.375 "write_zeroes": true, 00:31:52.375 "zcopy": true, 00:31:52.375 "get_zone_info": false, 00:31:52.375 "zone_management": false, 00:31:52.375 "zone_append": false, 00:31:52.375 "compare": false, 00:31:52.375 "compare_and_write": false, 00:31:52.375 "abort": true, 00:31:52.375 "seek_hole": false, 00:31:52.375 "seek_data": false, 00:31:52.375 "copy": true, 00:31:52.375 "nvme_iov_md": false 00:31:52.375 }, 00:31:52.375 "memory_domains": [ 00:31:52.375 { 00:31:52.375 "dma_device_id": "system", 00:31:52.375 "dma_device_type": 1 00:31:52.375 }, 00:31:52.375 { 00:31:52.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:52.375 "dma_device_type": 2 00:31:52.375 } 00:31:52.375 ], 00:31:52.375 "driver_specific": {} 00:31:52.375 }' 00:31:52.375 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:52.375 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:52.375 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:52.375 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:52.634 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:52.634 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:52.634 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:52.634 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:52.634 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:52.634 21:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:52.939 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:52.939 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:52.939 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:52.939 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:52.939 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:52.939 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:52.939 "name": "BaseBdev2", 00:31:52.939 "aliases": [ 00:31:52.939 "7e5e886b-6314-406d-b64d-de6df50732b9" 00:31:52.939 ], 00:31:52.939 "product_name": "Malloc disk", 00:31:52.939 "block_size": 512, 00:31:52.939 "num_blocks": 65536, 00:31:52.939 "uuid": "7e5e886b-6314-406d-b64d-de6df50732b9", 00:31:52.939 "assigned_rate_limits": { 00:31:52.939 "rw_ios_per_sec": 0, 00:31:52.939 "rw_mbytes_per_sec": 0, 00:31:52.939 "r_mbytes_per_sec": 0, 00:31:52.939 "w_mbytes_per_sec": 0 00:31:52.939 }, 00:31:52.939 "claimed": true, 00:31:52.939 "claim_type": "exclusive_write", 00:31:52.939 "zoned": false, 00:31:52.939 "supported_io_types": { 00:31:52.939 "read": true, 00:31:52.939 "write": true, 00:31:52.939 "unmap": true, 00:31:52.939 "flush": true, 00:31:52.939 "reset": true, 00:31:52.939 "nvme_admin": false, 00:31:52.939 "nvme_io": false, 00:31:52.939 "nvme_io_md": false, 00:31:52.939 "write_zeroes": true, 00:31:52.939 
"zcopy": true, 00:31:52.939 "get_zone_info": false, 00:31:52.939 "zone_management": false, 00:31:52.939 "zone_append": false, 00:31:52.939 "compare": false, 00:31:52.939 "compare_and_write": false, 00:31:52.939 "abort": true, 00:31:52.939 "seek_hole": false, 00:31:52.939 "seek_data": false, 00:31:52.939 "copy": true, 00:31:52.939 "nvme_iov_md": false 00:31:52.939 }, 00:31:52.939 "memory_domains": [ 00:31:52.939 { 00:31:52.939 "dma_device_id": "system", 00:31:52.939 "dma_device_type": 1 00:31:52.939 }, 00:31:52.939 { 00:31:52.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:52.939 "dma_device_type": 2 00:31:52.939 } 00:31:52.939 ], 00:31:52.939 "driver_specific": {} 00:31:52.939 }' 00:31:52.939 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:53.198 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:53.198 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:53.198 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:53.198 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:53.198 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:53.198 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:53.198 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:53.456 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:53.456 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:53.456 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:53.456 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:53.456 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:53.456 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:53.456 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:53.714 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:53.714 "name": "BaseBdev3", 00:31:53.714 "aliases": [ 00:31:53.714 "1bfb77dd-5487-46f8-b404-77a9d0f6d38a" 00:31:53.714 ], 00:31:53.714 "product_name": "Malloc disk", 00:31:53.714 "block_size": 512, 00:31:53.714 "num_blocks": 65536, 00:31:53.714 "uuid": "1bfb77dd-5487-46f8-b404-77a9d0f6d38a", 00:31:53.714 "assigned_rate_limits": { 00:31:53.714 "rw_ios_per_sec": 0, 00:31:53.714 "rw_mbytes_per_sec": 0, 00:31:53.714 "r_mbytes_per_sec": 0, 00:31:53.714 "w_mbytes_per_sec": 0 00:31:53.714 }, 00:31:53.714 "claimed": true, 00:31:53.714 "claim_type": "exclusive_write", 00:31:53.714 "zoned": false, 00:31:53.714 "supported_io_types": { 00:31:53.714 "read": true, 00:31:53.714 "write": true, 00:31:53.714 "unmap": true, 00:31:53.714 "flush": true, 00:31:53.714 "reset": true, 00:31:53.714 "nvme_admin": false, 00:31:53.714 "nvme_io": false, 00:31:53.714 "nvme_io_md": false, 00:31:53.714 "write_zeroes": true, 00:31:53.714 "zcopy": true, 00:31:53.714 "get_zone_info": false, 00:31:53.714 "zone_management": false, 00:31:53.714 "zone_append": false, 00:31:53.714 "compare": false, 
00:31:53.714 "compare_and_write": false, 00:31:53.714 "abort": true, 00:31:53.714 "seek_hole": false, 00:31:53.714 "seek_data": false, 00:31:53.714 "copy": true, 00:31:53.714 "nvme_iov_md": false 00:31:53.714 }, 00:31:53.714 "memory_domains": [ 00:31:53.714 { 00:31:53.714 "dma_device_id": "system", 00:31:53.714 "dma_device_type": 1 00:31:53.714 }, 00:31:53.714 { 00:31:53.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.714 "dma_device_type": 2 00:31:53.714 } 00:31:53.714 ], 00:31:53.714 "driver_specific": {} 00:31:53.714 }' 00:31:53.714 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:53.714 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:53.714 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:53.714 21:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:53.714 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:53.714 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:53.714 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:53.973 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:53.973 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:53.973 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:53.973 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:53.973 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:53.973 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:54.231 [2024-07-15 21:45:27.448977] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.231 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:54.498 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.498 "name": "Existed_Raid", 00:31:54.498 "uuid": "4a9da2b2-d553-42d0-9846-f6ba7d6a32ac", 00:31:54.498 "strip_size_kb": 64, 00:31:54.498 "state": "online", 00:31:54.498 "raid_level": "raid5f", 00:31:54.498 "superblock": false, 00:31:54.498 "num_base_bdevs": 3, 00:31:54.498 "num_base_bdevs_discovered": 2, 00:31:54.498 "num_base_bdevs_operational": 2, 00:31:54.498 "base_bdevs_list": [ 00:31:54.498 { 00:31:54.498 "name": null, 00:31:54.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.498 "is_configured": false, 00:31:54.498 "data_offset": 0, 00:31:54.498 "data_size": 65536 00:31:54.498 }, 00:31:54.498 { 00:31:54.498 "name": "BaseBdev2", 00:31:54.498 "uuid": "7e5e886b-6314-406d-b64d-de6df50732b9", 00:31:54.498 "is_configured": true, 00:31:54.498 "data_offset": 0, 00:31:54.498 "data_size": 65536 00:31:54.498 }, 00:31:54.498 { 00:31:54.498 "name": "BaseBdev3", 00:31:54.498 "uuid": "1bfb77dd-5487-46f8-b404-77a9d0f6d38a", 00:31:54.498 "is_configured": true, 00:31:54.498 "data_offset": 0, 00:31:54.498 "data_size": 65536 00:31:54.498 } 00:31:54.498 ] 00:31:54.498 }' 00:31:54.498 21:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.498 21:45:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.109 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:55.109 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:55.109 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.109 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:55.367 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:55.367 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:55.367 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:55.625 [2024-07-15 21:45:28.817387] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:55.625 [2024-07-15 21:45:28.817505] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:55.625 [2024-07-15 21:45:28.921494] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:55.625 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:55.625 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:55.625 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.625 21:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:55.883 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:55.883 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:55.883 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:31:56.162 [2024-07-15 21:45:29.368826] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:56.162 [2024-07-15 21:45:29.368907] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:31:56.162 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:56.162 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:56.162 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.162 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:56.420 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:56.420 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:56.420 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:31:56.420 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:31:56.420 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:56.420 21:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:56.678 BaseBdev2 00:31:56.678 21:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:31:56.678 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:56.678 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:56.678 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:56.678 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:56.678 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:56.678 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:56.936 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:57.194 [ 00:31:57.194 { 00:31:57.194 "name": "BaseBdev2", 00:31:57.194 "aliases": [ 00:31:57.194 "7b2a21d6-f078-49a0-9d00-08d61b5e34f8" 00:31:57.194 ], 00:31:57.194 "product_name": "Malloc disk", 00:31:57.194 "block_size": 512, 00:31:57.194 "num_blocks": 65536, 00:31:57.194 "uuid": 
"7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:31:57.194 "assigned_rate_limits": { 00:31:57.194 "rw_ios_per_sec": 0, 00:31:57.194 "rw_mbytes_per_sec": 0, 00:31:57.194 "r_mbytes_per_sec": 0, 00:31:57.194 "w_mbytes_per_sec": 0 00:31:57.194 }, 00:31:57.194 "claimed": false, 00:31:57.194 "zoned": false, 00:31:57.194 "supported_io_types": { 00:31:57.194 "read": true, 00:31:57.194 "write": true, 00:31:57.194 "unmap": true, 00:31:57.194 "flush": true, 00:31:57.194 "reset": true, 00:31:57.194 "nvme_admin": false, 00:31:57.194 "nvme_io": false, 00:31:57.194 "nvme_io_md": false, 00:31:57.194 "write_zeroes": true, 00:31:57.194 "zcopy": true, 00:31:57.194 "get_zone_info": false, 00:31:57.194 "zone_management": false, 00:31:57.194 "zone_append": false, 00:31:57.194 "compare": false, 00:31:57.194 "compare_and_write": false, 00:31:57.194 "abort": true, 00:31:57.194 "seek_hole": false, 00:31:57.194 "seek_data": false, 00:31:57.194 "copy": true, 00:31:57.194 "nvme_iov_md": false 00:31:57.194 }, 00:31:57.194 "memory_domains": [ 00:31:57.194 { 00:31:57.194 "dma_device_id": "system", 00:31:57.194 "dma_device_type": 1 00:31:57.194 }, 00:31:57.194 { 00:31:57.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.194 "dma_device_type": 2 00:31:57.194 } 00:31:57.194 ], 00:31:57.194 "driver_specific": {} 00:31:57.194 } 00:31:57.194 ] 00:31:57.194 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:57.194 21:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:57.194 21:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:57.194 21:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:57.453 BaseBdev3 00:31:57.453 21:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:31:57.453 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:31:57.453 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:57.453 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:57.453 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:57.453 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:57.453 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:57.711 21:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:57.970 [ 00:31:57.970 { 00:31:57.970 "name": "BaseBdev3", 00:31:57.970 "aliases": [ 00:31:57.970 "89c94d78-df4f-47a8-bcc5-edeb6835c0f6" 00:31:57.970 ], 00:31:57.970 "product_name": "Malloc disk", 00:31:57.970 "block_size": 512, 00:31:57.970 "num_blocks": 65536, 00:31:57.970 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:31:57.970 "assigned_rate_limits": { 00:31:57.970 "rw_ios_per_sec": 0, 00:31:57.970 "rw_mbytes_per_sec": 0, 00:31:57.970 "r_mbytes_per_sec": 0, 00:31:57.970 "w_mbytes_per_sec": 0 00:31:57.970 }, 00:31:57.970 "claimed": false, 00:31:57.970 "zoned": false, 00:31:57.970 
"supported_io_types": { 00:31:57.970 "read": true, 00:31:57.970 "write": true, 00:31:57.970 "unmap": true, 00:31:57.970 "flush": true, 00:31:57.970 "reset": true, 00:31:57.970 "nvme_admin": false, 00:31:57.970 "nvme_io": false, 00:31:57.970 "nvme_io_md": false, 00:31:57.970 "write_zeroes": true, 00:31:57.970 "zcopy": true, 00:31:57.970 "get_zone_info": false, 00:31:57.970 "zone_management": false, 00:31:57.970 "zone_append": false, 00:31:57.970 "compare": false, 00:31:57.970 "compare_and_write": false, 00:31:57.970 "abort": true, 00:31:57.970 "seek_hole": false, 00:31:57.970 "seek_data": false, 00:31:57.970 "copy": true, 00:31:57.970 "nvme_iov_md": false 00:31:57.970 }, 00:31:57.970 "memory_domains": [ 00:31:57.970 { 00:31:57.970 "dma_device_id": "system", 00:31:57.970 "dma_device_type": 1 00:31:57.970 }, 00:31:57.970 { 00:31:57.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.970 "dma_device_type": 2 00:31:57.970 } 00:31:57.970 ], 00:31:57.970 "driver_specific": {} 00:31:57.970 } 00:31:57.970 ] 00:31:57.970 21:45:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:57.970 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:57.970 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:57.970 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:57.971 [2024-07-15 21:45:31.332743] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:57.971 [2024-07-15 21:45:31.332814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:57.971 [2024-07-15 21:45:31.332859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:57.971 [2024-07-15 21:45:31.334677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.229 "name": "Existed_Raid", 00:31:58.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.229 "strip_size_kb": 64, 00:31:58.229 "state": "configuring", 00:31:58.229 "raid_level": "raid5f", 00:31:58.229 "superblock": false, 00:31:58.229 "num_base_bdevs": 3, 00:31:58.229 "num_base_bdevs_discovered": 2, 00:31:58.229 "num_base_bdevs_operational": 3, 00:31:58.229 "base_bdevs_list": [ 00:31:58.229 { 00:31:58.229 "name": "BaseBdev1", 00:31:58.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.229 "is_configured": false, 00:31:58.229 "data_offset": 0, 00:31:58.229 "data_size": 0 00:31:58.229 }, 00:31:58.229 { 00:31:58.229 "name": "BaseBdev2", 00:31:58.229 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:31:58.229 "is_configured": true, 00:31:58.229 "data_offset": 0, 00:31:58.229 "data_size": 65536 00:31:58.229 }, 00:31:58.229 { 00:31:58.229 "name": "BaseBdev3", 00:31:58.229 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:31:58.229 "is_configured": true, 00:31:58.229 "data_offset": 0, 00:31:58.229 "data_size": 65536 00:31:58.229 } 00:31:58.229 ] 00:31:58.229 }' 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.229 21:45:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:59.165 [2024-07-15 21:45:32.382941] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.165 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.423 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.423 "name": "Existed_Raid", 00:31:59.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.423 "strip_size_kb": 64, 00:31:59.423 "state": "configuring", 00:31:59.423 
"raid_level": "raid5f", 00:31:59.423 "superblock": false, 00:31:59.423 "num_base_bdevs": 3, 00:31:59.423 "num_base_bdevs_discovered": 1, 00:31:59.423 "num_base_bdevs_operational": 3, 00:31:59.423 "base_bdevs_list": [ 00:31:59.423 { 00:31:59.423 "name": "BaseBdev1", 00:31:59.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.423 "is_configured": false, 00:31:59.423 "data_offset": 0, 00:31:59.423 "data_size": 0 00:31:59.423 }, 00:31:59.423 { 00:31:59.423 "name": null, 00:31:59.423 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:31:59.423 "is_configured": false, 00:31:59.423 "data_offset": 0, 00:31:59.423 "data_size": 65536 00:31:59.423 }, 00:31:59.423 { 00:31:59.423 "name": "BaseBdev3", 00:31:59.423 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:31:59.423 "is_configured": true, 00:31:59.423 "data_offset": 0, 00:31:59.423 "data_size": 65536 00:31:59.423 } 00:31:59.423 ] 00:31:59.423 }' 00:31:59.423 21:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.423 21:45:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.987 21:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:59.987 21:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.244 21:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:32:00.244 21:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:00.528 [2024-07-15 21:45:33.639930] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:00.528 BaseBdev1 00:32:00.528 21:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:32:00.528 21:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:00.528 21:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:00.528 21:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:00.528 21:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:00.528 21:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:00.528 21:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:00.528 21:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:00.793 [ 00:32:00.793 { 00:32:00.793 "name": "BaseBdev1", 00:32:00.793 "aliases": [ 00:32:00.793 "fb0026fc-a2ab-47ce-911f-10a2d3c085a0" 00:32:00.793 ], 00:32:00.793 "product_name": "Malloc disk", 00:32:00.793 "block_size": 512, 00:32:00.793 "num_blocks": 65536, 00:32:00.793 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:00.793 "assigned_rate_limits": { 00:32:00.793 "rw_ios_per_sec": 0, 00:32:00.793 "rw_mbytes_per_sec": 0, 00:32:00.793 "r_mbytes_per_sec": 0, 00:32:00.793 "w_mbytes_per_sec": 0 00:32:00.793 }, 00:32:00.793 "claimed": true, 00:32:00.793 "claim_type": 
"exclusive_write", 00:32:00.793 "zoned": false, 00:32:00.793 "supported_io_types": { 00:32:00.793 "read": true, 00:32:00.793 "write": true, 00:32:00.793 "unmap": true, 00:32:00.793 "flush": true, 00:32:00.793 "reset": true, 00:32:00.793 "nvme_admin": false, 00:32:00.793 "nvme_io": false, 00:32:00.793 "nvme_io_md": false, 00:32:00.793 "write_zeroes": true, 00:32:00.793 "zcopy": true, 00:32:00.793 "get_zone_info": false, 00:32:00.793 "zone_management": false, 00:32:00.793 "zone_append": false, 00:32:00.793 "compare": false, 00:32:00.793 "compare_and_write": false, 00:32:00.793 "abort": true, 00:32:00.793 "seek_hole": false, 00:32:00.793 "seek_data": false, 00:32:00.793 "copy": true, 00:32:00.793 "nvme_iov_md": false 00:32:00.793 }, 00:32:00.793 "memory_domains": [ 00:32:00.793 { 00:32:00.793 "dma_device_id": "system", 00:32:00.793 "dma_device_type": 1 00:32:00.793 }, 00:32:00.793 { 00:32:00.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.793 "dma_device_type": 2 00:32:00.793 } 00:32:00.793 ], 00:32:00.793 "driver_specific": {} 00:32:00.793 } 00:32:00.793 ] 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:00.793 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.051 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:01.051 "name": "Existed_Raid", 00:32:01.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.051 "strip_size_kb": 64, 00:32:01.051 "state": "configuring", 00:32:01.051 "raid_level": "raid5f", 00:32:01.051 "superblock": false, 00:32:01.051 "num_base_bdevs": 3, 00:32:01.051 "num_base_bdevs_discovered": 2, 00:32:01.051 "num_base_bdevs_operational": 3, 00:32:01.051 "base_bdevs_list": [ 00:32:01.051 { 00:32:01.051 "name": "BaseBdev1", 00:32:01.051 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:01.051 "is_configured": true, 00:32:01.051 "data_offset": 0, 00:32:01.051 "data_size": 65536 00:32:01.051 }, 00:32:01.051 { 00:32:01.051 "name": null, 00:32:01.051 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:32:01.051 "is_configured": false, 
00:32:01.051 "data_offset": 0, 00:32:01.051 "data_size": 65536 00:32:01.051 }, 00:32:01.051 { 00:32:01.051 "name": "BaseBdev3", 00:32:01.051 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:32:01.051 "is_configured": true, 00:32:01.051 "data_offset": 0, 00:32:01.051 "data_size": 65536 00:32:01.051 } 00:32:01.051 ] 00:32:01.051 }' 00:32:01.051 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:01.051 21:45:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.618 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.618 21:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:01.876 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:32:01.876 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:32:02.135 [2024-07-15 21:45:35.405018] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.135 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.392 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:02.392 "name": "Existed_Raid", 00:32:02.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.392 "strip_size_kb": 64, 00:32:02.392 "state": "configuring", 00:32:02.392 "raid_level": "raid5f", 00:32:02.392 "superblock": false, 00:32:02.392 "num_base_bdevs": 3, 00:32:02.392 "num_base_bdevs_discovered": 1, 00:32:02.392 "num_base_bdevs_operational": 3, 00:32:02.392 "base_bdevs_list": [ 00:32:02.392 { 00:32:02.392 "name": "BaseBdev1", 00:32:02.392 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:02.392 "is_configured": true, 00:32:02.392 "data_offset": 0, 00:32:02.392 "data_size": 65536 00:32:02.392 }, 00:32:02.392 { 00:32:02.392 "name": null, 
00:32:02.392 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:32:02.392 "is_configured": false, 00:32:02.392 "data_offset": 0, 00:32:02.392 "data_size": 65536 00:32:02.392 }, 00:32:02.392 { 00:32:02.392 "name": null, 00:32:02.392 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:32:02.392 "is_configured": false, 00:32:02.392 "data_offset": 0, 00:32:02.392 "data_size": 65536 00:32:02.392 } 00:32:02.392 ] 00:32:02.392 }' 00:32:02.392 21:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:02.392 21:45:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.957 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.957 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:03.216 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:32:03.217 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:03.475 [2024-07-15 21:45:36.690839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.475 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:03.734 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:03.735 "name": "Existed_Raid", 00:32:03.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.735 "strip_size_kb": 64, 00:32:03.735 "state": "configuring", 00:32:03.735 "raid_level": "raid5f", 00:32:03.735 "superblock": false, 00:32:03.735 "num_base_bdevs": 3, 00:32:03.735 "num_base_bdevs_discovered": 2, 00:32:03.735 "num_base_bdevs_operational": 3, 00:32:03.735 "base_bdevs_list": [ 00:32:03.735 { 00:32:03.735 "name": "BaseBdev1", 00:32:03.735 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:03.735 "is_configured": true, 
00:32:03.735 "data_offset": 0, 00:32:03.735 "data_size": 65536 00:32:03.735 }, 00:32:03.735 { 00:32:03.735 "name": null, 00:32:03.735 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:32:03.735 "is_configured": false, 00:32:03.735 "data_offset": 0, 00:32:03.735 "data_size": 65536 00:32:03.735 }, 00:32:03.735 { 00:32:03.735 "name": "BaseBdev3", 00:32:03.735 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:32:03.735 "is_configured": true, 00:32:03.735 "data_offset": 0, 00:32:03.735 "data_size": 65536 00:32:03.735 } 00:32:03.735 ] 00:32:03.735 }' 00:32:03.735 21:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:03.735 21:45:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.303 21:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.303 21:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:04.562 21:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:32:04.562 21:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:04.822 [2024-07-15 21:45:38.036610] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.822 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:05.081 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:05.081 "name": "Existed_Raid", 00:32:05.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.081 "strip_size_kb": 64, 00:32:05.081 "state": "configuring", 00:32:05.081 "raid_level": "raid5f", 00:32:05.081 "superblock": false, 00:32:05.081 "num_base_bdevs": 3, 00:32:05.081 "num_base_bdevs_discovered": 1, 00:32:05.081 "num_base_bdevs_operational": 3, 00:32:05.081 "base_bdevs_list": [ 00:32:05.081 { 00:32:05.081 "name": null, 00:32:05.081 "uuid": 
"fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:05.081 "is_configured": false, 00:32:05.081 "data_offset": 0, 00:32:05.081 "data_size": 65536 00:32:05.081 }, 00:32:05.081 { 00:32:05.081 "name": null, 00:32:05.081 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:32:05.081 "is_configured": false, 00:32:05.081 "data_offset": 0, 00:32:05.081 "data_size": 65536 00:32:05.081 }, 00:32:05.081 { 00:32:05.081 "name": "BaseBdev3", 00:32:05.081 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:32:05.081 "is_configured": true, 00:32:05.081 "data_offset": 0, 00:32:05.081 "data_size": 65536 00:32:05.081 } 00:32:05.081 ] 00:32:05.081 }' 00:32:05.081 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:05.081 21:45:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.648 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.648 21:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:05.907 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:32:05.907 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:06.165 [2024-07-15 21:45:39.376281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.165 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:06.424 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:06.424 "name": "Existed_Raid", 00:32:06.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.424 "strip_size_kb": 64, 00:32:06.424 "state": "configuring", 00:32:06.424 "raid_level": "raid5f", 00:32:06.424 "superblock": false, 00:32:06.424 "num_base_bdevs": 3, 00:32:06.424 "num_base_bdevs_discovered": 2, 00:32:06.424 
"num_base_bdevs_operational": 3, 00:32:06.424 "base_bdevs_list": [ 00:32:06.424 { 00:32:06.424 "name": null, 00:32:06.424 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:06.424 "is_configured": false, 00:32:06.424 "data_offset": 0, 00:32:06.424 "data_size": 65536 00:32:06.424 }, 00:32:06.424 { 00:32:06.424 "name": "BaseBdev2", 00:32:06.424 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:32:06.424 "is_configured": true, 00:32:06.424 "data_offset": 0, 00:32:06.424 "data_size": 65536 00:32:06.424 }, 00:32:06.424 { 00:32:06.424 "name": "BaseBdev3", 00:32:06.424 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:32:06.424 "is_configured": true, 00:32:06.424 "data_offset": 0, 00:32:06.424 "data_size": 65536 00:32:06.424 } 00:32:06.424 ] 00:32:06.424 }' 00:32:06.424 21:45:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:06.424 21:45:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.991 21:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:06.991 21:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.249 21:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:32:07.249 21:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.249 21:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:07.249 21:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fb0026fc-a2ab-47ce-911f-10a2d3c085a0 00:32:07.507 [2024-07-15 21:45:40.825617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:07.507 [2024-07-15 21:45:40.825736] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:32:07.507 [2024-07-15 21:45:40.825756] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:07.507 [2024-07-15 21:45:40.825883] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:32:07.507 [2024-07-15 21:45:40.831058] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:32:07.507 [2024-07-15 21:45:40.831115] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:32:07.507 [2024-07-15 21:45:40.831355] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:07.507 NewBaseBdev 00:32:07.507 21:45:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:32:07.507 21:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:32:07.507 21:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:07.507 21:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:07.507 21:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:07.507 21:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:32:07.507 21:45:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:07.766 21:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:08.025 [ 00:32:08.025 { 00:32:08.025 "name": "NewBaseBdev", 00:32:08.025 "aliases": [ 00:32:08.025 "fb0026fc-a2ab-47ce-911f-10a2d3c085a0" 00:32:08.025 ], 00:32:08.025 "product_name": "Malloc disk", 00:32:08.025 "block_size": 512, 00:32:08.025 "num_blocks": 65536, 00:32:08.025 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:08.025 "assigned_rate_limits": { 00:32:08.025 "rw_ios_per_sec": 0, 00:32:08.025 "rw_mbytes_per_sec": 0, 00:32:08.025 "r_mbytes_per_sec": 0, 00:32:08.025 "w_mbytes_per_sec": 0 00:32:08.025 }, 00:32:08.025 "claimed": true, 00:32:08.025 "claim_type": "exclusive_write", 00:32:08.025 "zoned": false, 00:32:08.025 "supported_io_types": { 00:32:08.025 "read": true, 00:32:08.025 "write": true, 00:32:08.025 "unmap": true, 00:32:08.025 "flush": true, 00:32:08.025 "reset": true, 00:32:08.025 "nvme_admin": false, 00:32:08.025 "nvme_io": false, 00:32:08.025 "nvme_io_md": false, 00:32:08.025 "write_zeroes": true, 00:32:08.025 "zcopy": true, 00:32:08.025 "get_zone_info": false, 00:32:08.025 "zone_management": false, 00:32:08.025 "zone_append": false, 00:32:08.025 "compare": false, 00:32:08.025 "compare_and_write": false, 00:32:08.025 "abort": true, 00:32:08.025 "seek_hole": false, 00:32:08.025 "seek_data": false, 00:32:08.025 "copy": true, 00:32:08.025 "nvme_iov_md": false 00:32:08.025 }, 00:32:08.025 "memory_domains": [ 00:32:08.025 { 00:32:08.025 "dma_device_id": "system", 00:32:08.025 "dma_device_type": 1 00:32:08.025 }, 00:32:08.025 { 00:32:08.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:08.025 "dma_device_type": 2 00:32:08.025 } 00:32:08.025 ], 00:32:08.025 "driver_specific": {} 00:32:08.025 } 00:32:08.025 ] 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.025 
21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:08.025 "name": "Existed_Raid", 00:32:08.025 "uuid": "9c1a621f-3109-47ed-85f9-ba1f9e4f425c", 00:32:08.025 "strip_size_kb": 64, 00:32:08.025 "state": "online", 00:32:08.025 "raid_level": "raid5f", 00:32:08.025 "superblock": false, 00:32:08.025 "num_base_bdevs": 3, 00:32:08.025 "num_base_bdevs_discovered": 3, 00:32:08.025 "num_base_bdevs_operational": 3, 00:32:08.025 "base_bdevs_list": [ 00:32:08.025 { 00:32:08.025 "name": "NewBaseBdev", 00:32:08.025 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:08.025 "is_configured": true, 00:32:08.025 "data_offset": 0, 00:32:08.025 "data_size": 65536 00:32:08.025 }, 00:32:08.025 { 00:32:08.025 "name": "BaseBdev2", 00:32:08.025 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:32:08.025 "is_configured": true, 00:32:08.025 "data_offset": 0, 00:32:08.025 "data_size": 65536 00:32:08.025 }, 00:32:08.025 { 00:32:08.025 "name": "BaseBdev3", 00:32:08.025 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:32:08.025 "is_configured": true, 00:32:08.025 "data_offset": 0, 00:32:08.025 "data_size": 65536 00:32:08.025 } 00:32:08.025 ] 00:32:08.025 }' 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:08.025 21:45:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:08.961 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:32:08.961 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:08.961 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:08.961 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:08.961 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:08.961 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:08.961 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:08.961 21:45:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:08.961 [2024-07-15 21:45:42.212300] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:08.961 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:08.961 "name": "Existed_Raid", 00:32:08.961 "aliases": [ 00:32:08.961 "9c1a621f-3109-47ed-85f9-ba1f9e4f425c" 00:32:08.961 ], 00:32:08.961 "product_name": "Raid Volume", 00:32:08.961 "block_size": 512, 00:32:08.961 "num_blocks": 131072, 00:32:08.961 "uuid": "9c1a621f-3109-47ed-85f9-ba1f9e4f425c", 00:32:08.961 "assigned_rate_limits": { 00:32:08.961 "rw_ios_per_sec": 0, 00:32:08.961 "rw_mbytes_per_sec": 0, 00:32:08.961 "r_mbytes_per_sec": 0, 00:32:08.961 "w_mbytes_per_sec": 0 00:32:08.961 }, 00:32:08.961 "claimed": false, 00:32:08.961 "zoned": false, 00:32:08.961 "supported_io_types": { 00:32:08.961 "read": true, 00:32:08.961 "write": true, 00:32:08.961 "unmap": false, 00:32:08.961 "flush": false, 00:32:08.961 "reset": true, 00:32:08.961 "nvme_admin": false, 00:32:08.961 "nvme_io": false, 00:32:08.961 
"nvme_io_md": false, 00:32:08.961 "write_zeroes": true, 00:32:08.961 "zcopy": false, 00:32:08.961 "get_zone_info": false, 00:32:08.961 "zone_management": false, 00:32:08.961 "zone_append": false, 00:32:08.961 "compare": false, 00:32:08.961 "compare_and_write": false, 00:32:08.961 "abort": false, 00:32:08.961 "seek_hole": false, 00:32:08.961 "seek_data": false, 00:32:08.961 "copy": false, 00:32:08.961 "nvme_iov_md": false 00:32:08.961 }, 00:32:08.961 "driver_specific": { 00:32:08.961 "raid": { 00:32:08.961 "uuid": "9c1a621f-3109-47ed-85f9-ba1f9e4f425c", 00:32:08.961 "strip_size_kb": 64, 00:32:08.961 "state": "online", 00:32:08.961 "raid_level": "raid5f", 00:32:08.961 "superblock": false, 00:32:08.961 "num_base_bdevs": 3, 00:32:08.961 "num_base_bdevs_discovered": 3, 00:32:08.961 "num_base_bdevs_operational": 3, 00:32:08.961 "base_bdevs_list": [ 00:32:08.961 { 00:32:08.961 "name": "NewBaseBdev", 00:32:08.961 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:08.961 "is_configured": true, 00:32:08.961 "data_offset": 0, 00:32:08.961 "data_size": 65536 00:32:08.961 }, 00:32:08.961 { 00:32:08.961 "name": "BaseBdev2", 00:32:08.961 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:32:08.961 "is_configured": true, 00:32:08.961 "data_offset": 0, 00:32:08.961 "data_size": 65536 00:32:08.961 }, 00:32:08.961 { 00:32:08.961 "name": "BaseBdev3", 00:32:08.961 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:32:08.961 "is_configured": true, 00:32:08.961 "data_offset": 0, 00:32:08.961 "data_size": 65536 00:32:08.961 } 00:32:08.961 ] 00:32:08.961 } 00:32:08.961 } 00:32:08.961 }' 00:32:08.961 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:08.961 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:32:08.961 BaseBdev2 00:32:08.961 BaseBdev3' 00:32:08.961 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:08.961 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:08.961 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:09.220 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:09.220 "name": "NewBaseBdev", 00:32:09.220 "aliases": [ 00:32:09.220 "fb0026fc-a2ab-47ce-911f-10a2d3c085a0" 00:32:09.220 ], 00:32:09.220 "product_name": "Malloc disk", 00:32:09.220 "block_size": 512, 00:32:09.220 "num_blocks": 65536, 00:32:09.220 "uuid": "fb0026fc-a2ab-47ce-911f-10a2d3c085a0", 00:32:09.220 "assigned_rate_limits": { 00:32:09.220 "rw_ios_per_sec": 0, 00:32:09.220 "rw_mbytes_per_sec": 0, 00:32:09.220 "r_mbytes_per_sec": 0, 00:32:09.220 "w_mbytes_per_sec": 0 00:32:09.220 }, 00:32:09.220 "claimed": true, 00:32:09.220 "claim_type": "exclusive_write", 00:32:09.220 "zoned": false, 00:32:09.220 "supported_io_types": { 00:32:09.220 "read": true, 00:32:09.220 "write": true, 00:32:09.220 "unmap": true, 00:32:09.220 "flush": true, 00:32:09.220 "reset": true, 00:32:09.220 "nvme_admin": false, 00:32:09.220 "nvme_io": false, 00:32:09.220 "nvme_io_md": false, 00:32:09.220 "write_zeroes": true, 00:32:09.220 "zcopy": true, 00:32:09.220 "get_zone_info": false, 00:32:09.220 "zone_management": false, 00:32:09.220 "zone_append": false, 00:32:09.220 "compare": false, 00:32:09.220 
"compare_and_write": false, 00:32:09.220 "abort": true, 00:32:09.220 "seek_hole": false, 00:32:09.220 "seek_data": false, 00:32:09.220 "copy": true, 00:32:09.220 "nvme_iov_md": false 00:32:09.220 }, 00:32:09.220 "memory_domains": [ 00:32:09.220 { 00:32:09.220 "dma_device_id": "system", 00:32:09.220 "dma_device_type": 1 00:32:09.220 }, 00:32:09.220 { 00:32:09.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.220 "dma_device_type": 2 00:32:09.220 } 00:32:09.220 ], 00:32:09.220 "driver_specific": {} 00:32:09.220 }' 00:32:09.220 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:09.220 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:09.479 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:09.479 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:09.479 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:09.479 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:09.479 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:09.479 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:09.479 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:09.479 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:09.763 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:09.763 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:09.763 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:09.763 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:09.763 21:45:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:10.022 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:10.022 "name": "BaseBdev2", 00:32:10.022 "aliases": [ 00:32:10.022 "7b2a21d6-f078-49a0-9d00-08d61b5e34f8" 00:32:10.022 ], 00:32:10.022 "product_name": "Malloc disk", 00:32:10.022 "block_size": 512, 00:32:10.022 "num_blocks": 65536, 00:32:10.022 "uuid": "7b2a21d6-f078-49a0-9d00-08d61b5e34f8", 00:32:10.022 "assigned_rate_limits": { 00:32:10.022 "rw_ios_per_sec": 0, 00:32:10.022 "rw_mbytes_per_sec": 0, 00:32:10.022 "r_mbytes_per_sec": 0, 00:32:10.022 "w_mbytes_per_sec": 0 00:32:10.022 }, 00:32:10.022 "claimed": true, 00:32:10.022 "claim_type": "exclusive_write", 00:32:10.022 "zoned": false, 00:32:10.022 "supported_io_types": { 00:32:10.022 "read": true, 00:32:10.022 "write": true, 00:32:10.022 "unmap": true, 00:32:10.022 "flush": true, 00:32:10.022 "reset": true, 00:32:10.022 "nvme_admin": false, 00:32:10.022 "nvme_io": false, 00:32:10.022 "nvme_io_md": false, 00:32:10.022 "write_zeroes": true, 00:32:10.022 "zcopy": true, 00:32:10.022 "get_zone_info": false, 00:32:10.022 "zone_management": false, 00:32:10.022 "zone_append": false, 00:32:10.022 "compare": false, 00:32:10.022 "compare_and_write": false, 00:32:10.022 "abort": true, 00:32:10.022 "seek_hole": false, 00:32:10.022 "seek_data": false, 00:32:10.022 "copy": true, 00:32:10.022 
"nvme_iov_md": false 00:32:10.022 }, 00:32:10.022 "memory_domains": [ 00:32:10.022 { 00:32:10.022 "dma_device_id": "system", 00:32:10.022 "dma_device_type": 1 00:32:10.022 }, 00:32:10.022 { 00:32:10.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.022 "dma_device_type": 2 00:32:10.022 } 00:32:10.022 ], 00:32:10.022 "driver_specific": {} 00:32:10.022 }' 00:32:10.022 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:10.022 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:10.022 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:10.022 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:10.022 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:10.022 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:10.022 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:10.280 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:10.280 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:10.280 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:10.280 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:10.280 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:10.280 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:10.280 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:10.280 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:10.537 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:10.537 "name": "BaseBdev3", 00:32:10.537 "aliases": [ 00:32:10.537 "89c94d78-df4f-47a8-bcc5-edeb6835c0f6" 00:32:10.537 ], 00:32:10.537 "product_name": "Malloc disk", 00:32:10.537 "block_size": 512, 00:32:10.537 "num_blocks": 65536, 00:32:10.537 "uuid": "89c94d78-df4f-47a8-bcc5-edeb6835c0f6", 00:32:10.537 "assigned_rate_limits": { 00:32:10.537 "rw_ios_per_sec": 0, 00:32:10.537 "rw_mbytes_per_sec": 0, 00:32:10.537 "r_mbytes_per_sec": 0, 00:32:10.537 "w_mbytes_per_sec": 0 00:32:10.537 }, 00:32:10.537 "claimed": true, 00:32:10.537 "claim_type": "exclusive_write", 00:32:10.537 "zoned": false, 00:32:10.537 "supported_io_types": { 00:32:10.537 "read": true, 00:32:10.537 "write": true, 00:32:10.537 "unmap": true, 00:32:10.537 "flush": true, 00:32:10.537 "reset": true, 00:32:10.537 "nvme_admin": false, 00:32:10.537 "nvme_io": false, 00:32:10.537 "nvme_io_md": false, 00:32:10.537 "write_zeroes": true, 00:32:10.537 "zcopy": true, 00:32:10.537 "get_zone_info": false, 00:32:10.537 "zone_management": false, 00:32:10.537 "zone_append": false, 00:32:10.537 "compare": false, 00:32:10.537 "compare_and_write": false, 00:32:10.537 "abort": true, 00:32:10.537 "seek_hole": false, 00:32:10.537 "seek_data": false, 00:32:10.537 "copy": true, 00:32:10.537 "nvme_iov_md": false 00:32:10.537 }, 00:32:10.538 "memory_domains": [ 00:32:10.538 { 00:32:10.538 "dma_device_id": "system", 00:32:10.538 "dma_device_type": 1 
00:32:10.538 }, 00:32:10.538 { 00:32:10.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.538 "dma_device_type": 2 00:32:10.538 } 00:32:10.538 ], 00:32:10.538 "driver_specific": {} 00:32:10.538 }' 00:32:10.538 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:10.538 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:10.538 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:10.795 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:10.795 21:45:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:10.795 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:10.795 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:10.795 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:10.795 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:10.795 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:11.055 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:11.055 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:11.055 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:11.055 [2024-07-15 21:45:44.408509] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:11.055 [2024-07-15 21:45:44.408663] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:11.055 [2024-07-15 21:45:44.408793] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:11.055 [2024-07-15 21:45:44.409130] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:11.055 [2024-07-15 21:45:44.409172] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:32:11.055 21:45:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 151416 00:32:11.055 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 151416 ']' 00:32:11.055 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 151416 00:32:11.055 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:32:11.055 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.313 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 151416 00:32:11.313 killing process with pid 151416 00:32:11.313 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:11.313 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:11.313 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 151416' 00:32:11.313 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 151416 
00:32:11.313 21:45:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 151416 00:32:11.313 [2024-07-15 21:45:44.450270] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:11.572 [2024-07-15 21:45:44.791563] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:12.956 ************************************ 00:32:12.956 END TEST raid5f_state_function_test 00:32:12.956 ************************************ 00:32:12.956 21:45:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:32:12.956 00:32:12.956 real 0m29.606s 00:32:12.956 user 0m54.453s 00:32:12.956 sys 0m3.819s 00:32:12.956 21:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:12.956 21:45:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:12.956 21:45:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:12.957 21:45:46 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:32:12.957 21:45:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:12.957 21:45:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.957 21:45:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:12.957 ************************************ 00:32:12.957 START TEST raid5f_state_function_test_sb 00:32:12.957 ************************************ 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 true 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=152413 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 152413' 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:12.957 Process raid pid: 152413 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 152413 /var/tmp/spdk-raid.sock 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 152413 ']' 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:12.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:12.957 21:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.217 [2024-07-15 21:45:46.351731] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:32:13.217 [2024-07-15 21:45:46.351887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.217 [2024-07-15 21:45:46.495979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.476 [2024-07-15 21:45:46.693422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.735 [2024-07-15 21:45:46.890400] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:13.994 21:45:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:13.994 21:45:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:32:13.994 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:14.251 [2024-07-15 21:45:47.396184] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:14.251 [2024-07-15 21:45:47.396248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:14.251 [2024-07-15 21:45:47.396258] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:14.251 [2024-07-15 21:45:47.396279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:14.251 [2024-07-15 21:45:47.396285] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:14.251 [2024-07-15 21:45:47.396298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:14.252 "name": 
"Existed_Raid", 00:32:14.252 "uuid": "1d61b78d-3d0e-4591-b21d-a404f2ffdd0b", 00:32:14.252 "strip_size_kb": 64, 00:32:14.252 "state": "configuring", 00:32:14.252 "raid_level": "raid5f", 00:32:14.252 "superblock": true, 00:32:14.252 "num_base_bdevs": 3, 00:32:14.252 "num_base_bdevs_discovered": 0, 00:32:14.252 "num_base_bdevs_operational": 3, 00:32:14.252 "base_bdevs_list": [ 00:32:14.252 { 00:32:14.252 "name": "BaseBdev1", 00:32:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.252 "is_configured": false, 00:32:14.252 "data_offset": 0, 00:32:14.252 "data_size": 0 00:32:14.252 }, 00:32:14.252 { 00:32:14.252 "name": "BaseBdev2", 00:32:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.252 "is_configured": false, 00:32:14.252 "data_offset": 0, 00:32:14.252 "data_size": 0 00:32:14.252 }, 00:32:14.252 { 00:32:14.252 "name": "BaseBdev3", 00:32:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.252 "is_configured": false, 00:32:14.252 "data_offset": 0, 00:32:14.252 "data_size": 0 00:32:14.252 } 00:32:14.252 ] 00:32:14.252 }' 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:14.252 21:45:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:15.185 21:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:15.185 [2024-07-15 21:45:48.426287] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:15.185 [2024-07-15 21:45:48.426334] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:32:15.185 21:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:15.442 [2024-07-15 21:45:48.602029] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:15.442 [2024-07-15 21:45:48.602094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:15.443 [2024-07-15 21:45:48.602104] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:15.443 [2024-07-15 21:45:48.602118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:15.443 [2024-07-15 21:45:48.602124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:15.443 [2024-07-15 21:45:48.602160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:15.443 21:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:15.700 [2024-07-15 21:45:48.835755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:15.700 BaseBdev1 00:32:15.700 21:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:15.700 21:45:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:15.700 21:45:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:15.700 21:45:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:15.700 21:45:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:15.700 21:45:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:15.700 21:45:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:15.701 21:45:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:15.958 [ 00:32:15.958 { 00:32:15.958 "name": "BaseBdev1", 00:32:15.958 "aliases": [ 00:32:15.958 "42a1d9d2-9917-4963-a192-30ecc75da46d" 00:32:15.958 ], 00:32:15.958 "product_name": "Malloc disk", 00:32:15.958 "block_size": 512, 00:32:15.958 "num_blocks": 65536, 00:32:15.958 "uuid": "42a1d9d2-9917-4963-a192-30ecc75da46d", 00:32:15.958 "assigned_rate_limits": { 00:32:15.958 "rw_ios_per_sec": 0, 00:32:15.958 "rw_mbytes_per_sec": 0, 00:32:15.958 "r_mbytes_per_sec": 0, 00:32:15.958 "w_mbytes_per_sec": 0 00:32:15.958 }, 00:32:15.958 "claimed": true, 00:32:15.958 "claim_type": "exclusive_write", 00:32:15.958 "zoned": false, 00:32:15.958 "supported_io_types": { 00:32:15.958 "read": true, 00:32:15.958 "write": true, 00:32:15.958 "unmap": true, 00:32:15.958 "flush": true, 00:32:15.958 "reset": true, 00:32:15.958 "nvme_admin": false, 00:32:15.958 "nvme_io": false, 00:32:15.958 "nvme_io_md": false, 00:32:15.958 "write_zeroes": true, 00:32:15.958 "zcopy": true, 00:32:15.958 "get_zone_info": false, 00:32:15.958 "zone_management": false, 00:32:15.958 "zone_append": false, 00:32:15.958 "compare": false, 00:32:15.958 "compare_and_write": false, 00:32:15.958 "abort": true, 00:32:15.958 "seek_hole": false, 00:32:15.958 "seek_data": false, 00:32:15.958 "copy": true, 00:32:15.958 "nvme_iov_md": false 00:32:15.958 }, 00:32:15.958 "memory_domains": [ 00:32:15.958 { 00:32:15.958 "dma_device_id": "system", 00:32:15.958 "dma_device_type": 1 00:32:15.958 }, 00:32:15.958 { 00:32:15.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.958 "dma_device_type": 2 00:32:15.958 } 00:32:15.958 ], 00:32:15.958 "driver_specific": {} 00:32:15.958 } 00:32:15.958 ] 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:15.958 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:16.217 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:16.217 "name": "Existed_Raid", 00:32:16.217 "uuid": "f8f6cc03-60fb-40f6-abd5-e029fd60f32a", 00:32:16.217 "strip_size_kb": 64, 00:32:16.217 "state": "configuring", 00:32:16.217 "raid_level": "raid5f", 00:32:16.217 "superblock": true, 00:32:16.217 "num_base_bdevs": 3, 00:32:16.217 "num_base_bdevs_discovered": 1, 00:32:16.217 "num_base_bdevs_operational": 3, 00:32:16.217 "base_bdevs_list": [ 00:32:16.217 { 00:32:16.217 "name": "BaseBdev1", 00:32:16.217 "uuid": "42a1d9d2-9917-4963-a192-30ecc75da46d", 00:32:16.217 "is_configured": true, 00:32:16.217 "data_offset": 2048, 00:32:16.217 "data_size": 63488 00:32:16.217 }, 00:32:16.217 { 00:32:16.217 "name": "BaseBdev2", 00:32:16.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.217 "is_configured": false, 00:32:16.217 "data_offset": 0, 00:32:16.217 "data_size": 0 00:32:16.217 }, 00:32:16.217 { 00:32:16.217 "name": "BaseBdev3", 00:32:16.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.217 "is_configured": false, 00:32:16.217 "data_offset": 0, 00:32:16.217 "data_size": 0 00:32:16.217 } 00:32:16.217 ] 00:32:16.217 }' 00:32:16.217 21:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:16.217 21:45:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.783 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:17.041 [2024-07-15 21:45:50.245416] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:17.041 [2024-07-15 21:45:50.245495] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:32:17.041 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:17.307 [2024-07-15 21:45:50.445094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:17.307 [2024-07-15 21:45:50.446868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:17.307 [2024-07-15 21:45:50.446935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:17.307 [2024-07-15 21:45:50.446944] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:17.307 [2024-07-15 21:45:50.446990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:17.307 21:45:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:17.307 "name": "Existed_Raid", 00:32:17.307 "uuid": "062b56fb-00c2-4ebe-8861-ae3421117941", 00:32:17.307 "strip_size_kb": 64, 00:32:17.307 "state": "configuring", 00:32:17.307 "raid_level": "raid5f", 00:32:17.307 "superblock": true, 00:32:17.307 "num_base_bdevs": 3, 00:32:17.307 "num_base_bdevs_discovered": 1, 00:32:17.307 "num_base_bdevs_operational": 3, 00:32:17.307 "base_bdevs_list": [ 00:32:17.307 { 00:32:17.307 "name": "BaseBdev1", 00:32:17.307 "uuid": "42a1d9d2-9917-4963-a192-30ecc75da46d", 00:32:17.307 "is_configured": true, 00:32:17.307 "data_offset": 2048, 00:32:17.307 "data_size": 63488 00:32:17.307 }, 00:32:17.307 { 00:32:17.307 "name": "BaseBdev2", 00:32:17.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.307 "is_configured": false, 00:32:17.307 "data_offset": 0, 00:32:17.307 "data_size": 0 00:32:17.307 }, 00:32:17.307 { 00:32:17.307 "name": "BaseBdev3", 00:32:17.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.307 "is_configured": false, 00:32:17.307 "data_offset": 0, 00:32:17.307 "data_size": 0 00:32:17.307 } 00:32:17.307 ] 00:32:17.307 }' 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:17.307 21:45:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:18.241 21:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:18.498 [2024-07-15 21:45:51.641529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:18.498 BaseBdev2 00:32:18.498 21:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:18.498 21:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:18.498 21:45:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:18.498 21:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:18.498 21:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:18.498 21:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:18.498 21:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:18.498 21:45:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:18.792 [ 00:32:18.792 { 00:32:18.792 "name": "BaseBdev2", 00:32:18.792 "aliases": [ 00:32:18.792 "efca0ef2-090c-4daa-a778-c50d448a4680" 00:32:18.792 ], 00:32:18.792 "product_name": "Malloc disk", 00:32:18.792 "block_size": 512, 00:32:18.792 "num_blocks": 65536, 00:32:18.792 "uuid": "efca0ef2-090c-4daa-a778-c50d448a4680", 00:32:18.792 "assigned_rate_limits": { 00:32:18.792 "rw_ios_per_sec": 0, 00:32:18.792 "rw_mbytes_per_sec": 0, 00:32:18.792 "r_mbytes_per_sec": 0, 00:32:18.792 "w_mbytes_per_sec": 0 00:32:18.792 }, 00:32:18.792 "claimed": true, 00:32:18.792 "claim_type": "exclusive_write", 00:32:18.792 "zoned": false, 00:32:18.792 "supported_io_types": { 00:32:18.792 "read": true, 00:32:18.792 "write": true, 00:32:18.792 "unmap": true, 00:32:18.792 "flush": true, 00:32:18.792 "reset": true, 00:32:18.792 "nvme_admin": false, 00:32:18.792 "nvme_io": false, 00:32:18.792 "nvme_io_md": false, 00:32:18.792 "write_zeroes": true, 00:32:18.792 "zcopy": true, 00:32:18.792 "get_zone_info": false, 00:32:18.792 "zone_management": false, 00:32:18.792 "zone_append": false, 00:32:18.792 "compare": false, 00:32:18.792 "compare_and_write": false, 00:32:18.792 "abort": true, 00:32:18.792 "seek_hole": false, 00:32:18.792 "seek_data": false, 00:32:18.792 "copy": true, 00:32:18.792 "nvme_iov_md": false 00:32:18.792 }, 00:32:18.792 "memory_domains": [ 00:32:18.792 { 00:32:18.792 "dma_device_id": "system", 00:32:18.792 "dma_device_type": 1 00:32:18.792 }, 00:32:18.792 { 00:32:18.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.792 "dma_device_type": 2 00:32:18.792 } 00:32:18.792 ], 00:32:18.792 "driver_specific": {} 00:32:18.792 } 00:32:18.792 ] 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:18.792 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.050 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:19.050 "name": "Existed_Raid", 00:32:19.050 "uuid": "062b56fb-00c2-4ebe-8861-ae3421117941", 00:32:19.050 "strip_size_kb": 64, 00:32:19.050 "state": "configuring", 00:32:19.050 "raid_level": "raid5f", 00:32:19.050 "superblock": true, 00:32:19.050 "num_base_bdevs": 3, 00:32:19.050 "num_base_bdevs_discovered": 2, 00:32:19.050 "num_base_bdevs_operational": 3, 00:32:19.050 "base_bdevs_list": [ 00:32:19.050 { 00:32:19.050 "name": "BaseBdev1", 00:32:19.050 "uuid": "42a1d9d2-9917-4963-a192-30ecc75da46d", 00:32:19.050 "is_configured": true, 00:32:19.050 "data_offset": 2048, 00:32:19.050 "data_size": 63488 00:32:19.050 }, 00:32:19.050 { 00:32:19.050 "name": "BaseBdev2", 00:32:19.050 "uuid": "efca0ef2-090c-4daa-a778-c50d448a4680", 00:32:19.050 "is_configured": true, 00:32:19.050 "data_offset": 2048, 00:32:19.050 "data_size": 63488 00:32:19.050 }, 00:32:19.050 { 00:32:19.050 "name": "BaseBdev3", 00:32:19.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.050 "is_configured": false, 00:32:19.050 "data_offset": 0, 00:32:19.050 "data_size": 0 00:32:19.050 } 00:32:19.050 ] 00:32:19.050 }' 00:32:19.050 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:19.050 21:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.616 21:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:19.874 [2024-07-15 21:45:53.125082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:19.874 [2024-07-15 21:45:53.125332] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:32:19.874 [2024-07-15 21:45:53.125345] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:19.874 [2024-07-15 21:45:53.125468] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:32:19.874 BaseBdev3 00:32:19.874 [2024-07-15 21:45:53.131137] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:32:19.874 [2024-07-15 21:45:53.131166] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:32:19.874 [2024-07-15 21:45:53.131349] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:19.874 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:32:19.874 21:45:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:32:19.874 21:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:19.874 21:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:19.874 21:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:19.874 21:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:19.874 21:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:20.131 21:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:20.388 [ 00:32:20.388 { 00:32:20.388 "name": "BaseBdev3", 00:32:20.388 "aliases": [ 00:32:20.388 "0e35b54e-7102-4197-accb-274dba316cf3" 00:32:20.388 ], 00:32:20.388 "product_name": "Malloc disk", 00:32:20.388 "block_size": 512, 00:32:20.389 "num_blocks": 65536, 00:32:20.389 "uuid": "0e35b54e-7102-4197-accb-274dba316cf3", 00:32:20.389 "assigned_rate_limits": { 00:32:20.389 "rw_ios_per_sec": 0, 00:32:20.389 "rw_mbytes_per_sec": 0, 00:32:20.389 "r_mbytes_per_sec": 0, 00:32:20.389 "w_mbytes_per_sec": 0 00:32:20.389 }, 00:32:20.389 "claimed": true, 00:32:20.389 "claim_type": "exclusive_write", 00:32:20.389 "zoned": false, 00:32:20.389 "supported_io_types": { 00:32:20.389 "read": true, 00:32:20.389 "write": true, 00:32:20.389 "unmap": true, 00:32:20.389 "flush": true, 00:32:20.389 "reset": true, 00:32:20.389 "nvme_admin": false, 00:32:20.389 "nvme_io": false, 00:32:20.389 "nvme_io_md": false, 00:32:20.389 "write_zeroes": true, 00:32:20.389 "zcopy": true, 00:32:20.389 "get_zone_info": false, 00:32:20.389 "zone_management": false, 00:32:20.389 "zone_append": false, 00:32:20.389 "compare": false, 00:32:20.389 "compare_and_write": false, 00:32:20.389 "abort": true, 00:32:20.389 "seek_hole": false, 00:32:20.389 "seek_data": false, 00:32:20.389 "copy": true, 00:32:20.389 "nvme_iov_md": false 00:32:20.389 }, 00:32:20.389 "memory_domains": [ 00:32:20.389 { 00:32:20.389 "dma_device_id": "system", 00:32:20.389 "dma_device_type": 1 00:32:20.389 }, 00:32:20.389 { 00:32:20.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:20.389 "dma_device_type": 2 00:32:20.389 } 00:32:20.389 ], 00:32:20.389 "driver_specific": {} 00:32:20.389 } 00:32:20.389 ] 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:20.389 21:45:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.389 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:20.646 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:20.646 "name": "Existed_Raid", 00:32:20.646 "uuid": "062b56fb-00c2-4ebe-8861-ae3421117941", 00:32:20.646 "strip_size_kb": 64, 00:32:20.646 "state": "online", 00:32:20.646 "raid_level": "raid5f", 00:32:20.646 "superblock": true, 00:32:20.646 "num_base_bdevs": 3, 00:32:20.646 "num_base_bdevs_discovered": 3, 00:32:20.646 "num_base_bdevs_operational": 3, 00:32:20.646 "base_bdevs_list": [ 00:32:20.646 { 00:32:20.646 "name": "BaseBdev1", 00:32:20.646 "uuid": "42a1d9d2-9917-4963-a192-30ecc75da46d", 00:32:20.646 "is_configured": true, 00:32:20.646 "data_offset": 2048, 00:32:20.646 "data_size": 63488 00:32:20.646 }, 00:32:20.646 { 00:32:20.646 "name": "BaseBdev2", 00:32:20.646 "uuid": "efca0ef2-090c-4daa-a778-c50d448a4680", 00:32:20.646 "is_configured": true, 00:32:20.646 "data_offset": 2048, 00:32:20.646 "data_size": 63488 00:32:20.646 }, 00:32:20.646 { 00:32:20.646 "name": "BaseBdev3", 00:32:20.646 "uuid": "0e35b54e-7102-4197-accb-274dba316cf3", 00:32:20.646 "is_configured": true, 00:32:20.646 "data_offset": 2048, 00:32:20.646 "data_size": 63488 00:32:20.646 } 00:32:20.646 ] 00:32:20.646 }' 00:32:20.646 21:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:20.646 21:45:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.211 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:21.211 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:21.211 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:21.211 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:21.211 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:21.211 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:32:21.211 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:21.211 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:21.211 [2024-07-15 21:45:54.587541] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:21.470 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:21.470 
"name": "Existed_Raid", 00:32:21.470 "aliases": [ 00:32:21.470 "062b56fb-00c2-4ebe-8861-ae3421117941" 00:32:21.470 ], 00:32:21.470 "product_name": "Raid Volume", 00:32:21.470 "block_size": 512, 00:32:21.470 "num_blocks": 126976, 00:32:21.470 "uuid": "062b56fb-00c2-4ebe-8861-ae3421117941", 00:32:21.470 "assigned_rate_limits": { 00:32:21.470 "rw_ios_per_sec": 0, 00:32:21.470 "rw_mbytes_per_sec": 0, 00:32:21.470 "r_mbytes_per_sec": 0, 00:32:21.470 "w_mbytes_per_sec": 0 00:32:21.470 }, 00:32:21.470 "claimed": false, 00:32:21.470 "zoned": false, 00:32:21.470 "supported_io_types": { 00:32:21.470 "read": true, 00:32:21.470 "write": true, 00:32:21.470 "unmap": false, 00:32:21.470 "flush": false, 00:32:21.470 "reset": true, 00:32:21.470 "nvme_admin": false, 00:32:21.470 "nvme_io": false, 00:32:21.470 "nvme_io_md": false, 00:32:21.470 "write_zeroes": true, 00:32:21.470 "zcopy": false, 00:32:21.470 "get_zone_info": false, 00:32:21.470 "zone_management": false, 00:32:21.470 "zone_append": false, 00:32:21.470 "compare": false, 00:32:21.470 "compare_and_write": false, 00:32:21.470 "abort": false, 00:32:21.470 "seek_hole": false, 00:32:21.470 "seek_data": false, 00:32:21.470 "copy": false, 00:32:21.470 "nvme_iov_md": false 00:32:21.470 }, 00:32:21.470 "driver_specific": { 00:32:21.470 "raid": { 00:32:21.470 "uuid": "062b56fb-00c2-4ebe-8861-ae3421117941", 00:32:21.470 "strip_size_kb": 64, 00:32:21.470 "state": "online", 00:32:21.470 "raid_level": "raid5f", 00:32:21.470 "superblock": true, 00:32:21.470 "num_base_bdevs": 3, 00:32:21.470 "num_base_bdevs_discovered": 3, 00:32:21.470 "num_base_bdevs_operational": 3, 00:32:21.470 "base_bdevs_list": [ 00:32:21.470 { 00:32:21.470 "name": "BaseBdev1", 00:32:21.470 "uuid": "42a1d9d2-9917-4963-a192-30ecc75da46d", 00:32:21.470 "is_configured": true, 00:32:21.470 "data_offset": 2048, 00:32:21.470 "data_size": 63488 00:32:21.470 }, 00:32:21.470 { 00:32:21.470 "name": "BaseBdev2", 00:32:21.470 "uuid": "efca0ef2-090c-4daa-a778-c50d448a4680", 00:32:21.470 "is_configured": true, 00:32:21.470 "data_offset": 2048, 00:32:21.470 "data_size": 63488 00:32:21.470 }, 00:32:21.470 { 00:32:21.470 "name": "BaseBdev3", 00:32:21.470 "uuid": "0e35b54e-7102-4197-accb-274dba316cf3", 00:32:21.470 "is_configured": true, 00:32:21.470 "data_offset": 2048, 00:32:21.470 "data_size": 63488 00:32:21.470 } 00:32:21.470 ] 00:32:21.470 } 00:32:21.470 } 00:32:21.470 }' 00:32:21.470 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:21.470 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:21.470 BaseBdev2 00:32:21.470 BaseBdev3' 00:32:21.470 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:21.470 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:21.470 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:21.729 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:21.729 "name": "BaseBdev1", 00:32:21.729 "aliases": [ 00:32:21.729 "42a1d9d2-9917-4963-a192-30ecc75da46d" 00:32:21.729 ], 00:32:21.729 "product_name": "Malloc disk", 00:32:21.729 "block_size": 512, 00:32:21.729 "num_blocks": 65536, 00:32:21.729 "uuid": 
"42a1d9d2-9917-4963-a192-30ecc75da46d", 00:32:21.729 "assigned_rate_limits": { 00:32:21.729 "rw_ios_per_sec": 0, 00:32:21.729 "rw_mbytes_per_sec": 0, 00:32:21.729 "r_mbytes_per_sec": 0, 00:32:21.729 "w_mbytes_per_sec": 0 00:32:21.729 }, 00:32:21.729 "claimed": true, 00:32:21.729 "claim_type": "exclusive_write", 00:32:21.729 "zoned": false, 00:32:21.729 "supported_io_types": { 00:32:21.729 "read": true, 00:32:21.729 "write": true, 00:32:21.729 "unmap": true, 00:32:21.729 "flush": true, 00:32:21.729 "reset": true, 00:32:21.729 "nvme_admin": false, 00:32:21.729 "nvme_io": false, 00:32:21.729 "nvme_io_md": false, 00:32:21.729 "write_zeroes": true, 00:32:21.729 "zcopy": true, 00:32:21.729 "get_zone_info": false, 00:32:21.729 "zone_management": false, 00:32:21.729 "zone_append": false, 00:32:21.729 "compare": false, 00:32:21.729 "compare_and_write": false, 00:32:21.729 "abort": true, 00:32:21.729 "seek_hole": false, 00:32:21.729 "seek_data": false, 00:32:21.729 "copy": true, 00:32:21.729 "nvme_iov_md": false 00:32:21.729 }, 00:32:21.729 "memory_domains": [ 00:32:21.729 { 00:32:21.729 "dma_device_id": "system", 00:32:21.729 "dma_device_type": 1 00:32:21.729 }, 00:32:21.729 { 00:32:21.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:21.729 "dma_device_type": 2 00:32:21.729 } 00:32:21.729 ], 00:32:21.729 "driver_specific": {} 00:32:21.729 }' 00:32:21.729 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:21.729 21:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:21.729 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:21.729 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:21.729 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:21.987 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.245 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:22.245 "name": "BaseBdev2", 00:32:22.245 "aliases": [ 00:32:22.245 "efca0ef2-090c-4daa-a778-c50d448a4680" 00:32:22.245 ], 00:32:22.245 "product_name": "Malloc disk", 00:32:22.245 "block_size": 512, 00:32:22.245 "num_blocks": 65536, 00:32:22.245 "uuid": "efca0ef2-090c-4daa-a778-c50d448a4680", 00:32:22.245 "assigned_rate_limits": { 00:32:22.245 "rw_ios_per_sec": 0, 
00:32:22.245 "rw_mbytes_per_sec": 0, 00:32:22.245 "r_mbytes_per_sec": 0, 00:32:22.245 "w_mbytes_per_sec": 0 00:32:22.245 }, 00:32:22.245 "claimed": true, 00:32:22.245 "claim_type": "exclusive_write", 00:32:22.245 "zoned": false, 00:32:22.245 "supported_io_types": { 00:32:22.245 "read": true, 00:32:22.245 "write": true, 00:32:22.245 "unmap": true, 00:32:22.245 "flush": true, 00:32:22.245 "reset": true, 00:32:22.245 "nvme_admin": false, 00:32:22.245 "nvme_io": false, 00:32:22.245 "nvme_io_md": false, 00:32:22.245 "write_zeroes": true, 00:32:22.245 "zcopy": true, 00:32:22.245 "get_zone_info": false, 00:32:22.245 "zone_management": false, 00:32:22.245 "zone_append": false, 00:32:22.245 "compare": false, 00:32:22.245 "compare_and_write": false, 00:32:22.245 "abort": true, 00:32:22.245 "seek_hole": false, 00:32:22.245 "seek_data": false, 00:32:22.245 "copy": true, 00:32:22.245 "nvme_iov_md": false 00:32:22.245 }, 00:32:22.245 "memory_domains": [ 00:32:22.245 { 00:32:22.245 "dma_device_id": "system", 00:32:22.245 "dma_device_type": 1 00:32:22.245 }, 00:32:22.245 { 00:32:22.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.245 "dma_device_type": 2 00:32:22.245 } 00:32:22.245 ], 00:32:22.245 "driver_specific": {} 00:32:22.245 }' 00:32:22.245 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.503 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.503 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:22.503 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.503 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.503 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:22.503 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.503 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.761 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:22.761 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.761 21:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.761 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:22.761 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:22.761 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.761 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:23.019 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:23.019 "name": "BaseBdev3", 00:32:23.019 "aliases": [ 00:32:23.019 "0e35b54e-7102-4197-accb-274dba316cf3" 00:32:23.019 ], 00:32:23.019 "product_name": "Malloc disk", 00:32:23.019 "block_size": 512, 00:32:23.019 "num_blocks": 65536, 00:32:23.019 "uuid": "0e35b54e-7102-4197-accb-274dba316cf3", 00:32:23.019 "assigned_rate_limits": { 00:32:23.019 "rw_ios_per_sec": 0, 00:32:23.019 "rw_mbytes_per_sec": 0, 00:32:23.019 "r_mbytes_per_sec": 0, 00:32:23.019 "w_mbytes_per_sec": 0 00:32:23.019 
}, 00:32:23.019 "claimed": true, 00:32:23.019 "claim_type": "exclusive_write", 00:32:23.019 "zoned": false, 00:32:23.019 "supported_io_types": { 00:32:23.019 "read": true, 00:32:23.019 "write": true, 00:32:23.019 "unmap": true, 00:32:23.019 "flush": true, 00:32:23.019 "reset": true, 00:32:23.019 "nvme_admin": false, 00:32:23.019 "nvme_io": false, 00:32:23.020 "nvme_io_md": false, 00:32:23.020 "write_zeroes": true, 00:32:23.020 "zcopy": true, 00:32:23.020 "get_zone_info": false, 00:32:23.020 "zone_management": false, 00:32:23.020 "zone_append": false, 00:32:23.020 "compare": false, 00:32:23.020 "compare_and_write": false, 00:32:23.020 "abort": true, 00:32:23.020 "seek_hole": false, 00:32:23.020 "seek_data": false, 00:32:23.020 "copy": true, 00:32:23.020 "nvme_iov_md": false 00:32:23.020 }, 00:32:23.020 "memory_domains": [ 00:32:23.020 { 00:32:23.020 "dma_device_id": "system", 00:32:23.020 "dma_device_type": 1 00:32:23.020 }, 00:32:23.020 { 00:32:23.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.020 "dma_device_type": 2 00:32:23.020 } 00:32:23.020 ], 00:32:23.020 "driver_specific": {} 00:32:23.020 }' 00:32:23.020 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:23.020 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:23.020 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:23.020 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:23.278 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:23.278 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:23.278 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.278 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.278 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:23.278 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.536 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.536 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:23.536 21:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:23.810 [2024-07-15 21:45:56.922533] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:23.810 
21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.810 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.068 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:24.068 "name": "Existed_Raid", 00:32:24.068 "uuid": "062b56fb-00c2-4ebe-8861-ae3421117941", 00:32:24.068 "strip_size_kb": 64, 00:32:24.068 "state": "online", 00:32:24.068 "raid_level": "raid5f", 00:32:24.068 "superblock": true, 00:32:24.068 "num_base_bdevs": 3, 00:32:24.068 "num_base_bdevs_discovered": 2, 00:32:24.068 "num_base_bdevs_operational": 2, 00:32:24.068 "base_bdevs_list": [ 00:32:24.068 { 00:32:24.068 "name": null, 00:32:24.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.068 "is_configured": false, 00:32:24.068 "data_offset": 2048, 00:32:24.068 "data_size": 63488 00:32:24.068 }, 00:32:24.068 { 00:32:24.068 "name": "BaseBdev2", 00:32:24.068 "uuid": "efca0ef2-090c-4daa-a778-c50d448a4680", 00:32:24.068 "is_configured": true, 00:32:24.068 "data_offset": 2048, 00:32:24.068 "data_size": 63488 00:32:24.068 }, 00:32:24.068 { 00:32:24.068 "name": "BaseBdev3", 00:32:24.068 "uuid": "0e35b54e-7102-4197-accb-274dba316cf3", 00:32:24.068 "is_configured": true, 00:32:24.068 "data_offset": 2048, 00:32:24.068 "data_size": 63488 00:32:24.068 } 00:32:24.068 ] 00:32:24.068 }' 00:32:24.068 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:24.068 21:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:24.634 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:24.635 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:24.635 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.635 21:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:24.893 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:24.893 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:24.893 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:25.152 [2024-07-15 21:45:58.405768] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:25.152 [2024-07-15 21:45:58.405971] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:25.152 [2024-07-15 21:45:58.520912] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:25.412 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:25.412 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:25.412 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:25.412 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.671 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:25.671 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:25.671 21:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:25.671 [2024-07-15 21:45:59.000222] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:25.671 [2024-07-15 21:45:59.000320] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:32:25.930 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:25.930 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:25.930 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.930 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:26.188 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:26.188 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:26.188 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:32:26.188 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:32:26.188 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:26.188 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:26.446 BaseBdev2 00:32:26.446 21:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:32:26.446 21:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:26.446 21:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:26.446 21:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:26.446 21:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
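From here the trace repeats the per-base-bdev setup idiom that waitforbdev wraps: bdev_malloc_create provisions a Malloc disk (the 32/512 arguments yield the 65536 blocks of 512 bytes reported above), bdev_wait_for_examine lets the bdev examine stage finish, and bdev_get_bdevs -b <name> -t 2000 returns only once the bdev is registered or the 2000 ms timeout expires. Condensed as a sketch with the socket and script paths from the trace:

    rpc() { "/home/vagrant/spdk_repo/spdk/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_malloc_create 32 512 -b BaseBdev2   # 32 MiB Malloc bdev with 512-byte blocks
    rpc bdev_wait_for_examine                    # block until bdev examine has completed
    rpc bdev_get_bdevs -b BaseBdev2 -t 2000      # wait up to 2000 ms for BaseBdev2 to appear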
00:32:26.447 21:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:26.447 21:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:26.705 21:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:26.705 [ 00:32:26.705 { 00:32:26.705 "name": "BaseBdev2", 00:32:26.705 "aliases": [ 00:32:26.705 "501dec28-b745-49bd-b178-e7276802ea16" 00:32:26.705 ], 00:32:26.705 "product_name": "Malloc disk", 00:32:26.705 "block_size": 512, 00:32:26.705 "num_blocks": 65536, 00:32:26.705 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:26.705 "assigned_rate_limits": { 00:32:26.705 "rw_ios_per_sec": 0, 00:32:26.705 "rw_mbytes_per_sec": 0, 00:32:26.705 "r_mbytes_per_sec": 0, 00:32:26.705 "w_mbytes_per_sec": 0 00:32:26.705 }, 00:32:26.705 "claimed": false, 00:32:26.705 "zoned": false, 00:32:26.705 "supported_io_types": { 00:32:26.705 "read": true, 00:32:26.705 "write": true, 00:32:26.705 "unmap": true, 00:32:26.705 "flush": true, 00:32:26.705 "reset": true, 00:32:26.705 "nvme_admin": false, 00:32:26.705 "nvme_io": false, 00:32:26.705 "nvme_io_md": false, 00:32:26.705 "write_zeroes": true, 00:32:26.705 "zcopy": true, 00:32:26.705 "get_zone_info": false, 00:32:26.705 "zone_management": false, 00:32:26.705 "zone_append": false, 00:32:26.705 "compare": false, 00:32:26.705 "compare_and_write": false, 00:32:26.705 "abort": true, 00:32:26.705 "seek_hole": false, 00:32:26.705 "seek_data": false, 00:32:26.705 "copy": true, 00:32:26.705 "nvme_iov_md": false 00:32:26.705 }, 00:32:26.705 "memory_domains": [ 00:32:26.705 { 00:32:26.705 "dma_device_id": "system", 00:32:26.705 "dma_device_type": 1 00:32:26.705 }, 00:32:26.705 { 00:32:26.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:26.705 "dma_device_type": 2 00:32:26.705 } 00:32:26.705 ], 00:32:26.705 "driver_specific": {} 00:32:26.705 } 00:32:26.705 ] 00:32:26.963 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:26.963 21:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:26.963 21:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:26.963 21:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:27.222 BaseBdev3 00:32:27.222 21:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:32:27.222 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:32:27.222 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:27.222 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:27.222 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:27.222 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:27.222 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:27.479 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:27.479 [ 00:32:27.479 { 00:32:27.479 "name": "BaseBdev3", 00:32:27.479 "aliases": [ 00:32:27.479 "f74bf225-0837-4dac-8124-4ed2bce73561" 00:32:27.479 ], 00:32:27.479 "product_name": "Malloc disk", 00:32:27.479 "block_size": 512, 00:32:27.479 "num_blocks": 65536, 00:32:27.479 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:27.479 "assigned_rate_limits": { 00:32:27.479 "rw_ios_per_sec": 0, 00:32:27.479 "rw_mbytes_per_sec": 0, 00:32:27.479 "r_mbytes_per_sec": 0, 00:32:27.479 "w_mbytes_per_sec": 0 00:32:27.479 }, 00:32:27.479 "claimed": false, 00:32:27.479 "zoned": false, 00:32:27.479 "supported_io_types": { 00:32:27.479 "read": true, 00:32:27.479 "write": true, 00:32:27.479 "unmap": true, 00:32:27.479 "flush": true, 00:32:27.479 "reset": true, 00:32:27.479 "nvme_admin": false, 00:32:27.479 "nvme_io": false, 00:32:27.479 "nvme_io_md": false, 00:32:27.479 "write_zeroes": true, 00:32:27.479 "zcopy": true, 00:32:27.479 "get_zone_info": false, 00:32:27.479 "zone_management": false, 00:32:27.479 "zone_append": false, 00:32:27.479 "compare": false, 00:32:27.479 "compare_and_write": false, 00:32:27.479 "abort": true, 00:32:27.479 "seek_hole": false, 00:32:27.479 "seek_data": false, 00:32:27.479 "copy": true, 00:32:27.479 "nvme_iov_md": false 00:32:27.479 }, 00:32:27.479 "memory_domains": [ 00:32:27.479 { 00:32:27.479 "dma_device_id": "system", 00:32:27.479 "dma_device_type": 1 00:32:27.479 }, 00:32:27.479 { 00:32:27.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.479 "dma_device_type": 2 00:32:27.479 } 00:32:27.479 ], 00:32:27.479 "driver_specific": {} 00:32:27.479 } 00:32:27.479 ] 00:32:27.479 21:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:27.479 21:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:27.479 21:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:27.479 21:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:32:27.738 [2024-07-15 21:46:01.039169] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:27.738 [2024-07-15 21:46:01.039251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:27.738 [2024-07-15 21:46:01.039298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:27.738 [2024-07-15 21:46:01.041184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.738 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.995 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:27.995 "name": "Existed_Raid", 00:32:27.995 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:27.995 "strip_size_kb": 64, 00:32:27.995 "state": "configuring", 00:32:27.995 "raid_level": "raid5f", 00:32:27.995 "superblock": true, 00:32:27.995 "num_base_bdevs": 3, 00:32:27.995 "num_base_bdevs_discovered": 2, 00:32:27.995 "num_base_bdevs_operational": 3, 00:32:27.995 "base_bdevs_list": [ 00:32:27.995 { 00:32:27.995 "name": "BaseBdev1", 00:32:27.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.995 "is_configured": false, 00:32:27.995 "data_offset": 0, 00:32:27.995 "data_size": 0 00:32:27.995 }, 00:32:27.995 { 00:32:27.995 "name": "BaseBdev2", 00:32:27.995 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:27.995 "is_configured": true, 00:32:27.995 "data_offset": 2048, 00:32:27.995 "data_size": 63488 00:32:27.995 }, 00:32:27.995 { 00:32:27.995 "name": "BaseBdev3", 00:32:27.995 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:27.995 "is_configured": true, 00:32:27.995 "data_offset": 2048, 00:32:27.995 "data_size": 63488 00:32:27.995 } 00:32:27.995 ] 00:32:27.995 }' 00:32:27.995 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:27.995 21:46:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:28.926 21:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:28.926 [2024-07-15 21:46:02.189264] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.926 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:29.183 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.183 "name": "Existed_Raid", 00:32:29.183 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:29.183 "strip_size_kb": 64, 00:32:29.183 "state": "configuring", 00:32:29.183 "raid_level": "raid5f", 00:32:29.183 "superblock": true, 00:32:29.183 "num_base_bdevs": 3, 00:32:29.183 "num_base_bdevs_discovered": 1, 00:32:29.183 "num_base_bdevs_operational": 3, 00:32:29.183 "base_bdevs_list": [ 00:32:29.183 { 00:32:29.183 "name": "BaseBdev1", 00:32:29.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.183 "is_configured": false, 00:32:29.183 "data_offset": 0, 00:32:29.183 "data_size": 0 00:32:29.183 }, 00:32:29.183 { 00:32:29.183 "name": null, 00:32:29.183 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:29.183 "is_configured": false, 00:32:29.183 "data_offset": 2048, 00:32:29.183 "data_size": 63488 00:32:29.183 }, 00:32:29.183 { 00:32:29.183 "name": "BaseBdev3", 00:32:29.183 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:29.183 "is_configured": true, 00:32:29.183 "data_offset": 2048, 00:32:29.183 "data_size": 63488 00:32:29.183 } 00:32:29.183 ] 00:32:29.183 }' 00:32:29.183 21:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.183 21:46:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:29.748 21:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.748 21:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:30.006 21:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:32:30.006 21:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:30.263 [2024-07-15 21:46:03.573457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:30.263 BaseBdev1 00:32:30.263 21:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:32:30.263 21:46:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:30.263 21:46:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:30.263 21:46:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:30.263 21:46:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:30.263 21:46:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:30.263 21:46:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:30.520 21:46:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:30.778 [ 00:32:30.778 { 00:32:30.778 "name": "BaseBdev1", 00:32:30.778 "aliases": [ 00:32:30.778 "65081768-7c16-4c14-a86a-bce469d01aba" 00:32:30.778 ], 00:32:30.778 "product_name": "Malloc disk", 00:32:30.778 "block_size": 512, 00:32:30.778 "num_blocks": 65536, 00:32:30.778 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:30.778 "assigned_rate_limits": { 00:32:30.778 "rw_ios_per_sec": 0, 00:32:30.778 "rw_mbytes_per_sec": 0, 00:32:30.778 "r_mbytes_per_sec": 0, 00:32:30.778 "w_mbytes_per_sec": 0 00:32:30.778 }, 00:32:30.778 "claimed": true, 00:32:30.778 "claim_type": "exclusive_write", 00:32:30.778 "zoned": false, 00:32:30.778 "supported_io_types": { 00:32:30.778 "read": true, 00:32:30.778 "write": true, 00:32:30.778 "unmap": true, 00:32:30.778 "flush": true, 00:32:30.778 "reset": true, 00:32:30.778 "nvme_admin": false, 00:32:30.778 "nvme_io": false, 00:32:30.778 "nvme_io_md": false, 00:32:30.778 "write_zeroes": true, 00:32:30.778 "zcopy": true, 00:32:30.778 "get_zone_info": false, 00:32:30.778 "zone_management": false, 00:32:30.778 "zone_append": false, 00:32:30.778 "compare": false, 00:32:30.778 "compare_and_write": false, 00:32:30.778 "abort": true, 00:32:30.778 "seek_hole": false, 00:32:30.778 "seek_data": false, 00:32:30.778 "copy": true, 00:32:30.778 "nvme_iov_md": false 00:32:30.778 }, 00:32:30.778 "memory_domains": [ 00:32:30.778 { 00:32:30.778 "dma_device_id": "system", 00:32:30.778 "dma_device_type": 1 00:32:30.778 }, 00:32:30.778 { 00:32:30.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.778 "dma_device_type": 2 00:32:30.778 } 00:32:30.778 ], 00:32:30.778 "driver_specific": {} 00:32:30.778 } 00:32:30.778 ] 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.778 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.036 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:31.036 "name": "Existed_Raid", 00:32:31.036 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:31.036 "strip_size_kb": 64, 00:32:31.036 "state": "configuring", 00:32:31.036 "raid_level": "raid5f", 00:32:31.036 "superblock": true, 00:32:31.036 "num_base_bdevs": 3, 00:32:31.036 "num_base_bdevs_discovered": 2, 00:32:31.036 "num_base_bdevs_operational": 3, 00:32:31.036 "base_bdevs_list": [ 00:32:31.036 { 00:32:31.036 "name": "BaseBdev1", 00:32:31.036 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:31.036 "is_configured": true, 00:32:31.036 "data_offset": 2048, 00:32:31.036 "data_size": 63488 00:32:31.036 }, 00:32:31.036 { 00:32:31.036 "name": null, 00:32:31.036 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:31.036 "is_configured": false, 00:32:31.036 "data_offset": 2048, 00:32:31.036 "data_size": 63488 00:32:31.036 }, 00:32:31.036 { 00:32:31.036 "name": "BaseBdev3", 00:32:31.036 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:31.036 "is_configured": true, 00:32:31.036 "data_offset": 2048, 00:32:31.036 "data_size": 63488 00:32:31.036 } 00:32:31.036 ] 00:32:31.036 }' 00:32:31.036 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:31.036 21:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.601 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.601 21:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:31.858 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:32:31.858 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:32:32.117 [2024-07-15 21:46:05.334611] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.117 21:46:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.117 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.375 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:32.375 "name": "Existed_Raid", 00:32:32.375 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:32.375 "strip_size_kb": 64, 00:32:32.375 "state": "configuring", 00:32:32.375 "raid_level": "raid5f", 00:32:32.375 "superblock": true, 00:32:32.375 "num_base_bdevs": 3, 00:32:32.375 "num_base_bdevs_discovered": 1, 00:32:32.375 "num_base_bdevs_operational": 3, 00:32:32.375 "base_bdevs_list": [ 00:32:32.375 { 00:32:32.375 "name": "BaseBdev1", 00:32:32.375 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:32.375 "is_configured": true, 00:32:32.375 "data_offset": 2048, 00:32:32.375 "data_size": 63488 00:32:32.375 }, 00:32:32.375 { 00:32:32.375 "name": null, 00:32:32.375 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:32.375 "is_configured": false, 00:32:32.375 "data_offset": 2048, 00:32:32.375 "data_size": 63488 00:32:32.375 }, 00:32:32.375 { 00:32:32.375 "name": null, 00:32:32.375 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:32.375 "is_configured": false, 00:32:32.375 "data_offset": 2048, 00:32:32.375 "data_size": 63488 00:32:32.375 } 00:32:32.375 ] 00:32:32.375 }' 00:32:32.375 21:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:32.375 21:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.941 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:32.941 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.200 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:32:33.200 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:33.459 [2024-07-15 21:46:06.588488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.459 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:33.459 "name": "Existed_Raid", 00:32:33.459 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:33.459 "strip_size_kb": 64, 00:32:33.459 "state": "configuring", 00:32:33.459 "raid_level": "raid5f", 00:32:33.459 "superblock": true, 00:32:33.459 "num_base_bdevs": 3, 00:32:33.459 "num_base_bdevs_discovered": 2, 00:32:33.459 "num_base_bdevs_operational": 3, 00:32:33.459 "base_bdevs_list": [ 00:32:33.459 { 00:32:33.459 "name": "BaseBdev1", 00:32:33.459 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:33.459 "is_configured": true, 00:32:33.460 "data_offset": 2048, 00:32:33.460 "data_size": 63488 00:32:33.460 }, 00:32:33.460 { 00:32:33.460 "name": null, 00:32:33.460 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:33.460 "is_configured": false, 00:32:33.460 "data_offset": 2048, 00:32:33.460 "data_size": 63488 00:32:33.460 }, 00:32:33.460 { 00:32:33.460 "name": "BaseBdev3", 00:32:33.460 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:33.460 "is_configured": true, 00:32:33.460 "data_offset": 2048, 00:32:33.460 "data_size": 63488 00:32:33.460 } 00:32:33.460 ] 00:32:33.460 }' 00:32:33.460 21:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:33.460 21:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.394 21:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.394 21:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:34.394 21:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:32:34.394 21:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:34.651 [2024-07-15 21:46:07.914252] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:34.909 21:46:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:34.909 "name": "Existed_Raid", 00:32:34.909 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:34.909 "strip_size_kb": 64, 00:32:34.909 "state": "configuring", 00:32:34.909 "raid_level": "raid5f", 00:32:34.909 "superblock": true, 00:32:34.909 "num_base_bdevs": 3, 00:32:34.909 "num_base_bdevs_discovered": 1, 00:32:34.909 "num_base_bdevs_operational": 3, 00:32:34.909 "base_bdevs_list": [ 00:32:34.909 { 00:32:34.909 "name": null, 00:32:34.909 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:34.909 "is_configured": false, 00:32:34.909 "data_offset": 2048, 00:32:34.909 "data_size": 63488 00:32:34.909 }, 00:32:34.909 { 00:32:34.909 "name": null, 00:32:34.909 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:34.909 "is_configured": false, 00:32:34.909 "data_offset": 2048, 00:32:34.909 "data_size": 63488 00:32:34.909 }, 00:32:34.909 { 00:32:34.909 "name": "BaseBdev3", 00:32:34.909 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:34.909 "is_configured": true, 00:32:34.909 "data_offset": 2048, 00:32:34.909 "data_size": 63488 00:32:34.909 } 00:32:34.909 ] 00:32:34.909 }' 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:34.909 21:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.844 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.844 21:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:35.844 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:32:35.844 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:36.103 [2024-07-15 21:46:09.363267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.103 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:36.359 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:36.359 "name": "Existed_Raid", 00:32:36.359 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:36.359 "strip_size_kb": 64, 00:32:36.359 "state": "configuring", 00:32:36.359 "raid_level": "raid5f", 00:32:36.359 "superblock": true, 00:32:36.359 "num_base_bdevs": 3, 00:32:36.359 "num_base_bdevs_discovered": 2, 00:32:36.359 "num_base_bdevs_operational": 3, 00:32:36.359 "base_bdevs_list": [ 00:32:36.359 { 00:32:36.359 "name": null, 00:32:36.359 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:36.359 "is_configured": false, 00:32:36.359 "data_offset": 2048, 00:32:36.359 "data_size": 63488 00:32:36.359 }, 00:32:36.359 { 00:32:36.359 "name": "BaseBdev2", 00:32:36.359 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:36.359 "is_configured": true, 00:32:36.359 "data_offset": 2048, 00:32:36.359 "data_size": 63488 00:32:36.359 }, 00:32:36.359 { 00:32:36.359 "name": "BaseBdev3", 00:32:36.359 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:36.359 "is_configured": true, 00:32:36.359 "data_offset": 2048, 00:32:36.359 "data_size": 63488 00:32:36.359 } 00:32:36.359 ] 00:32:36.359 }' 00:32:36.359 21:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:36.359 21:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.923 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:36.923 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.180 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:32:37.180 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.180 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:37.439 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 65081768-7c16-4c14-a86a-bce469d01aba 00:32:37.698 [2024-07-15 21:46:11.005675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:37.698 [2024-07-15 21:46:11.005983] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:32:37.698 [2024-07-15 21:46:11.006017] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:37.698 [2024-07-15 21:46:11.006164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:32:37.698 NewBaseBdev 00:32:37.698 [2024-07-15 21:46:11.012560] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:32:37.698 [2024-07-15 21:46:11.012602] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:32:37.698 [2024-07-15 21:46:11.012819] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:37.698 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:32:37.698 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:32:37.698 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:37.698 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:37.698 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:37.698 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:37.698 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:37.987 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:38.245 [ 00:32:38.245 { 00:32:38.245 "name": "NewBaseBdev", 00:32:38.245 "aliases": [ 00:32:38.245 "65081768-7c16-4c14-a86a-bce469d01aba" 00:32:38.245 ], 00:32:38.245 "product_name": "Malloc disk", 00:32:38.245 "block_size": 512, 00:32:38.245 "num_blocks": 65536, 00:32:38.245 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:38.245 "assigned_rate_limits": { 00:32:38.245 "rw_ios_per_sec": 0, 00:32:38.245 "rw_mbytes_per_sec": 0, 00:32:38.245 "r_mbytes_per_sec": 0, 00:32:38.245 "w_mbytes_per_sec": 0 00:32:38.245 }, 00:32:38.245 "claimed": true, 00:32:38.245 "claim_type": "exclusive_write", 00:32:38.245 "zoned": false, 00:32:38.245 "supported_io_types": { 00:32:38.245 "read": true, 00:32:38.245 "write": true, 00:32:38.245 "unmap": true, 00:32:38.245 "flush": true, 00:32:38.245 "reset": true, 00:32:38.245 "nvme_admin": false, 00:32:38.245 "nvme_io": false, 00:32:38.245 "nvme_io_md": false, 00:32:38.245 "write_zeroes": true, 00:32:38.245 "zcopy": true, 00:32:38.245 "get_zone_info": false, 00:32:38.245 "zone_management": false, 00:32:38.245 "zone_append": false, 00:32:38.245 "compare": false, 00:32:38.245 "compare_and_write": false, 00:32:38.245 "abort": true, 00:32:38.245 "seek_hole": false, 00:32:38.245 "seek_data": false, 00:32:38.245 "copy": true, 00:32:38.245 "nvme_iov_md": false 00:32:38.245 }, 00:32:38.245 "memory_domains": [ 00:32:38.245 { 00:32:38.245 "dma_device_id": "system", 00:32:38.245 "dma_device_type": 1 00:32:38.245 }, 00:32:38.245 { 00:32:38.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:38.245 "dma_device_type": 2 00:32:38.245 } 00:32:38.245 ], 00:32:38.245 "driver_specific": {} 00:32:38.245 } 00:32:38.245 ] 00:32:38.245 21:46:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.245 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.507 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:38.507 "name": "Existed_Raid", 00:32:38.507 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:38.507 "strip_size_kb": 64, 00:32:38.508 "state": "online", 00:32:38.508 "raid_level": "raid5f", 00:32:38.508 "superblock": true, 00:32:38.508 "num_base_bdevs": 3, 00:32:38.508 "num_base_bdevs_discovered": 3, 00:32:38.508 "num_base_bdevs_operational": 3, 00:32:38.508 "base_bdevs_list": [ 00:32:38.508 { 00:32:38.508 "name": "NewBaseBdev", 00:32:38.508 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:38.508 "is_configured": true, 00:32:38.508 "data_offset": 2048, 00:32:38.508 "data_size": 63488 00:32:38.508 }, 00:32:38.508 { 00:32:38.508 "name": "BaseBdev2", 00:32:38.508 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:38.508 "is_configured": true, 00:32:38.508 "data_offset": 2048, 00:32:38.508 "data_size": 63488 00:32:38.508 }, 00:32:38.508 { 00:32:38.508 "name": "BaseBdev3", 00:32:38.508 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:38.508 "is_configured": true, 00:32:38.508 "data_offset": 2048, 00:32:38.508 "data_size": 63488 00:32:38.508 } 00:32:38.508 ] 00:32:38.508 }' 00:32:38.508 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:38.508 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.072 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:32:39.072 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:39.072 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:39.072 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:39.072 21:46:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:39.072 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:32:39.072 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:39.072 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:39.331 [2024-07-15 21:46:12.593795] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:39.331 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:39.331 "name": "Existed_Raid", 00:32:39.331 "aliases": [ 00:32:39.331 "55f28fec-d93f-4ddd-9436-1d9565df630f" 00:32:39.331 ], 00:32:39.331 "product_name": "Raid Volume", 00:32:39.331 "block_size": 512, 00:32:39.331 "num_blocks": 126976, 00:32:39.331 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:39.331 "assigned_rate_limits": { 00:32:39.331 "rw_ios_per_sec": 0, 00:32:39.331 "rw_mbytes_per_sec": 0, 00:32:39.331 "r_mbytes_per_sec": 0, 00:32:39.331 "w_mbytes_per_sec": 0 00:32:39.331 }, 00:32:39.331 "claimed": false, 00:32:39.331 "zoned": false, 00:32:39.331 "supported_io_types": { 00:32:39.331 "read": true, 00:32:39.331 "write": true, 00:32:39.331 "unmap": false, 00:32:39.331 "flush": false, 00:32:39.331 "reset": true, 00:32:39.331 "nvme_admin": false, 00:32:39.331 "nvme_io": false, 00:32:39.331 "nvme_io_md": false, 00:32:39.331 "write_zeroes": true, 00:32:39.331 "zcopy": false, 00:32:39.331 "get_zone_info": false, 00:32:39.331 "zone_management": false, 00:32:39.331 "zone_append": false, 00:32:39.331 "compare": false, 00:32:39.331 "compare_and_write": false, 00:32:39.331 "abort": false, 00:32:39.331 "seek_hole": false, 00:32:39.331 "seek_data": false, 00:32:39.331 "copy": false, 00:32:39.331 "nvme_iov_md": false 00:32:39.331 }, 00:32:39.331 "driver_specific": { 00:32:39.331 "raid": { 00:32:39.331 "uuid": "55f28fec-d93f-4ddd-9436-1d9565df630f", 00:32:39.331 "strip_size_kb": 64, 00:32:39.331 "state": "online", 00:32:39.331 "raid_level": "raid5f", 00:32:39.331 "superblock": true, 00:32:39.331 "num_base_bdevs": 3, 00:32:39.331 "num_base_bdevs_discovered": 3, 00:32:39.331 "num_base_bdevs_operational": 3, 00:32:39.331 "base_bdevs_list": [ 00:32:39.331 { 00:32:39.331 "name": "NewBaseBdev", 00:32:39.331 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:39.331 "is_configured": true, 00:32:39.331 "data_offset": 2048, 00:32:39.331 "data_size": 63488 00:32:39.331 }, 00:32:39.331 { 00:32:39.331 "name": "BaseBdev2", 00:32:39.331 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:39.331 "is_configured": true, 00:32:39.331 "data_offset": 2048, 00:32:39.331 "data_size": 63488 00:32:39.331 }, 00:32:39.331 { 00:32:39.331 "name": "BaseBdev3", 00:32:39.331 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:39.331 "is_configured": true, 00:32:39.331 "data_offset": 2048, 00:32:39.331 "data_size": 63488 00:32:39.331 } 00:32:39.331 ] 00:32:39.331 } 00:32:39.331 } 00:32:39.331 }' 00:32:39.331 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:39.331 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:32:39.331 BaseBdev2 00:32:39.331 BaseBdev3' 00:32:39.331 21:46:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:39.331 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:39.331 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:39.589 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:39.589 "name": "NewBaseBdev", 00:32:39.590 "aliases": [ 00:32:39.590 "65081768-7c16-4c14-a86a-bce469d01aba" 00:32:39.590 ], 00:32:39.590 "product_name": "Malloc disk", 00:32:39.590 "block_size": 512, 00:32:39.590 "num_blocks": 65536, 00:32:39.590 "uuid": "65081768-7c16-4c14-a86a-bce469d01aba", 00:32:39.590 "assigned_rate_limits": { 00:32:39.590 "rw_ios_per_sec": 0, 00:32:39.590 "rw_mbytes_per_sec": 0, 00:32:39.590 "r_mbytes_per_sec": 0, 00:32:39.590 "w_mbytes_per_sec": 0 00:32:39.590 }, 00:32:39.590 "claimed": true, 00:32:39.590 "claim_type": "exclusive_write", 00:32:39.590 "zoned": false, 00:32:39.590 "supported_io_types": { 00:32:39.590 "read": true, 00:32:39.590 "write": true, 00:32:39.590 "unmap": true, 00:32:39.590 "flush": true, 00:32:39.590 "reset": true, 00:32:39.590 "nvme_admin": false, 00:32:39.590 "nvme_io": false, 00:32:39.590 "nvme_io_md": false, 00:32:39.590 "write_zeroes": true, 00:32:39.590 "zcopy": true, 00:32:39.590 "get_zone_info": false, 00:32:39.590 "zone_management": false, 00:32:39.590 "zone_append": false, 00:32:39.590 "compare": false, 00:32:39.590 "compare_and_write": false, 00:32:39.590 "abort": true, 00:32:39.590 "seek_hole": false, 00:32:39.590 "seek_data": false, 00:32:39.590 "copy": true, 00:32:39.590 "nvme_iov_md": false 00:32:39.590 }, 00:32:39.590 "memory_domains": [ 00:32:39.590 { 00:32:39.590 "dma_device_id": "system", 00:32:39.590 "dma_device_type": 1 00:32:39.590 }, 00:32:39.590 { 00:32:39.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.590 "dma_device_type": 2 00:32:39.590 } 00:32:39.590 ], 00:32:39.590 "driver_specific": {} 00:32:39.590 }' 00:32:39.590 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:39.590 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:39.848 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:39.848 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:39.848 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:39.848 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:39.848 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:39.848 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:39.848 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:39.848 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:40.106 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:40.106 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:40.106 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:40.106 21:46:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:40.106 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:40.363 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:40.363 "name": "BaseBdev2", 00:32:40.363 "aliases": [ 00:32:40.363 "501dec28-b745-49bd-b178-e7276802ea16" 00:32:40.363 ], 00:32:40.363 "product_name": "Malloc disk", 00:32:40.363 "block_size": 512, 00:32:40.363 "num_blocks": 65536, 00:32:40.363 "uuid": "501dec28-b745-49bd-b178-e7276802ea16", 00:32:40.363 "assigned_rate_limits": { 00:32:40.363 "rw_ios_per_sec": 0, 00:32:40.363 "rw_mbytes_per_sec": 0, 00:32:40.363 "r_mbytes_per_sec": 0, 00:32:40.363 "w_mbytes_per_sec": 0 00:32:40.363 }, 00:32:40.363 "claimed": true, 00:32:40.363 "claim_type": "exclusive_write", 00:32:40.363 "zoned": false, 00:32:40.363 "supported_io_types": { 00:32:40.363 "read": true, 00:32:40.363 "write": true, 00:32:40.363 "unmap": true, 00:32:40.363 "flush": true, 00:32:40.363 "reset": true, 00:32:40.363 "nvme_admin": false, 00:32:40.363 "nvme_io": false, 00:32:40.363 "nvme_io_md": false, 00:32:40.363 "write_zeroes": true, 00:32:40.363 "zcopy": true, 00:32:40.363 "get_zone_info": false, 00:32:40.363 "zone_management": false, 00:32:40.363 "zone_append": false, 00:32:40.363 "compare": false, 00:32:40.363 "compare_and_write": false, 00:32:40.363 "abort": true, 00:32:40.363 "seek_hole": false, 00:32:40.363 "seek_data": false, 00:32:40.363 "copy": true, 00:32:40.363 "nvme_iov_md": false 00:32:40.363 }, 00:32:40.363 "memory_domains": [ 00:32:40.363 { 00:32:40.363 "dma_device_id": "system", 00:32:40.363 "dma_device_type": 1 00:32:40.363 }, 00:32:40.363 { 00:32:40.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.363 "dma_device_type": 2 00:32:40.363 } 00:32:40.363 ], 00:32:40.363 "driver_specific": {} 00:32:40.363 }' 00:32:40.363 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:40.364 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:40.364 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:40.364 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:40.364 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:40.622 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:40.880 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:40.880 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:40.880 "name": "BaseBdev3", 00:32:40.880 "aliases": [ 00:32:40.880 "f74bf225-0837-4dac-8124-4ed2bce73561" 00:32:40.880 ], 00:32:40.880 "product_name": "Malloc disk", 00:32:40.880 "block_size": 512, 00:32:40.880 "num_blocks": 65536, 00:32:40.880 "uuid": "f74bf225-0837-4dac-8124-4ed2bce73561", 00:32:40.880 "assigned_rate_limits": { 00:32:40.880 "rw_ios_per_sec": 0, 00:32:40.880 "rw_mbytes_per_sec": 0, 00:32:40.880 "r_mbytes_per_sec": 0, 00:32:40.880 "w_mbytes_per_sec": 0 00:32:40.880 }, 00:32:40.880 "claimed": true, 00:32:40.880 "claim_type": "exclusive_write", 00:32:40.880 "zoned": false, 00:32:40.880 "supported_io_types": { 00:32:40.880 "read": true, 00:32:40.880 "write": true, 00:32:40.880 "unmap": true, 00:32:40.880 "flush": true, 00:32:40.880 "reset": true, 00:32:40.880 "nvme_admin": false, 00:32:40.880 "nvme_io": false, 00:32:40.880 "nvme_io_md": false, 00:32:40.880 "write_zeroes": true, 00:32:40.880 "zcopy": true, 00:32:40.880 "get_zone_info": false, 00:32:40.880 "zone_management": false, 00:32:40.880 "zone_append": false, 00:32:40.880 "compare": false, 00:32:40.880 "compare_and_write": false, 00:32:40.880 "abort": true, 00:32:40.880 "seek_hole": false, 00:32:40.880 "seek_data": false, 00:32:40.880 "copy": true, 00:32:40.880 "nvme_iov_md": false 00:32:40.880 }, 00:32:40.880 "memory_domains": [ 00:32:40.880 { 00:32:40.880 "dma_device_id": "system", 00:32:40.880 "dma_device_type": 1 00:32:40.880 }, 00:32:40.880 { 00:32:40.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.880 "dma_device_type": 2 00:32:40.880 } 00:32:40.880 ], 00:32:40.880 "driver_specific": {} 00:32:40.880 }' 00:32:40.880 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:41.203 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:41.462 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:41.462 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:41.462 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:41.720 [2024-07-15 21:46:14.853755] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:41.720 [2024-07-15 21:46:14.853798] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:41.720 [2024-07-15 21:46:14.853879] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:41.720 [2024-07-15 21:46:14.854149] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:41.720 [2024-07-15 21:46:14.854165] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 152413 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 152413 ']' 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 152413 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 152413 00:32:41.720 killing process with pid 152413 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 152413' 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 152413 00:32:41.720 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 152413 00:32:41.720 [2024-07-15 21:46:14.899841] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:41.979 [2024-07-15 21:46:15.208986] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:43.352 ************************************ 00:32:43.352 END TEST raid5f_state_function_test_sb 00:32:43.352 ************************************ 00:32:43.352 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:32:43.352 00:32:43.352 real 0m30.258s 00:32:43.352 user 0m56.176s 00:32:43.352 sys 0m3.489s 00:32:43.352 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:43.352 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.352 21:46:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:43.352 21:46:16 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:32:43.352 21:46:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:32:43.352 21:46:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:43.352 21:46:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:43.352 ************************************ 00:32:43.352 START TEST raid5f_superblock_test 00:32:43.352 ************************************ 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 3 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=153447 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 153447 /var/tmp/spdk-raid.sock 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 153447 ']' 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:43.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:43.352 21:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.352 [2024-07-15 21:46:16.692343] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:32:43.352 [2024-07-15 21:46:16.692595] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153447 ] 00:32:43.610 [2024-07-15 21:46:16.839527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.869 [2024-07-15 21:46:17.049034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.127 [2024-07-15 21:46:17.252508] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:44.386 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:32:44.646 malloc1 00:32:44.646 21:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:44.906 [2024-07-15 21:46:18.044737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:44.906 [2024-07-15 21:46:18.044843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:44.906 [2024-07-15 21:46:18.044881] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:32:44.906 [2024-07-15 21:46:18.044897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:44.906 [2024-07-15 21:46:18.047138] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:44.906 [2024-07-15 21:46:18.047192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:44.906 pt1 00:32:44.906 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:44.906 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:44.906 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:32:44.906 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:32:44.906 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:44.906 21:46:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:44.906 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:44.906 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:44.906 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:32:45.164 malloc2 00:32:45.164 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:45.422 [2024-07-15 21:46:18.550733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:45.422 [2024-07-15 21:46:18.550848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.422 [2024-07-15 21:46:18.550897] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:32:45.422 [2024-07-15 21:46:18.550914] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.422 [2024-07-15 21:46:18.552958] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.422 [2024-07-15 21:46:18.553028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:45.422 pt2 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:45.422 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:32:45.681 malloc3 00:32:45.681 21:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:45.681 [2024-07-15 21:46:19.041199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:45.681 [2024-07-15 21:46:19.041320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.681 [2024-07-15 21:46:19.041350] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:32:45.681 [2024-07-15 21:46:19.041372] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.681 [2024-07-15 21:46:19.043520] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.681 [2024-07-15 21:46:19.043581] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:45.681 pt3 00:32:45.681 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:45.681 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:45.681 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:32:45.940 [2024-07-15 21:46:19.260903] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:45.940 [2024-07-15 21:46:19.262836] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:45.940 [2024-07-15 21:46:19.262916] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:45.940 [2024-07-15 21:46:19.263114] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:32:45.940 [2024-07-15 21:46:19.263146] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:45.940 [2024-07-15 21:46:19.263294] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:32:45.940 [2024-07-15 21:46:19.269907] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:32:45.940 [2024-07-15 21:46:19.269943] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:32:45.940 [2024-07-15 21:46:19.270176] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.940 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.199 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:46.199 "name": "raid_bdev1", 00:32:46.199 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:46.199 "strip_size_kb": 64, 00:32:46.199 "state": "online", 00:32:46.199 "raid_level": "raid5f", 00:32:46.199 "superblock": true, 00:32:46.199 "num_base_bdevs": 3, 00:32:46.199 "num_base_bdevs_discovered": 3, 00:32:46.199 "num_base_bdevs_operational": 3, 00:32:46.199 
"base_bdevs_list": [ 00:32:46.199 { 00:32:46.199 "name": "pt1", 00:32:46.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:46.199 "is_configured": true, 00:32:46.199 "data_offset": 2048, 00:32:46.199 "data_size": 63488 00:32:46.199 }, 00:32:46.199 { 00:32:46.199 "name": "pt2", 00:32:46.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:46.199 "is_configured": true, 00:32:46.199 "data_offset": 2048, 00:32:46.199 "data_size": 63488 00:32:46.199 }, 00:32:46.199 { 00:32:46.199 "name": "pt3", 00:32:46.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:46.199 "is_configured": true, 00:32:46.199 "data_offset": 2048, 00:32:46.199 "data_size": 63488 00:32:46.199 } 00:32:46.199 ] 00:32:46.199 }' 00:32:46.199 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:46.199 21:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:47.145 [2024-07-15 21:46:20.447620] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:47.145 "name": "raid_bdev1", 00:32:47.145 "aliases": [ 00:32:47.145 "e780a29a-0d7d-410d-98a2-d6254f9e496a" 00:32:47.145 ], 00:32:47.145 "product_name": "Raid Volume", 00:32:47.145 "block_size": 512, 00:32:47.145 "num_blocks": 126976, 00:32:47.145 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:47.145 "assigned_rate_limits": { 00:32:47.145 "rw_ios_per_sec": 0, 00:32:47.145 "rw_mbytes_per_sec": 0, 00:32:47.145 "r_mbytes_per_sec": 0, 00:32:47.145 "w_mbytes_per_sec": 0 00:32:47.145 }, 00:32:47.145 "claimed": false, 00:32:47.145 "zoned": false, 00:32:47.145 "supported_io_types": { 00:32:47.145 "read": true, 00:32:47.145 "write": true, 00:32:47.145 "unmap": false, 00:32:47.145 "flush": false, 00:32:47.145 "reset": true, 00:32:47.145 "nvme_admin": false, 00:32:47.145 "nvme_io": false, 00:32:47.145 "nvme_io_md": false, 00:32:47.145 "write_zeroes": true, 00:32:47.145 "zcopy": false, 00:32:47.145 "get_zone_info": false, 00:32:47.145 "zone_management": false, 00:32:47.145 "zone_append": false, 00:32:47.145 "compare": false, 00:32:47.145 "compare_and_write": false, 00:32:47.145 "abort": false, 00:32:47.145 "seek_hole": false, 00:32:47.145 "seek_data": false, 00:32:47.145 "copy": false, 00:32:47.145 "nvme_iov_md": false 00:32:47.145 }, 00:32:47.145 "driver_specific": { 00:32:47.145 "raid": { 00:32:47.145 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:47.145 "strip_size_kb": 64, 00:32:47.145 "state": "online", 00:32:47.145 "raid_level": "raid5f", 
00:32:47.145 "superblock": true, 00:32:47.145 "num_base_bdevs": 3, 00:32:47.145 "num_base_bdevs_discovered": 3, 00:32:47.145 "num_base_bdevs_operational": 3, 00:32:47.145 "base_bdevs_list": [ 00:32:47.145 { 00:32:47.145 "name": "pt1", 00:32:47.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:47.145 "is_configured": true, 00:32:47.145 "data_offset": 2048, 00:32:47.145 "data_size": 63488 00:32:47.145 }, 00:32:47.145 { 00:32:47.145 "name": "pt2", 00:32:47.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:47.145 "is_configured": true, 00:32:47.145 "data_offset": 2048, 00:32:47.145 "data_size": 63488 00:32:47.145 }, 00:32:47.145 { 00:32:47.145 "name": "pt3", 00:32:47.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:47.145 "is_configured": true, 00:32:47.145 "data_offset": 2048, 00:32:47.145 "data_size": 63488 00:32:47.145 } 00:32:47.145 ] 00:32:47.145 } 00:32:47.145 } 00:32:47.145 }' 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:47.145 pt2 00:32:47.145 pt3' 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:47.145 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:47.403 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:47.403 "name": "pt1", 00:32:47.403 "aliases": [ 00:32:47.403 "00000000-0000-0000-0000-000000000001" 00:32:47.403 ], 00:32:47.403 "product_name": "passthru", 00:32:47.403 "block_size": 512, 00:32:47.403 "num_blocks": 65536, 00:32:47.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:47.403 "assigned_rate_limits": { 00:32:47.403 "rw_ios_per_sec": 0, 00:32:47.404 "rw_mbytes_per_sec": 0, 00:32:47.404 "r_mbytes_per_sec": 0, 00:32:47.404 "w_mbytes_per_sec": 0 00:32:47.404 }, 00:32:47.404 "claimed": true, 00:32:47.404 "claim_type": "exclusive_write", 00:32:47.404 "zoned": false, 00:32:47.404 "supported_io_types": { 00:32:47.404 "read": true, 00:32:47.404 "write": true, 00:32:47.404 "unmap": true, 00:32:47.404 "flush": true, 00:32:47.404 "reset": true, 00:32:47.404 "nvme_admin": false, 00:32:47.404 "nvme_io": false, 00:32:47.404 "nvme_io_md": false, 00:32:47.404 "write_zeroes": true, 00:32:47.404 "zcopy": true, 00:32:47.404 "get_zone_info": false, 00:32:47.404 "zone_management": false, 00:32:47.404 "zone_append": false, 00:32:47.404 "compare": false, 00:32:47.404 "compare_and_write": false, 00:32:47.404 "abort": true, 00:32:47.404 "seek_hole": false, 00:32:47.404 "seek_data": false, 00:32:47.404 "copy": true, 00:32:47.404 "nvme_iov_md": false 00:32:47.404 }, 00:32:47.404 "memory_domains": [ 00:32:47.404 { 00:32:47.404 "dma_device_id": "system", 00:32:47.404 "dma_device_type": 1 00:32:47.404 }, 00:32:47.404 { 00:32:47.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.404 "dma_device_type": 2 00:32:47.404 } 00:32:47.404 ], 00:32:47.404 "driver_specific": { 00:32:47.404 "passthru": { 00:32:47.404 "name": "pt1", 00:32:47.404 "base_bdev_name": "malloc1" 00:32:47.404 } 00:32:47.404 } 00:32:47.404 }' 00:32:47.404 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:32:47.404 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:47.662 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:47.662 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:47.662 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:47.662 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:47.662 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:47.662 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:47.920 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:47.920 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:47.920 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:47.920 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:47.920 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:47.920 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:47.920 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:48.178 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:48.178 "name": "pt2", 00:32:48.178 "aliases": [ 00:32:48.178 "00000000-0000-0000-0000-000000000002" 00:32:48.178 ], 00:32:48.178 "product_name": "passthru", 00:32:48.178 "block_size": 512, 00:32:48.178 "num_blocks": 65536, 00:32:48.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:48.178 "assigned_rate_limits": { 00:32:48.178 "rw_ios_per_sec": 0, 00:32:48.178 "rw_mbytes_per_sec": 0, 00:32:48.178 "r_mbytes_per_sec": 0, 00:32:48.178 "w_mbytes_per_sec": 0 00:32:48.178 }, 00:32:48.178 "claimed": true, 00:32:48.178 "claim_type": "exclusive_write", 00:32:48.178 "zoned": false, 00:32:48.178 "supported_io_types": { 00:32:48.178 "read": true, 00:32:48.178 "write": true, 00:32:48.178 "unmap": true, 00:32:48.178 "flush": true, 00:32:48.178 "reset": true, 00:32:48.178 "nvme_admin": false, 00:32:48.178 "nvme_io": false, 00:32:48.178 "nvme_io_md": false, 00:32:48.178 "write_zeroes": true, 00:32:48.178 "zcopy": true, 00:32:48.178 "get_zone_info": false, 00:32:48.178 "zone_management": false, 00:32:48.178 "zone_append": false, 00:32:48.178 "compare": false, 00:32:48.178 "compare_and_write": false, 00:32:48.178 "abort": true, 00:32:48.178 "seek_hole": false, 00:32:48.178 "seek_data": false, 00:32:48.178 "copy": true, 00:32:48.178 "nvme_iov_md": false 00:32:48.178 }, 00:32:48.178 "memory_domains": [ 00:32:48.178 { 00:32:48.178 "dma_device_id": "system", 00:32:48.178 "dma_device_type": 1 00:32:48.178 }, 00:32:48.178 { 00:32:48.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.178 "dma_device_type": 2 00:32:48.178 } 00:32:48.178 ], 00:32:48.178 "driver_specific": { 00:32:48.178 "passthru": { 00:32:48.178 "name": "pt2", 00:32:48.178 "base_bdev_name": "malloc2" 00:32:48.178 } 00:32:48.178 } 00:32:48.178 }' 00:32:48.178 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:48.178 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:48.178 21:46:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:48.178 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:48.437 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:48.437 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:48.437 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:48.438 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:48.438 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:48.438 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:48.438 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:48.696 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:48.696 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:48.696 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:48.696 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:32:48.953 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:48.953 "name": "pt3", 00:32:48.953 "aliases": [ 00:32:48.953 "00000000-0000-0000-0000-000000000003" 00:32:48.953 ], 00:32:48.953 "product_name": "passthru", 00:32:48.953 "block_size": 512, 00:32:48.953 "num_blocks": 65536, 00:32:48.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:48.953 "assigned_rate_limits": { 00:32:48.953 "rw_ios_per_sec": 0, 00:32:48.953 "rw_mbytes_per_sec": 0, 00:32:48.953 "r_mbytes_per_sec": 0, 00:32:48.953 "w_mbytes_per_sec": 0 00:32:48.953 }, 00:32:48.953 "claimed": true, 00:32:48.953 "claim_type": "exclusive_write", 00:32:48.953 "zoned": false, 00:32:48.953 "supported_io_types": { 00:32:48.953 "read": true, 00:32:48.953 "write": true, 00:32:48.953 "unmap": true, 00:32:48.953 "flush": true, 00:32:48.953 "reset": true, 00:32:48.953 "nvme_admin": false, 00:32:48.953 "nvme_io": false, 00:32:48.953 "nvme_io_md": false, 00:32:48.953 "write_zeroes": true, 00:32:48.953 "zcopy": true, 00:32:48.953 "get_zone_info": false, 00:32:48.953 "zone_management": false, 00:32:48.954 "zone_append": false, 00:32:48.954 "compare": false, 00:32:48.954 "compare_and_write": false, 00:32:48.954 "abort": true, 00:32:48.954 "seek_hole": false, 00:32:48.954 "seek_data": false, 00:32:48.954 "copy": true, 00:32:48.954 "nvme_iov_md": false 00:32:48.954 }, 00:32:48.954 "memory_domains": [ 00:32:48.954 { 00:32:48.954 "dma_device_id": "system", 00:32:48.954 "dma_device_type": 1 00:32:48.954 }, 00:32:48.954 { 00:32:48.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.954 "dma_device_type": 2 00:32:48.954 } 00:32:48.954 ], 00:32:48.954 "driver_specific": { 00:32:48.954 "passthru": { 00:32:48.954 "name": "pt3", 00:32:48.954 "base_bdev_name": "malloc3" 00:32:48.954 } 00:32:48.954 } 00:32:48.954 }' 00:32:48.954 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:48.954 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:48.954 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:48.954 21:46:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:48.954 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:49.211 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:49.211 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:49.211 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:49.211 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:49.211 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:49.211 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:49.211 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:49.211 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:49.212 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:32:49.469 [2024-07-15 21:46:22.783867] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:49.469 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e780a29a-0d7d-410d-98a2-d6254f9e496a 00:32:49.469 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z e780a29a-0d7d-410d-98a2-d6254f9e496a ']' 00:32:49.469 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:49.727 [2024-07-15 21:46:23.011320] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:49.727 [2024-07-15 21:46:23.011372] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:49.727 [2024-07-15 21:46:23.011460] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:49.727 [2024-07-15 21:46:23.011541] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:49.727 [2024-07-15 21:46:23.011551] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:32:49.727 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.727 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:32:49.985 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:32:49.985 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:32:49.985 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:49.985 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:50.243 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:50.243 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:50.502 21:46:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:50.502 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:32:50.761 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:50.761 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:51.020 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:51.278 [2024-07-15 21:46:24.445022] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:51.278 [2024-07-15 21:46:24.447034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:51.278 [2024-07-15 21:46:24.447155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:51.278 [2024-07-15 21:46:24.447249] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:51.278 [2024-07-15 21:46:24.447377] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:51.278 [2024-07-15 21:46:24.447436] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:51.278 [2024-07-15 21:46:24.447508] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:32:51.278 [2024-07-15 21:46:24.447542] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:32:51.278 request: 00:32:51.278 { 00:32:51.278 "name": "raid_bdev1", 00:32:51.278 "raid_level": "raid5f", 00:32:51.278 "base_bdevs": [ 00:32:51.278 "malloc1", 00:32:51.278 "malloc2", 00:32:51.278 "malloc3" 00:32:51.278 ], 00:32:51.278 "strip_size_kb": 64, 00:32:51.278 "superblock": false, 00:32:51.278 "method": "bdev_raid_create", 00:32:51.278 "req_id": 1 00:32:51.278 } 00:32:51.278 Got JSON-RPC error response 00:32:51.278 response: 00:32:51.278 { 00:32:51.278 "code": -17, 00:32:51.278 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:51.278 } 00:32:51.278 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:32:51.278 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:51.278 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:51.278 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:51.278 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.278 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:51.536 [2024-07-15 21:46:24.892178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:51.536 [2024-07-15 21:46:24.892363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.536 [2024-07-15 21:46:24.892437] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:51.536 [2024-07-15 21:46:24.892489] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.536 [2024-07-15 21:46:24.894846] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.536 [2024-07-15 21:46:24.894951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:51.536 [2024-07-15 21:46:24.895133] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:51.536 [2024-07-15 21:46:24.895228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:51.536 pt1 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:51.536 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:51.801 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.801 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.801 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:51.801 "name": "raid_bdev1", 00:32:51.801 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:51.801 "strip_size_kb": 64, 00:32:51.801 "state": "configuring", 00:32:51.801 "raid_level": "raid5f", 00:32:51.801 "superblock": true, 00:32:51.801 "num_base_bdevs": 3, 00:32:51.801 "num_base_bdevs_discovered": 1, 00:32:51.801 "num_base_bdevs_operational": 3, 00:32:51.801 "base_bdevs_list": [ 00:32:51.801 { 00:32:51.801 "name": "pt1", 00:32:51.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:51.801 "is_configured": true, 00:32:51.801 "data_offset": 2048, 00:32:51.801 "data_size": 63488 00:32:51.801 }, 00:32:51.801 { 00:32:51.801 "name": null, 00:32:51.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:51.801 "is_configured": false, 00:32:51.801 "data_offset": 2048, 00:32:51.801 "data_size": 63488 00:32:51.801 }, 00:32:51.801 { 00:32:51.801 "name": null, 00:32:51.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:51.801 "is_configured": false, 00:32:51.801 "data_offset": 2048, 00:32:51.801 "data_size": 63488 00:32:51.801 } 00:32:51.801 ] 00:32:51.801 }' 00:32:51.801 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:51.801 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.735 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:32:52.735 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:52.735 [2024-07-15 21:46:26.054228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:52.735 [2024-07-15 21:46:26.054391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.735 [2024-07-15 21:46:26.054445] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:52.735 [2024-07-15 21:46:26.054491] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.735 [2024-07-15 21:46:26.054973] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.735 [2024-07-15 21:46:26.055044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:52.735 [2024-07-15 21:46:26.055197] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:52.735 [2024-07-15 21:46:26.055252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:52.735 pt2 00:32:52.735 21:46:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:52.994 [2024-07-15 21:46:26.313836] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.994 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.251 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:53.251 "name": "raid_bdev1", 00:32:53.251 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:53.251 "strip_size_kb": 64, 00:32:53.251 "state": "configuring", 00:32:53.251 "raid_level": "raid5f", 00:32:53.251 "superblock": true, 00:32:53.251 "num_base_bdevs": 3, 00:32:53.251 "num_base_bdevs_discovered": 1, 00:32:53.251 "num_base_bdevs_operational": 3, 00:32:53.251 "base_bdevs_list": [ 00:32:53.251 { 00:32:53.251 "name": "pt1", 00:32:53.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:53.251 "is_configured": true, 00:32:53.251 "data_offset": 2048, 00:32:53.251 "data_size": 63488 00:32:53.251 }, 00:32:53.251 { 00:32:53.251 "name": null, 00:32:53.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:53.251 "is_configured": false, 00:32:53.251 "data_offset": 2048, 00:32:53.251 "data_size": 63488 00:32:53.251 }, 00:32:53.251 { 00:32:53.251 "name": null, 00:32:53.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:53.251 "is_configured": false, 00:32:53.252 "data_offset": 2048, 00:32:53.252 "data_size": 63488 00:32:53.252 } 00:32:53.252 ] 00:32:53.252 }' 00:32:53.252 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:53.252 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.186 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:32:54.186 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:54.186 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:54.186 [2024-07-15 21:46:27.456050] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:54.186 [2024-07-15 21:46:27.456235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:54.186 [2024-07-15 21:46:27.456304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:54.186 [2024-07-15 21:46:27.456354] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:54.186 [2024-07-15 21:46:27.456877] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:54.186 [2024-07-15 21:46:27.456954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:54.186 [2024-07-15 21:46:27.457122] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:54.187 [2024-07-15 21:46:27.457174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:54.187 pt2 00:32:54.187 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:32:54.187 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:54.187 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:54.445 [2024-07-15 21:46:27.679685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:54.445 [2024-07-15 21:46:27.679849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:54.445 [2024-07-15 21:46:27.679900] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:54.445 [2024-07-15 21:46:27.679946] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:54.445 [2024-07-15 21:46:27.680477] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:54.445 [2024-07-15 21:46:27.680543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:54.445 [2024-07-15 21:46:27.680693] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:54.445 [2024-07-15 21:46:27.680744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:54.445 [2024-07-15 21:46:27.680899] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:32:54.445 [2024-07-15 21:46:27.680934] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:54.445 [2024-07-15 21:46:27.681072] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:54.445 [2024-07-15 21:46:27.687288] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:32:54.445 [2024-07-15 21:46:27.687374] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:32:54.445 [2024-07-15 21:46:27.687618] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:54.445 pt3 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.445 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.704 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.704 "name": "raid_bdev1", 00:32:54.704 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:54.704 "strip_size_kb": 64, 00:32:54.704 "state": "online", 00:32:54.704 "raid_level": "raid5f", 00:32:54.704 "superblock": true, 00:32:54.704 "num_base_bdevs": 3, 00:32:54.704 "num_base_bdevs_discovered": 3, 00:32:54.704 "num_base_bdevs_operational": 3, 00:32:54.704 "base_bdevs_list": [ 00:32:54.704 { 00:32:54.704 "name": "pt1", 00:32:54.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:54.704 "is_configured": true, 00:32:54.704 "data_offset": 2048, 00:32:54.704 "data_size": 63488 00:32:54.704 }, 00:32:54.704 { 00:32:54.704 "name": "pt2", 00:32:54.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:54.704 "is_configured": true, 00:32:54.704 "data_offset": 2048, 00:32:54.704 "data_size": 63488 00:32:54.704 }, 00:32:54.704 { 00:32:54.704 "name": "pt3", 00:32:54.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:54.704 "is_configured": true, 00:32:54.704 "data_offset": 2048, 00:32:54.704 "data_size": 63488 00:32:54.704 } 00:32:54.704 ] 00:32:54.704 }' 00:32:54.704 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.704 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.272 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:32:55.272 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:55.272 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:55.272 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:55.272 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:55.272 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:55.272 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:55.272 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # 
jq '.[]' 00:32:55.530 [2024-07-15 21:46:28.805460] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:55.530 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:55.530 "name": "raid_bdev1", 00:32:55.530 "aliases": [ 00:32:55.530 "e780a29a-0d7d-410d-98a2-d6254f9e496a" 00:32:55.530 ], 00:32:55.530 "product_name": "Raid Volume", 00:32:55.530 "block_size": 512, 00:32:55.530 "num_blocks": 126976, 00:32:55.530 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:55.530 "assigned_rate_limits": { 00:32:55.530 "rw_ios_per_sec": 0, 00:32:55.530 "rw_mbytes_per_sec": 0, 00:32:55.530 "r_mbytes_per_sec": 0, 00:32:55.530 "w_mbytes_per_sec": 0 00:32:55.530 }, 00:32:55.530 "claimed": false, 00:32:55.530 "zoned": false, 00:32:55.530 "supported_io_types": { 00:32:55.530 "read": true, 00:32:55.530 "write": true, 00:32:55.530 "unmap": false, 00:32:55.530 "flush": false, 00:32:55.530 "reset": true, 00:32:55.530 "nvme_admin": false, 00:32:55.530 "nvme_io": false, 00:32:55.530 "nvme_io_md": false, 00:32:55.530 "write_zeroes": true, 00:32:55.530 "zcopy": false, 00:32:55.530 "get_zone_info": false, 00:32:55.530 "zone_management": false, 00:32:55.530 "zone_append": false, 00:32:55.530 "compare": false, 00:32:55.530 "compare_and_write": false, 00:32:55.530 "abort": false, 00:32:55.530 "seek_hole": false, 00:32:55.530 "seek_data": false, 00:32:55.530 "copy": false, 00:32:55.530 "nvme_iov_md": false 00:32:55.530 }, 00:32:55.530 "driver_specific": { 00:32:55.530 "raid": { 00:32:55.530 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:55.530 "strip_size_kb": 64, 00:32:55.530 "state": "online", 00:32:55.530 "raid_level": "raid5f", 00:32:55.530 "superblock": true, 00:32:55.530 "num_base_bdevs": 3, 00:32:55.530 "num_base_bdevs_discovered": 3, 00:32:55.530 "num_base_bdevs_operational": 3, 00:32:55.530 "base_bdevs_list": [ 00:32:55.530 { 00:32:55.530 "name": "pt1", 00:32:55.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:55.530 "is_configured": true, 00:32:55.530 "data_offset": 2048, 00:32:55.530 "data_size": 63488 00:32:55.530 }, 00:32:55.530 { 00:32:55.530 "name": "pt2", 00:32:55.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:55.530 "is_configured": true, 00:32:55.530 "data_offset": 2048, 00:32:55.530 "data_size": 63488 00:32:55.530 }, 00:32:55.530 { 00:32:55.530 "name": "pt3", 00:32:55.530 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:55.530 "is_configured": true, 00:32:55.530 "data_offset": 2048, 00:32:55.531 "data_size": 63488 00:32:55.531 } 00:32:55.531 ] 00:32:55.531 } 00:32:55.531 } 00:32:55.531 }' 00:32:55.531 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:55.531 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:55.531 pt2 00:32:55.531 pt3' 00:32:55.531 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:55.531 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:55.531 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:55.789 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:55.789 "name": "pt1", 00:32:55.789 "aliases": [ 00:32:55.789 "00000000-0000-0000-0000-000000000001" 00:32:55.789 ], 
00:32:55.789 "product_name": "passthru", 00:32:55.789 "block_size": 512, 00:32:55.789 "num_blocks": 65536, 00:32:55.789 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:55.789 "assigned_rate_limits": { 00:32:55.789 "rw_ios_per_sec": 0, 00:32:55.789 "rw_mbytes_per_sec": 0, 00:32:55.789 "r_mbytes_per_sec": 0, 00:32:55.789 "w_mbytes_per_sec": 0 00:32:55.789 }, 00:32:55.789 "claimed": true, 00:32:55.789 "claim_type": "exclusive_write", 00:32:55.789 "zoned": false, 00:32:55.789 "supported_io_types": { 00:32:55.789 "read": true, 00:32:55.789 "write": true, 00:32:55.789 "unmap": true, 00:32:55.789 "flush": true, 00:32:55.789 "reset": true, 00:32:55.789 "nvme_admin": false, 00:32:55.789 "nvme_io": false, 00:32:55.789 "nvme_io_md": false, 00:32:55.789 "write_zeroes": true, 00:32:55.789 "zcopy": true, 00:32:55.789 "get_zone_info": false, 00:32:55.789 "zone_management": false, 00:32:55.789 "zone_append": false, 00:32:55.789 "compare": false, 00:32:55.789 "compare_and_write": false, 00:32:55.789 "abort": true, 00:32:55.789 "seek_hole": false, 00:32:55.789 "seek_data": false, 00:32:55.789 "copy": true, 00:32:55.789 "nvme_iov_md": false 00:32:55.789 }, 00:32:55.789 "memory_domains": [ 00:32:55.789 { 00:32:55.789 "dma_device_id": "system", 00:32:55.789 "dma_device_type": 1 00:32:55.789 }, 00:32:55.789 { 00:32:55.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:55.789 "dma_device_type": 2 00:32:55.789 } 00:32:55.789 ], 00:32:55.789 "driver_specific": { 00:32:55.789 "passthru": { 00:32:55.789 "name": "pt1", 00:32:55.789 "base_bdev_name": "malloc1" 00:32:55.789 } 00:32:55.789 } 00:32:55.789 }' 00:32:55.789 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:55.789 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:56.047 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:56.047 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:56.047 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:56.047 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:56.047 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:56.047 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:56.325 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:56.325 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:56.325 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:56.325 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:56.325 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:56.325 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:56.325 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:56.605 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:56.605 "name": "pt2", 00:32:56.605 "aliases": [ 00:32:56.605 "00000000-0000-0000-0000-000000000002" 00:32:56.605 ], 00:32:56.605 "product_name": "passthru", 00:32:56.606 "block_size": 512, 00:32:56.606 "num_blocks": 65536, 00:32:56.606 
"uuid": "00000000-0000-0000-0000-000000000002", 00:32:56.606 "assigned_rate_limits": { 00:32:56.606 "rw_ios_per_sec": 0, 00:32:56.606 "rw_mbytes_per_sec": 0, 00:32:56.606 "r_mbytes_per_sec": 0, 00:32:56.606 "w_mbytes_per_sec": 0 00:32:56.606 }, 00:32:56.606 "claimed": true, 00:32:56.606 "claim_type": "exclusive_write", 00:32:56.606 "zoned": false, 00:32:56.606 "supported_io_types": { 00:32:56.606 "read": true, 00:32:56.606 "write": true, 00:32:56.606 "unmap": true, 00:32:56.606 "flush": true, 00:32:56.606 "reset": true, 00:32:56.606 "nvme_admin": false, 00:32:56.606 "nvme_io": false, 00:32:56.606 "nvme_io_md": false, 00:32:56.606 "write_zeroes": true, 00:32:56.606 "zcopy": true, 00:32:56.606 "get_zone_info": false, 00:32:56.606 "zone_management": false, 00:32:56.606 "zone_append": false, 00:32:56.606 "compare": false, 00:32:56.606 "compare_and_write": false, 00:32:56.606 "abort": true, 00:32:56.606 "seek_hole": false, 00:32:56.606 "seek_data": false, 00:32:56.606 "copy": true, 00:32:56.606 "nvme_iov_md": false 00:32:56.606 }, 00:32:56.606 "memory_domains": [ 00:32:56.606 { 00:32:56.606 "dma_device_id": "system", 00:32:56.606 "dma_device_type": 1 00:32:56.606 }, 00:32:56.606 { 00:32:56.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:56.606 "dma_device_type": 2 00:32:56.606 } 00:32:56.606 ], 00:32:56.606 "driver_specific": { 00:32:56.606 "passthru": { 00:32:56.606 "name": "pt2", 00:32:56.606 "base_bdev_name": "malloc2" 00:32:56.606 } 00:32:56.606 } 00:32:56.606 }' 00:32:56.606 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:56.606 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:56.606 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:56.606 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:56.606 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:32:56.864 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:57.122 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:57.122 "name": "pt3", 00:32:57.122 "aliases": [ 00:32:57.122 "00000000-0000-0000-0000-000000000003" 00:32:57.122 ], 00:32:57.122 "product_name": "passthru", 00:32:57.122 "block_size": 512, 00:32:57.122 "num_blocks": 65536, 00:32:57.122 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:57.122 "assigned_rate_limits": { 00:32:57.122 "rw_ios_per_sec": 0, 
00:32:57.122 "rw_mbytes_per_sec": 0, 00:32:57.122 "r_mbytes_per_sec": 0, 00:32:57.122 "w_mbytes_per_sec": 0 00:32:57.122 }, 00:32:57.122 "claimed": true, 00:32:57.122 "claim_type": "exclusive_write", 00:32:57.122 "zoned": false, 00:32:57.122 "supported_io_types": { 00:32:57.122 "read": true, 00:32:57.122 "write": true, 00:32:57.122 "unmap": true, 00:32:57.122 "flush": true, 00:32:57.122 "reset": true, 00:32:57.122 "nvme_admin": false, 00:32:57.122 "nvme_io": false, 00:32:57.122 "nvme_io_md": false, 00:32:57.122 "write_zeroes": true, 00:32:57.122 "zcopy": true, 00:32:57.122 "get_zone_info": false, 00:32:57.122 "zone_management": false, 00:32:57.122 "zone_append": false, 00:32:57.122 "compare": false, 00:32:57.122 "compare_and_write": false, 00:32:57.122 "abort": true, 00:32:57.122 "seek_hole": false, 00:32:57.122 "seek_data": false, 00:32:57.122 "copy": true, 00:32:57.122 "nvme_iov_md": false 00:32:57.122 }, 00:32:57.122 "memory_domains": [ 00:32:57.122 { 00:32:57.122 "dma_device_id": "system", 00:32:57.122 "dma_device_type": 1 00:32:57.122 }, 00:32:57.122 { 00:32:57.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:57.122 "dma_device_type": 2 00:32:57.122 } 00:32:57.122 ], 00:32:57.122 "driver_specific": { 00:32:57.122 "passthru": { 00:32:57.122 "name": "pt3", 00:32:57.122 "base_bdev_name": "malloc3" 00:32:57.122 } 00:32:57.122 } 00:32:57.122 }' 00:32:57.123 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:57.380 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:57.380 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:57.380 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:57.380 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:57.380 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:57.380 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:57.380 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:57.638 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:57.638 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:57.638 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:57.638 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:57.638 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:57.638 21:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:32:57.908 [2024-07-15 21:46:31.137450] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:57.908 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' e780a29a-0d7d-410d-98a2-d6254f9e496a '!=' e780a29a-0d7d-410d-98a2-d6254f9e496a ']' 00:32:57.908 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:32:57.908 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:57.908 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:32:57.908 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:58.166 [2024-07-15 21:46:31.364895] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.166 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.425 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:58.425 "name": "raid_bdev1", 00:32:58.425 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:32:58.425 "strip_size_kb": 64, 00:32:58.425 "state": "online", 00:32:58.425 "raid_level": "raid5f", 00:32:58.425 "superblock": true, 00:32:58.425 "num_base_bdevs": 3, 00:32:58.425 "num_base_bdevs_discovered": 2, 00:32:58.425 "num_base_bdevs_operational": 2, 00:32:58.425 "base_bdevs_list": [ 00:32:58.425 { 00:32:58.425 "name": null, 00:32:58.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.425 "is_configured": false, 00:32:58.425 "data_offset": 2048, 00:32:58.425 "data_size": 63488 00:32:58.425 }, 00:32:58.425 { 00:32:58.425 "name": "pt2", 00:32:58.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:58.425 "is_configured": true, 00:32:58.425 "data_offset": 2048, 00:32:58.425 "data_size": 63488 00:32:58.425 }, 00:32:58.425 { 00:32:58.425 "name": "pt3", 00:32:58.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:58.425 "is_configured": true, 00:32:58.425 "data_offset": 2048, 00:32:58.425 "data_size": 63488 00:32:58.425 } 00:32:58.425 ] 00:32:58.425 }' 00:32:58.425 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:58.425 21:46:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:58.990 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:59.248 [2024-07-15 21:46:32.546807] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:59.248 [2024-07-15 21:46:32.546909] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:59.248 [2024-07-15 21:46:32.546994] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:32:59.248 [2024-07-15 21:46:32.547083] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:59.248 [2024-07-15 21:46:32.547105] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:32:59.248 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.248 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:32:59.506 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:32:59.506 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:32:59.506 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:32:59.506 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:59.506 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:59.764 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:32:59.764 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:59.764 21:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:00.021 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:00.021 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:00.021 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:33:00.021 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:00.021 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:00.021 [2024-07-15 21:46:33.381320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:00.021 [2024-07-15 21:46:33.381459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:00.021 [2024-07-15 21:46:33.381526] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:33:00.021 [2024-07-15 21:46:33.381571] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:00.021 [2024-07-15 21:46:33.383693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:00.021 [2024-07-15 21:46:33.383788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:00.021 [2024-07-15 21:46:33.383941] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:00.021 [2024-07-15 21:46:33.384027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:00.021 pt2 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:00.279 "name": "raid_bdev1", 00:33:00.279 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:33:00.279 "strip_size_kb": 64, 00:33:00.279 "state": "configuring", 00:33:00.279 "raid_level": "raid5f", 00:33:00.279 "superblock": true, 00:33:00.279 "num_base_bdevs": 3, 00:33:00.279 "num_base_bdevs_discovered": 1, 00:33:00.279 "num_base_bdevs_operational": 2, 00:33:00.279 "base_bdevs_list": [ 00:33:00.279 { 00:33:00.279 "name": null, 00:33:00.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.279 "is_configured": false, 00:33:00.279 "data_offset": 2048, 00:33:00.279 "data_size": 63488 00:33:00.279 }, 00:33:00.279 { 00:33:00.279 "name": "pt2", 00:33:00.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:00.279 "is_configured": true, 00:33:00.279 "data_offset": 2048, 00:33:00.279 "data_size": 63488 00:33:00.279 }, 00:33:00.279 { 00:33:00.279 "name": null, 00:33:00.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:00.279 "is_configured": false, 00:33:00.279 "data_offset": 2048, 00:33:00.279 "data_size": 63488 00:33:00.279 } 00:33:00.279 ] 00:33:00.279 }' 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:00.279 21:46:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:01.212 [2024-07-15 21:46:34.507402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:01.212 [2024-07-15 21:46:34.507568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:01.212 [2024-07-15 21:46:34.507634] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:33:01.212 [2024-07-15 21:46:34.507681] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:01.212 [2024-07-15 
21:46:34.508148] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:01.212 [2024-07-15 21:46:34.508208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:01.212 [2024-07-15 21:46:34.508336] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:01.212 [2024-07-15 21:46:34.508457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:01.212 [2024-07-15 21:46:34.508598] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:33:01.212 [2024-07-15 21:46:34.508633] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:01.212 [2024-07-15 21:46:34.508763] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:01.212 [2024-07-15 21:46:34.514771] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:33:01.212 [2024-07-15 21:46:34.514838] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:33:01.212 [2024-07-15 21:46:34.515187] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.212 pt3 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.212 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.508 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:01.508 "name": "raid_bdev1", 00:33:01.508 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:33:01.508 "strip_size_kb": 64, 00:33:01.508 "state": "online", 00:33:01.508 "raid_level": "raid5f", 00:33:01.508 "superblock": true, 00:33:01.508 "num_base_bdevs": 3, 00:33:01.508 "num_base_bdevs_discovered": 2, 00:33:01.508 "num_base_bdevs_operational": 2, 00:33:01.508 "base_bdevs_list": [ 00:33:01.508 { 00:33:01.508 "name": null, 00:33:01.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.508 "is_configured": false, 00:33:01.508 "data_offset": 2048, 00:33:01.508 "data_size": 63488 00:33:01.508 }, 00:33:01.508 { 00:33:01.508 "name": "pt2", 00:33:01.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:01.508 "is_configured": true, 
00:33:01.508 "data_offset": 2048, 00:33:01.508 "data_size": 63488 00:33:01.508 }, 00:33:01.508 { 00:33:01.508 "name": "pt3", 00:33:01.508 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:01.508 "is_configured": true, 00:33:01.508 "data_offset": 2048, 00:33:01.508 "data_size": 63488 00:33:01.508 } 00:33:01.508 ] 00:33:01.508 }' 00:33:01.508 21:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:01.508 21:46:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.096 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:02.354 [2024-07-15 21:46:35.536771] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:02.354 [2024-07-15 21:46:35.536874] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:02.354 [2024-07-15 21:46:35.536960] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:02.354 [2024-07-15 21:46:35.537033] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:02.354 [2024-07-15 21:46:35.537073] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:33:02.354 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.354 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:33:02.612 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:33:02.612 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:33:02.612 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:33:02.612 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:33:02.612 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:02.612 21:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:02.871 [2024-07-15 21:46:36.147727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:02.871 [2024-07-15 21:46:36.147880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:02.871 [2024-07-15 21:46:36.147936] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:33:02.871 [2024-07-15 21:46:36.147977] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:02.871 [2024-07-15 21:46:36.150313] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:02.871 [2024-07-15 21:46:36.150419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:02.871 [2024-07-15 21:46:36.150565] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:02.871 [2024-07-15 21:46:36.150647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:02.871 [2024-07-15 21:46:36.150806] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:02.871 [2024-07-15 21:46:36.150844] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:02.871 [2024-07-15 21:46:36.150880] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:33:02.871 [2024-07-15 21:46:36.150992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:02.871 pt1 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.871 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.154 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:03.154 "name": "raid_bdev1", 00:33:03.154 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:33:03.154 "strip_size_kb": 64, 00:33:03.154 "state": "configuring", 00:33:03.154 "raid_level": "raid5f", 00:33:03.154 "superblock": true, 00:33:03.154 "num_base_bdevs": 3, 00:33:03.154 "num_base_bdevs_discovered": 1, 00:33:03.154 "num_base_bdevs_operational": 2, 00:33:03.154 "base_bdevs_list": [ 00:33:03.154 { 00:33:03.154 "name": null, 00:33:03.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.154 "is_configured": false, 00:33:03.154 "data_offset": 2048, 00:33:03.154 "data_size": 63488 00:33:03.154 }, 00:33:03.154 { 00:33:03.154 "name": "pt2", 00:33:03.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:03.155 "is_configured": true, 00:33:03.155 "data_offset": 2048, 00:33:03.155 "data_size": 63488 00:33:03.155 }, 00:33:03.155 { 00:33:03.155 "name": null, 00:33:03.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:03.155 "is_configured": false, 00:33:03.155 "data_offset": 2048, 00:33:03.155 "data_size": 63488 00:33:03.155 } 00:33:03.155 ] 00:33:03.155 }' 00:33:03.155 21:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:03.155 21:46:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.723 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:33:03.723 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:03.983 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:33:03.984 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:04.247 [2024-07-15 21:46:37.469501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:04.247 [2024-07-15 21:46:37.469678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.247 [2024-07-15 21:46:37.469730] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:33:04.247 [2024-07-15 21:46:37.469781] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.247 [2024-07-15 21:46:37.470285] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.247 [2024-07-15 21:46:37.470355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:04.247 [2024-07-15 21:46:37.470491] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:04.247 [2024-07-15 21:46:37.470538] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:04.247 [2024-07-15 21:46:37.470671] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:33:04.247 [2024-07-15 21:46:37.470706] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:04.247 [2024-07-15 21:46:37.470813] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:04.247 [2024-07-15 21:46:37.476942] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:33:04.247 [2024-07-15 21:46:37.477022] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:33:04.247 [2024-07-15 21:46:37.477381] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:04.247 pt3 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.247 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.512 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:04.512 "name": "raid_bdev1", 00:33:04.512 "uuid": "e780a29a-0d7d-410d-98a2-d6254f9e496a", 00:33:04.512 "strip_size_kb": 64, 00:33:04.512 "state": "online", 00:33:04.512 "raid_level": "raid5f", 00:33:04.512 "superblock": true, 00:33:04.512 "num_base_bdevs": 3, 00:33:04.512 "num_base_bdevs_discovered": 2, 00:33:04.512 "num_base_bdevs_operational": 2, 00:33:04.512 "base_bdevs_list": [ 00:33:04.512 { 00:33:04.512 "name": null, 00:33:04.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.512 "is_configured": false, 00:33:04.512 "data_offset": 2048, 00:33:04.512 "data_size": 63488 00:33:04.512 }, 00:33:04.512 { 00:33:04.512 "name": "pt2", 00:33:04.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:04.512 "is_configured": true, 00:33:04.512 "data_offset": 2048, 00:33:04.512 "data_size": 63488 00:33:04.512 }, 00:33:04.512 { 00:33:04.512 "name": "pt3", 00:33:04.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:04.512 "is_configured": true, 00:33:04.512 "data_offset": 2048, 00:33:04.512 "data_size": 63488 00:33:04.512 } 00:33:04.512 ] 00:33:04.512 }' 00:33:04.512 21:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:04.512 21:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.080 21:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:05.080 21:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:05.340 21:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:33:05.340 21:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:05.340 21:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:33:05.600 [2024-07-15 21:46:38.794788] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' e780a29a-0d7d-410d-98a2-d6254f9e496a '!=' e780a29a-0d7d-410d-98a2-d6254f9e496a ']' 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 153447 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 153447 ']' 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 153447 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 153447 00:33:05.600 killing process with pid 153447 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:05.600 21:46:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 153447' 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 153447 00:33:05.600 21:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 153447 00:33:05.600 [2024-07-15 21:46:38.839012] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:05.600 [2024-07-15 21:46:38.839083] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:05.600 [2024-07-15 21:46:38.839140] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:05.600 [2024-07-15 21:46:38.839148] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:33:05.860 [2024-07-15 21:46:39.145191] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:07.253 ************************************ 00:33:07.253 END TEST raid5f_superblock_test 00:33:07.253 ************************************ 00:33:07.253 21:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:33:07.253 00:33:07.253 real 0m23.771s 00:33:07.253 user 0m43.934s 00:33:07.253 sys 0m2.920s 00:33:07.253 21:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:07.253 21:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.253 21:46:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:07.253 21:46:40 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:33:07.253 21:46:40 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:33:07.253 21:46:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:07.253 21:46:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.253 21:46:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:07.253 ************************************ 00:33:07.253 START TEST raid5f_rebuild_test 00:33:07.253 ************************************ 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 false false true 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.254 21:46:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=154238 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 154238 /var/tmp/spdk-raid.sock 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 154238 ']' 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:07.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:07.254 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.254 [2024-07-15 21:46:40.517120] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:33:07.254 [2024-07-15 21:46:40.517406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154238 ] 00:33:07.254 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:33:07.254 Zero copy mechanism will not be used. 00:33:07.513 [2024-07-15 21:46:40.666225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.513 [2024-07-15 21:46:40.876278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.772 [2024-07-15 21:46:41.092531] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.340 21:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:08.340 21:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:33:08.340 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:08.340 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:08.340 BaseBdev1_malloc 00:33:08.341 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:08.601 [2024-07-15 21:46:41.884471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:08.601 [2024-07-15 21:46:41.884658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.601 [2024-07-15 21:46:41.884725] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:33:08.601 [2024-07-15 21:46:41.884766] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.601 [2024-07-15 21:46:41.887085] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.601 [2024-07-15 21:46:41.887187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:08.601 BaseBdev1 00:33:08.601 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:08.601 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:08.861 BaseBdev2_malloc 00:33:08.861 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:09.126 [2024-07-15 21:46:42.378531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:09.126 [2024-07-15 21:46:42.378725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:09.126 [2024-07-15 21:46:42.378793] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:33:09.126 [2024-07-15 21:46:42.378832] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:09.126 [2024-07-15 21:46:42.380763] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:09.126 [2024-07-15 21:46:42.380838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:09.126 BaseBdev2 00:33:09.126 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:09.126 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:09.400 BaseBdev3_malloc 00:33:09.400 
21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:09.668 [2024-07-15 21:46:42.884435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:09.668 [2024-07-15 21:46:42.884598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:09.668 [2024-07-15 21:46:42.884648] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:33:09.668 [2024-07-15 21:46:42.884699] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:09.668 [2024-07-15 21:46:42.886841] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:09.668 [2024-07-15 21:46:42.886959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:09.668 BaseBdev3 00:33:09.668 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:09.927 spare_malloc 00:33:09.927 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:10.184 spare_delay 00:33:10.184 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:10.184 [2024-07-15 21:46:43.559427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:10.184 [2024-07-15 21:46:43.559615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:10.184 [2024-07-15 21:46:43.559669] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:10.184 [2024-07-15 21:46:43.559713] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:10.184 [2024-07-15 21:46:43.561875] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:10.184 [2024-07-15 21:46:43.561976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:10.443 spare 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:33:10.443 [2024-07-15 21:46:43.771149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:10.443 [2024-07-15 21:46:43.773000] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:10.443 [2024-07-15 21:46:43.773096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:10.443 [2024-07-15 21:46:43.773206] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:33:10.443 [2024-07-15 21:46:43.773270] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:33:10.443 [2024-07-15 21:46:43.773471] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:10.443 [2024-07-15 21:46:43.779613] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:33:10.443 [2024-07-15 21:46:43.779673] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:33:10.443 [2024-07-15 21:46:43.779931] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.443 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:10.701 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:10.701 "name": "raid_bdev1", 00:33:10.701 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:10.701 "strip_size_kb": 64, 00:33:10.701 "state": "online", 00:33:10.701 "raid_level": "raid5f", 00:33:10.701 "superblock": false, 00:33:10.701 "num_base_bdevs": 3, 00:33:10.701 "num_base_bdevs_discovered": 3, 00:33:10.701 "num_base_bdevs_operational": 3, 00:33:10.701 "base_bdevs_list": [ 00:33:10.701 { 00:33:10.701 "name": "BaseBdev1", 00:33:10.701 "uuid": "f06e5003-5214-5cec-9303-efa465afbf28", 00:33:10.701 "is_configured": true, 00:33:10.701 "data_offset": 0, 00:33:10.701 "data_size": 65536 00:33:10.701 }, 00:33:10.701 { 00:33:10.701 "name": "BaseBdev2", 00:33:10.701 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:10.701 "is_configured": true, 00:33:10.701 "data_offset": 0, 00:33:10.701 "data_size": 65536 00:33:10.701 }, 00:33:10.701 { 00:33:10.701 "name": "BaseBdev3", 00:33:10.701 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:10.701 "is_configured": true, 00:33:10.701 "data_offset": 0, 00:33:10.701 "data_size": 65536 00:33:10.701 } 00:33:10.701 ] 00:33:10.701 }' 00:33:10.701 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:10.701 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.267 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:11.267 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:11.527 [2024-07-15 21:46:44.821298] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:11.527 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:33:11.527 21:46:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:11.527 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.823 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:33:11.823 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:33:11.823 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:11.824 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:12.091 [2024-07-15 21:46:45.244396] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:12.091 /dev/nbd0 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:12.091 1+0 records in 00:33:12.091 1+0 records out 00:33:12.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483277 s, 8.5 MB/s 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 
00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:33:12.091 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:33:12.658 512+0 records in 00:33:12.658 512+0 records out 00:33:12.658 67108864 bytes (67 MB, 64 MiB) copied, 0.453889 s, 148 MB/s 00:33:12.658 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:12.658 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:12.658 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:12.658 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:12.658 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:12.658 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:12.658 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:12.658 [2024-07-15 21:46:46.013130] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:12.658 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:12.916 [2024-07-15 21:46:46.204160] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.916 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.175 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.175 "name": "raid_bdev1", 00:33:13.175 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:13.175 "strip_size_kb": 64, 00:33:13.175 "state": "online", 00:33:13.175 "raid_level": "raid5f", 00:33:13.175 "superblock": false, 00:33:13.175 "num_base_bdevs": 3, 00:33:13.175 "num_base_bdevs_discovered": 2, 00:33:13.175 "num_base_bdevs_operational": 2, 00:33:13.175 "base_bdevs_list": [ 00:33:13.175 { 00:33:13.175 "name": null, 00:33:13.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.175 "is_configured": false, 00:33:13.175 "data_offset": 0, 00:33:13.175 "data_size": 65536 00:33:13.175 }, 00:33:13.175 { 00:33:13.175 "name": "BaseBdev2", 00:33:13.175 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:13.175 "is_configured": true, 00:33:13.175 "data_offset": 0, 00:33:13.175 "data_size": 65536 00:33:13.175 }, 00:33:13.175 { 00:33:13.175 "name": "BaseBdev3", 00:33:13.175 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:13.175 "is_configured": true, 00:33:13.175 "data_offset": 0, 00:33:13.175 "data_size": 65536 00:33:13.175 } 00:33:13.175 ] 00:33:13.175 }' 00:33:13.175 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:13.175 21:46:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:13.741 21:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:13.998 [2024-07-15 21:46:47.250396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:13.998 [2024-07-15 21:46:47.265017] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c930 00:33:13.998 [2024-07-15 21:46:47.272481] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:13.998 21:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:14.933 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:14.933 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:14.933 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:14.933 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:14.933 21:46:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:14.933 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.933 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.193 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:15.193 "name": "raid_bdev1", 00:33:15.193 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:15.193 "strip_size_kb": 64, 00:33:15.193 "state": "online", 00:33:15.193 "raid_level": "raid5f", 00:33:15.193 "superblock": false, 00:33:15.193 "num_base_bdevs": 3, 00:33:15.193 "num_base_bdevs_discovered": 3, 00:33:15.193 "num_base_bdevs_operational": 3, 00:33:15.193 "process": { 00:33:15.193 "type": "rebuild", 00:33:15.193 "target": "spare", 00:33:15.193 "progress": { 00:33:15.193 "blocks": 22528, 00:33:15.193 "percent": 17 00:33:15.193 } 00:33:15.193 }, 00:33:15.193 "base_bdevs_list": [ 00:33:15.193 { 00:33:15.193 "name": "spare", 00:33:15.193 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:15.193 "is_configured": true, 00:33:15.193 "data_offset": 0, 00:33:15.193 "data_size": 65536 00:33:15.193 }, 00:33:15.193 { 00:33:15.193 "name": "BaseBdev2", 00:33:15.193 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:15.193 "is_configured": true, 00:33:15.193 "data_offset": 0, 00:33:15.193 "data_size": 65536 00:33:15.193 }, 00:33:15.193 { 00:33:15.193 "name": "BaseBdev3", 00:33:15.193 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:15.193 "is_configured": true, 00:33:15.193 "data_offset": 0, 00:33:15.193 "data_size": 65536 00:33:15.193 } 00:33:15.193 ] 00:33:15.193 }' 00:33:15.193 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:15.193 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:15.193 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:15.450 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:15.450 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:15.707 [2024-07-15 21:46:48.834758] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:15.707 [2024-07-15 21:46:48.886616] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:15.707 [2024-07-15 21:46:48.886739] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:15.707 [2024-07-15 21:46:48.886783] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:15.707 [2024-07-15 21:46:48.886824] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.707 21:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.965 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:15.965 "name": "raid_bdev1", 00:33:15.965 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:15.965 "strip_size_kb": 64, 00:33:15.965 "state": "online", 00:33:15.965 "raid_level": "raid5f", 00:33:15.965 "superblock": false, 00:33:15.965 "num_base_bdevs": 3, 00:33:15.965 "num_base_bdevs_discovered": 2, 00:33:15.965 "num_base_bdevs_operational": 2, 00:33:15.965 "base_bdevs_list": [ 00:33:15.965 { 00:33:15.965 "name": null, 00:33:15.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:15.965 "is_configured": false, 00:33:15.965 "data_offset": 0, 00:33:15.965 "data_size": 65536 00:33:15.965 }, 00:33:15.965 { 00:33:15.965 "name": "BaseBdev2", 00:33:15.965 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:15.965 "is_configured": true, 00:33:15.965 "data_offset": 0, 00:33:15.965 "data_size": 65536 00:33:15.965 }, 00:33:15.965 { 00:33:15.965 "name": "BaseBdev3", 00:33:15.965 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:15.965 "is_configured": true, 00:33:15.965 "data_offset": 0, 00:33:15.965 "data_size": 65536 00:33:15.965 } 00:33:15.965 ] 00:33:15.965 }' 00:33:15.965 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:15.965 21:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:16.543 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:16.543 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:16.543 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:16.543 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:16.543 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:16.543 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.543 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.801 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:16.801 "name": "raid_bdev1", 00:33:16.801 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:16.801 "strip_size_kb": 64, 00:33:16.801 "state": "online", 00:33:16.801 "raid_level": "raid5f", 00:33:16.801 "superblock": false, 00:33:16.801 "num_base_bdevs": 3, 00:33:16.801 "num_base_bdevs_discovered": 2, 00:33:16.801 
"num_base_bdevs_operational": 2, 00:33:16.801 "base_bdevs_list": [ 00:33:16.801 { 00:33:16.801 "name": null, 00:33:16.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.801 "is_configured": false, 00:33:16.801 "data_offset": 0, 00:33:16.801 "data_size": 65536 00:33:16.801 }, 00:33:16.801 { 00:33:16.801 "name": "BaseBdev2", 00:33:16.801 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:16.801 "is_configured": true, 00:33:16.801 "data_offset": 0, 00:33:16.801 "data_size": 65536 00:33:16.801 }, 00:33:16.801 { 00:33:16.801 "name": "BaseBdev3", 00:33:16.801 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:16.801 "is_configured": true, 00:33:16.801 "data_offset": 0, 00:33:16.801 "data_size": 65536 00:33:16.801 } 00:33:16.801 ] 00:33:16.801 }' 00:33:16.801 21:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:16.801 21:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:16.801 21:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:16.801 21:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:16.801 21:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:17.058 [2024-07-15 21:46:50.315743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:17.058 [2024-07-15 21:46:50.331991] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cad0 00:33:17.058 [2024-07-15 21:46:50.339284] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:17.058 21:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:18.012 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:18.012 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:18.012 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:18.012 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:18.012 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:18.012 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.012 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.270 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:18.270 "name": "raid_bdev1", 00:33:18.270 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:18.270 "strip_size_kb": 64, 00:33:18.270 "state": "online", 00:33:18.270 "raid_level": "raid5f", 00:33:18.270 "superblock": false, 00:33:18.270 "num_base_bdevs": 3, 00:33:18.270 "num_base_bdevs_discovered": 3, 00:33:18.270 "num_base_bdevs_operational": 3, 00:33:18.270 "process": { 00:33:18.270 "type": "rebuild", 00:33:18.270 "target": "spare", 00:33:18.270 "progress": { 00:33:18.270 "blocks": 24576, 00:33:18.270 "percent": 18 00:33:18.270 } 00:33:18.270 }, 00:33:18.270 "base_bdevs_list": [ 00:33:18.270 { 00:33:18.270 "name": "spare", 00:33:18.270 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:18.270 
"is_configured": true, 00:33:18.270 "data_offset": 0, 00:33:18.270 "data_size": 65536 00:33:18.270 }, 00:33:18.270 { 00:33:18.270 "name": "BaseBdev2", 00:33:18.270 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:18.270 "is_configured": true, 00:33:18.270 "data_offset": 0, 00:33:18.270 "data_size": 65536 00:33:18.270 }, 00:33:18.270 { 00:33:18.270 "name": "BaseBdev3", 00:33:18.270 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:18.270 "is_configured": true, 00:33:18.270 "data_offset": 0, 00:33:18.270 "data_size": 65536 00:33:18.270 } 00:33:18.270 ] 00:33:18.270 }' 00:33:18.270 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:18.270 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:18.270 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1064 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.528 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:18.528 "name": "raid_bdev1", 00:33:18.528 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:18.528 "strip_size_kb": 64, 00:33:18.528 "state": "online", 00:33:18.528 "raid_level": "raid5f", 00:33:18.528 "superblock": false, 00:33:18.528 "num_base_bdevs": 3, 00:33:18.528 "num_base_bdevs_discovered": 3, 00:33:18.528 "num_base_bdevs_operational": 3, 00:33:18.528 "process": { 00:33:18.528 "type": "rebuild", 00:33:18.528 "target": "spare", 00:33:18.528 "progress": { 00:33:18.528 "blocks": 30720, 00:33:18.528 "percent": 23 00:33:18.528 } 00:33:18.528 }, 00:33:18.528 "base_bdevs_list": [ 00:33:18.528 { 00:33:18.528 "name": "spare", 00:33:18.528 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:18.528 "is_configured": true, 00:33:18.528 "data_offset": 0, 00:33:18.528 "data_size": 65536 00:33:18.528 }, 00:33:18.528 { 00:33:18.528 "name": "BaseBdev2", 00:33:18.528 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:18.529 "is_configured": true, 00:33:18.529 "data_offset": 0, 00:33:18.529 "data_size": 65536 
00:33:18.529 }, 00:33:18.529 { 00:33:18.529 "name": "BaseBdev3", 00:33:18.529 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:18.529 "is_configured": true, 00:33:18.529 "data_offset": 0, 00:33:18.529 "data_size": 65536 00:33:18.529 } 00:33:18.529 ] 00:33:18.529 }' 00:33:18.529 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:18.786 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:18.786 21:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:18.786 21:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:18.786 21:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:19.718 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:19.718 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.718 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:19.718 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:19.718 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:19.718 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:19.718 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.718 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.976 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:19.976 "name": "raid_bdev1", 00:33:19.976 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:19.976 "strip_size_kb": 64, 00:33:19.976 "state": "online", 00:33:19.976 "raid_level": "raid5f", 00:33:19.976 "superblock": false, 00:33:19.976 "num_base_bdevs": 3, 00:33:19.976 "num_base_bdevs_discovered": 3, 00:33:19.976 "num_base_bdevs_operational": 3, 00:33:19.976 "process": { 00:33:19.976 "type": "rebuild", 00:33:19.976 "target": "spare", 00:33:19.976 "progress": { 00:33:19.976 "blocks": 57344, 00:33:19.976 "percent": 43 00:33:19.976 } 00:33:19.976 }, 00:33:19.976 "base_bdevs_list": [ 00:33:19.976 { 00:33:19.976 "name": "spare", 00:33:19.976 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:19.976 "is_configured": true, 00:33:19.976 "data_offset": 0, 00:33:19.976 "data_size": 65536 00:33:19.976 }, 00:33:19.976 { 00:33:19.976 "name": "BaseBdev2", 00:33:19.976 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:19.976 "is_configured": true, 00:33:19.976 "data_offset": 0, 00:33:19.976 "data_size": 65536 00:33:19.976 }, 00:33:19.976 { 00:33:19.976 "name": "BaseBdev3", 00:33:19.976 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:19.976 "is_configured": true, 00:33:19.976 "data_offset": 0, 00:33:19.976 "data_size": 65536 00:33:19.976 } 00:33:19.976 ] 00:33:19.976 }' 00:33:19.976 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:19.976 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:19.976 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:20.235 21:46:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.235 21:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:21.169 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:21.169 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:21.169 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:21.169 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:21.170 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:21.170 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:21.170 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.170 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.429 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:21.429 "name": "raid_bdev1", 00:33:21.429 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:21.429 "strip_size_kb": 64, 00:33:21.429 "state": "online", 00:33:21.429 "raid_level": "raid5f", 00:33:21.429 "superblock": false, 00:33:21.429 "num_base_bdevs": 3, 00:33:21.429 "num_base_bdevs_discovered": 3, 00:33:21.429 "num_base_bdevs_operational": 3, 00:33:21.429 "process": { 00:33:21.429 "type": "rebuild", 00:33:21.429 "target": "spare", 00:33:21.429 "progress": { 00:33:21.429 "blocks": 83968, 00:33:21.429 "percent": 64 00:33:21.429 } 00:33:21.429 }, 00:33:21.429 "base_bdevs_list": [ 00:33:21.429 { 00:33:21.429 "name": "spare", 00:33:21.429 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:21.429 "is_configured": true, 00:33:21.429 "data_offset": 0, 00:33:21.429 "data_size": 65536 00:33:21.429 }, 00:33:21.429 { 00:33:21.429 "name": "BaseBdev2", 00:33:21.429 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:21.429 "is_configured": true, 00:33:21.429 "data_offset": 0, 00:33:21.429 "data_size": 65536 00:33:21.429 }, 00:33:21.429 { 00:33:21.429 "name": "BaseBdev3", 00:33:21.429 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:21.429 "is_configured": true, 00:33:21.429 "data_offset": 0, 00:33:21.429 "data_size": 65536 00:33:21.429 } 00:33:21.429 ] 00:33:21.429 }' 00:33:21.429 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:21.429 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:21.429 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:21.429 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.429 21:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:22.364 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:22.364 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:22.364 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:22.364 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
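The entries above come from the progress-polling loop in bdev_raid.sh: once per second the test re-reads the raid bdev over RPC and keeps waiting while a rebuild targeting "spare" is still reported. A minimal sketch of that pattern, reconstructed from the commands visible in the trace (the variable names and the timeout value here are illustrative, not the script's actual source):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  timeout=$((SECONDS + 60))   # the trace uses its own precomputed limit (timeout=1064)
  while (( SECONDS < timeout )); do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # once the rebuild finishes, .process disappears and both fields fall back to "none"
      [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
      [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
      sleep 1
  done

The same bdev_raid_get_bdevs output also carries .process.progress.blocks and .percent, which is why the percentages in the dumps above climb from 17 through 85 before the rebuild completes.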
00:33:22.364 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:22.364 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:22.364 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.364 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.624 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:22.624 "name": "raid_bdev1", 00:33:22.624 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:22.624 "strip_size_kb": 64, 00:33:22.624 "state": "online", 00:33:22.624 "raid_level": "raid5f", 00:33:22.624 "superblock": false, 00:33:22.624 "num_base_bdevs": 3, 00:33:22.624 "num_base_bdevs_discovered": 3, 00:33:22.624 "num_base_bdevs_operational": 3, 00:33:22.624 "process": { 00:33:22.624 "type": "rebuild", 00:33:22.624 "target": "spare", 00:33:22.624 "progress": { 00:33:22.624 "blocks": 112640, 00:33:22.624 "percent": 85 00:33:22.624 } 00:33:22.624 }, 00:33:22.624 "base_bdevs_list": [ 00:33:22.624 { 00:33:22.624 "name": "spare", 00:33:22.624 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:22.624 "is_configured": true, 00:33:22.624 "data_offset": 0, 00:33:22.624 "data_size": 65536 00:33:22.624 }, 00:33:22.624 { 00:33:22.624 "name": "BaseBdev2", 00:33:22.624 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:22.624 "is_configured": true, 00:33:22.624 "data_offset": 0, 00:33:22.624 "data_size": 65536 00:33:22.624 }, 00:33:22.624 { 00:33:22.624 "name": "BaseBdev3", 00:33:22.624 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:22.624 "is_configured": true, 00:33:22.624 "data_offset": 0, 00:33:22.624 "data_size": 65536 00:33:22.624 } 00:33:22.624 ] 00:33:22.624 }' 00:33:22.624 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:22.624 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:22.624 21:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:22.882 21:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:22.882 21:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:23.449 [2024-07-15 21:46:56.795037] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:23.449 [2024-07-15 21:46:56.795231] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:23.449 [2024-07-15 21:46:56.795346] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:23.708 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:23.708 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.708 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:23.708 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:23.708 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:23.708 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:23.708 21:46:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.708 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.966 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:23.966 "name": "raid_bdev1", 00:33:23.966 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:23.966 "strip_size_kb": 64, 00:33:23.966 "state": "online", 00:33:23.966 "raid_level": "raid5f", 00:33:23.966 "superblock": false, 00:33:23.966 "num_base_bdevs": 3, 00:33:23.966 "num_base_bdevs_discovered": 3, 00:33:23.966 "num_base_bdevs_operational": 3, 00:33:23.966 "base_bdevs_list": [ 00:33:23.966 { 00:33:23.966 "name": "spare", 00:33:23.966 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:23.966 "is_configured": true, 00:33:23.966 "data_offset": 0, 00:33:23.966 "data_size": 65536 00:33:23.966 }, 00:33:23.966 { 00:33:23.966 "name": "BaseBdev2", 00:33:23.966 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:23.966 "is_configured": true, 00:33:23.966 "data_offset": 0, 00:33:23.966 "data_size": 65536 00:33:23.966 }, 00:33:23.967 { 00:33:23.967 "name": "BaseBdev3", 00:33:23.967 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:23.967 "is_configured": true, 00:33:23.967 "data_offset": 0, 00:33:23.967 "data_size": 65536 00:33:23.967 } 00:33:23.967 ] 00:33:23.967 }' 00:33:23.967 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:23.967 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:23.967 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:24.224 "name": "raid_bdev1", 00:33:24.224 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:24.224 "strip_size_kb": 64, 00:33:24.224 "state": "online", 00:33:24.224 "raid_level": "raid5f", 00:33:24.224 "superblock": false, 00:33:24.224 "num_base_bdevs": 3, 00:33:24.224 "num_base_bdevs_discovered": 3, 00:33:24.224 "num_base_bdevs_operational": 3, 00:33:24.224 "base_bdevs_list": [ 00:33:24.224 { 00:33:24.224 "name": "spare", 00:33:24.224 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:24.224 "is_configured": true, 00:33:24.224 "data_offset": 0, 00:33:24.224 "data_size": 65536 00:33:24.224 }, 
00:33:24.224 { 00:33:24.224 "name": "BaseBdev2", 00:33:24.224 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:24.224 "is_configured": true, 00:33:24.224 "data_offset": 0, 00:33:24.224 "data_size": 65536 00:33:24.224 }, 00:33:24.224 { 00:33:24.224 "name": "BaseBdev3", 00:33:24.224 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:24.224 "is_configured": true, 00:33:24.224 "data_offset": 0, 00:33:24.224 "data_size": 65536 00:33:24.224 } 00:33:24.224 ] 00:33:24.224 }' 00:33:24.224 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.483 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.742 21:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:24.742 "name": "raid_bdev1", 00:33:24.742 "uuid": "3e9929aa-888c-436f-8459-eee6064e179d", 00:33:24.742 "strip_size_kb": 64, 00:33:24.742 "state": "online", 00:33:24.742 "raid_level": "raid5f", 00:33:24.742 "superblock": false, 00:33:24.742 "num_base_bdevs": 3, 00:33:24.742 "num_base_bdevs_discovered": 3, 00:33:24.742 "num_base_bdevs_operational": 3, 00:33:24.742 "base_bdevs_list": [ 00:33:24.742 { 00:33:24.742 "name": "spare", 00:33:24.742 "uuid": "5af22848-2305-54ac-8fe9-baa526ecef39", 00:33:24.742 "is_configured": true, 00:33:24.742 "data_offset": 0, 00:33:24.742 "data_size": 65536 00:33:24.742 }, 00:33:24.742 { 00:33:24.742 "name": "BaseBdev2", 00:33:24.742 "uuid": "290494f8-f138-5c0c-aa68-9c2e666e4e96", 00:33:24.742 "is_configured": true, 00:33:24.742 "data_offset": 0, 00:33:24.742 "data_size": 65536 00:33:24.742 }, 00:33:24.742 { 00:33:24.742 "name": "BaseBdev3", 00:33:24.742 "uuid": "1a63eeba-aba9-524d-9e63-ee41ca100a8a", 00:33:24.742 "is_configured": true, 00:33:24.742 "data_offset": 0, 00:33:24.742 "data_size": 65536 00:33:24.742 } 00:33:24.742 ] 00:33:24.742 }' 00:33:24.742 21:46:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:24.743 21:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.312 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:25.312 [2024-07-15 21:46:58.666804] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:25.312 [2024-07-15 21:46:58.666910] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:25.312 [2024-07-15 21:46:58.667026] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:25.312 [2024-07-15 21:46:58.667131] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:25.312 [2024-07-15 21:46:58.667170] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:33:25.312 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:25.312 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:33:25.572 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:25.572 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:25.572 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:25.572 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:25.573 21:46:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:25.832 /dev/nbd0 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # break 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.832 1+0 records in 00:33:25.832 1+0 records out 00:33:25.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301227 s, 13.6 MB/s 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:25.832 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:33:26.091 /dev/nbd1 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:26.091 1+0 records in 00:33:26.091 1+0 records out 00:33:26.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442021 s, 9.3 MB/s 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:33:26.091 21:46:59 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:26.091 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:26.350 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:33:26.350 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:26.350 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:26.350 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:26.350 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:26.350 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.350 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.608 21:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # 
killprocess 154238 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 154238 ']' 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 154238 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 154238 00:33:26.867 killing process with pid 154238 00:33:26.867 Received shutdown signal, test time was about 60.000000 seconds 00:33:26.867 00:33:26.867 Latency(us) 00:33:26.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.867 =================================================================================================================== 00:33:26.867 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 154238' 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 154238 00:33:26.867 21:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 154238 00:33:26.867 [2024-07-15 21:47:00.149754] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:27.436 [2024-07-15 21:47:00.536597] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:28.811 ************************************ 00:33:28.811 END TEST raid5f_rebuild_test 00:33:28.811 ************************************ 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:33:28.811 00:33:28.811 real 0m21.392s 00:33:28.811 user 0m31.741s 00:33:28.811 sys 0m2.707s 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.811 21:47:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:28.811 21:47:01 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:33:28.811 21:47:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:28.811 21:47:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.811 21:47:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:28.811 ************************************ 00:33:28.811 START TEST raid5f_rebuild_test_sb 00:33:28.811 ************************************ 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 true false true 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@572 -- # local verify=true 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=154820 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 154820 /var/tmp/spdk-raid.sock 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 154820 ']' 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:28.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:28.811 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:28.811 [2024-07-15 21:47:01.985231] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:33:28.811 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:28.811 Zero copy mechanism will not be used. 00:33:28.811 [2024-07-15 21:47:01.985470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154820 ] 00:33:28.811 [2024-07-15 21:47:02.130791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.070 [2024-07-15 21:47:02.335525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.330 [2024-07-15 21:47:02.534806] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:29.589 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:29.589 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:33:29.589 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:29.589 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:29.849 BaseBdev1_malloc 00:33:29.849 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:30.108 [2024-07-15 21:47:03.288760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:30.109 [2024-07-15 21:47:03.288928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:30.109 [2024-07-15 21:47:03.288991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:33:30.109 [2024-07-15 21:47:03.289032] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.109 [2024-07-15 21:47:03.291351] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.109 [2024-07-15 21:47:03.291438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:30.109 BaseBdev1 00:33:30.109 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:30.109 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:30.369 BaseBdev2_malloc 00:33:30.369 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:30.628 [2024-07-15 21:47:03.766767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:30.628 [2024-07-15 21:47:03.766952] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:33:30.628 [2024-07-15 21:47:03.767005] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:33:30.628 [2024-07-15 21:47:03.767065] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.628 [2024-07-15 21:47:03.769193] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.629 [2024-07-15 21:47:03.769297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:30.629 BaseBdev2 00:33:30.629 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:30.629 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:30.888 BaseBdev3_malloc 00:33:30.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:30.888 [2024-07-15 21:47:04.250393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:30.888 [2024-07-15 21:47:04.250567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:30.888 [2024-07-15 21:47:04.250618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:33:30.888 [2024-07-15 21:47:04.250664] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.888 [2024-07-15 21:47:04.252837] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.888 [2024-07-15 21:47:04.252924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:30.888 BaseBdev3 00:33:30.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:31.145 spare_malloc 00:33:31.404 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:31.404 spare_delay 00:33:31.404 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:31.664 [2024-07-15 21:47:04.968058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:31.664 [2024-07-15 21:47:04.968223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.664 [2024-07-15 21:47:04.968275] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:31.664 [2024-07-15 21:47:04.968322] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.664 [2024-07-15 21:47:04.970489] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.664 [2024-07-15 21:47:04.970579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:31.664 spare 00:33:31.664 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 
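Condensed, the RPC sequence traced above is what assembles the raid5f volume for this test: three malloc+passthru base bdevs, a fourth malloc routed through a delay bdev to become the "spare" rebuild target, and finally the array itself. A sketch of that sequence (the loop and comments are editorial; the flags are taken from the logged commands):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
      $rpc bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev$i
  done
  # the future rebuild target sits behind a delay bdev, presumably so the rebuild runs long enough to observe
  $rpc bdev_malloc_create 32 512 -b spare_malloc
  $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $rpc bdev_passthru_create -b spare_delay -p spare
  # -z 64 sets the 64 KiB strip size; -s asks for an on-disk superblock (the _sb variant of the test)
  $rpc bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1

After this the bdev dumps report data_offset 2048 and data_size 63488 (instead of 0 and 65536 in the non-superblock test above), reflecting the region reserved at the start of each base bdev for the superblock.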
00:33:31.923 [2024-07-15 21:47:05.195770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:31.923 [2024-07-15 21:47:05.197649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:31.923 [2024-07-15 21:47:05.197776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:31.923 [2024-07-15 21:47:05.197980] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:33:31.923 [2024-07-15 21:47:05.198035] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:31.923 [2024-07-15 21:47:05.198186] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:31.923 [2024-07-15 21:47:05.204466] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:33:31.923 [2024-07-15 21:47:05.204543] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:33:31.923 [2024-07-15 21:47:05.204757] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.923 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.182 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.182 "name": "raid_bdev1", 00:33:32.182 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:32.182 "strip_size_kb": 64, 00:33:32.182 "state": "online", 00:33:32.182 "raid_level": "raid5f", 00:33:32.182 "superblock": true, 00:33:32.182 "num_base_bdevs": 3, 00:33:32.182 "num_base_bdevs_discovered": 3, 00:33:32.182 "num_base_bdevs_operational": 3, 00:33:32.182 "base_bdevs_list": [ 00:33:32.183 { 00:33:32.183 "name": "BaseBdev1", 00:33:32.183 "uuid": "b70e97fb-11c8-56f8-8547-35bfe2b31ad0", 00:33:32.183 "is_configured": true, 00:33:32.183 "data_offset": 2048, 00:33:32.183 "data_size": 63488 00:33:32.183 }, 00:33:32.183 { 00:33:32.183 "name": "BaseBdev2", 00:33:32.183 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:32.183 "is_configured": true, 00:33:32.183 "data_offset": 2048, 00:33:32.183 "data_size": 63488 00:33:32.183 }, 00:33:32.183 { 
00:33:32.183 "name": "BaseBdev3", 00:33:32.183 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:32.183 "is_configured": true, 00:33:32.183 "data_offset": 2048, 00:33:32.183 "data_size": 63488 00:33:32.183 } 00:33:32.183 ] 00:33:32.183 }' 00:33:32.183 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.183 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.751 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:32.751 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:33.010 [2024-07-15 21:47:06.310640] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:33.010 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:33:33.010 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.010 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:33.269 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:33.528 [2024-07-15 21:47:06.778102] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:33.528 /dev/nbd0 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:33.528 21:47:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:33.528 1+0 records in 00:33:33.528 1+0 records out 00:33:33.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286855 s, 14.3 MB/s 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:33:33.528 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:33:34.096 496+0 records in 00:33:34.096 496+0 records out 00:33:34.096 65011712 bytes (65 MB, 62 MiB) copied, 0.416622 s, 156 MB/s 00:33:34.096 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:34.096 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:34.096 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:34.096 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:34.096 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:34.096 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:34.096 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:34.354 [2024-07-15 21:47:07.494261] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:34.354 21:47:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:34.354 [2024-07-15 21:47:07.689324] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.354 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.613 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.613 "name": "raid_bdev1", 00:33:34.613 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:34.613 "strip_size_kb": 64, 00:33:34.613 "state": "online", 00:33:34.613 "raid_level": "raid5f", 00:33:34.613 "superblock": true, 00:33:34.613 "num_base_bdevs": 3, 00:33:34.613 "num_base_bdevs_discovered": 2, 00:33:34.613 "num_base_bdevs_operational": 2, 00:33:34.613 "base_bdevs_list": [ 00:33:34.613 { 00:33:34.613 "name": null, 00:33:34.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.613 "is_configured": false, 00:33:34.613 "data_offset": 2048, 00:33:34.613 "data_size": 63488 00:33:34.613 }, 00:33:34.613 { 00:33:34.613 "name": "BaseBdev2", 00:33:34.613 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:34.613 "is_configured": true, 00:33:34.613 "data_offset": 2048, 00:33:34.613 "data_size": 63488 00:33:34.613 }, 00:33:34.613 { 00:33:34.613 "name": "BaseBdev3", 00:33:34.613 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:34.613 "is_configured": true, 00:33:34.613 "data_offset": 2048, 00:33:34.613 "data_size": 63488 00:33:34.613 } 00:33:34.613 ] 00:33:34.613 }' 00:33:34.613 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.613 21:47:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:35.179 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:35.437 [2024-07-15 21:47:08.745136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:35.437 [2024-07-15 21:47:08.761327] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:33:35.437 [2024-07-15 21:47:08.769860] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:35.437 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:36.812 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:36.812 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:36.812 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:36.812 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:36.812 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:36.812 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.812 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.812 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:36.812 "name": "raid_bdev1", 00:33:36.812 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:36.812 "strip_size_kb": 64, 00:33:36.812 "state": "online", 00:33:36.812 "raid_level": "raid5f", 00:33:36.812 "superblock": true, 00:33:36.812 "num_base_bdevs": 3, 00:33:36.812 "num_base_bdevs_discovered": 3, 00:33:36.812 "num_base_bdevs_operational": 3, 00:33:36.812 "process": { 00:33:36.812 "type": "rebuild", 00:33:36.812 "target": "spare", 00:33:36.812 "progress": { 00:33:36.812 "blocks": 24576, 00:33:36.812 "percent": 19 00:33:36.813 } 00:33:36.813 }, 00:33:36.813 "base_bdevs_list": [ 00:33:36.813 { 00:33:36.813 "name": "spare", 00:33:36.813 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:36.813 "is_configured": true, 00:33:36.813 "data_offset": 2048, 00:33:36.813 "data_size": 63488 00:33:36.813 }, 00:33:36.813 { 00:33:36.813 "name": "BaseBdev2", 00:33:36.813 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:36.813 "is_configured": true, 00:33:36.813 "data_offset": 2048, 00:33:36.813 "data_size": 63488 00:33:36.813 }, 00:33:36.813 { 00:33:36.813 "name": "BaseBdev3", 00:33:36.813 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:36.813 "is_configured": true, 00:33:36.813 "data_offset": 2048, 00:33:36.813 "data_size": 63488 00:33:36.813 } 00:33:36.813 ] 00:33:36.813 }' 00:33:36.813 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:36.813 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:36.813 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:36.813 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:36.813 21:47:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:37.072 [2024-07-15 21:47:10.317225] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:37.072 [2024-07-15 21:47:10.384512] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:37.072 [2024-07-15 21:47:10.384665] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:37.072 [2024-07-15 21:47:10.384699] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:37.072 [2024-07-15 21:47:10.384725] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.072 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.331 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:37.331 "name": "raid_bdev1", 00:33:37.331 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:37.331 "strip_size_kb": 64, 00:33:37.331 "state": "online", 00:33:37.331 "raid_level": "raid5f", 00:33:37.331 "superblock": true, 00:33:37.331 "num_base_bdevs": 3, 00:33:37.331 "num_base_bdevs_discovered": 2, 00:33:37.331 "num_base_bdevs_operational": 2, 00:33:37.331 "base_bdevs_list": [ 00:33:37.331 { 00:33:37.332 "name": null, 00:33:37.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.332 "is_configured": false, 00:33:37.332 "data_offset": 2048, 00:33:37.332 "data_size": 63488 00:33:37.332 }, 00:33:37.332 { 00:33:37.332 "name": "BaseBdev2", 00:33:37.332 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:37.332 "is_configured": true, 00:33:37.332 "data_offset": 2048, 00:33:37.332 "data_size": 63488 00:33:37.332 }, 00:33:37.332 { 00:33:37.332 "name": "BaseBdev3", 00:33:37.332 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:37.332 "is_configured": true, 00:33:37.332 "data_offset": 2048, 00:33:37.332 "data_size": 63488 00:33:37.332 } 00:33:37.332 ] 00:33:37.332 }' 00:33:37.332 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:37.332 21:47:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:38.270 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:38.270 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:38.270 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:38.270 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:38.270 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:38.270 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.270 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.270 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:38.270 "name": "raid_bdev1", 00:33:38.270 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:38.270 "strip_size_kb": 64, 00:33:38.270 "state": "online", 00:33:38.270 "raid_level": "raid5f", 00:33:38.270 "superblock": true, 00:33:38.270 "num_base_bdevs": 3, 00:33:38.270 "num_base_bdevs_discovered": 2, 00:33:38.270 "num_base_bdevs_operational": 2, 00:33:38.270 "base_bdevs_list": [ 00:33:38.270 { 00:33:38.270 "name": null, 00:33:38.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:38.270 "is_configured": false, 00:33:38.270 "data_offset": 2048, 00:33:38.270 "data_size": 63488 00:33:38.270 }, 00:33:38.270 { 00:33:38.270 "name": "BaseBdev2", 00:33:38.270 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:38.270 "is_configured": true, 00:33:38.270 "data_offset": 2048, 00:33:38.270 "data_size": 63488 00:33:38.270 }, 00:33:38.270 { 00:33:38.270 "name": "BaseBdev3", 00:33:38.270 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:38.270 "is_configured": true, 00:33:38.270 "data_offset": 2048, 00:33:38.270 "data_size": 63488 00:33:38.271 } 00:33:38.271 ] 00:33:38.271 }' 00:33:38.271 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:38.271 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:38.271 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:38.530 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:38.530 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:38.530 [2024-07-15 21:47:11.865742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:38.530 [2024-07-15 21:47:11.883041] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:33:38.530 [2024-07-15 21:47:11.891127] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:38.530 21:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:39.945 21:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:39.945 21:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:39.945 
21:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:39.945 21:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:39.945 21:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:39.945 21:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.945 21:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:39.945 "name": "raid_bdev1", 00:33:39.945 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:39.945 "strip_size_kb": 64, 00:33:39.945 "state": "online", 00:33:39.945 "raid_level": "raid5f", 00:33:39.945 "superblock": true, 00:33:39.945 "num_base_bdevs": 3, 00:33:39.945 "num_base_bdevs_discovered": 3, 00:33:39.945 "num_base_bdevs_operational": 3, 00:33:39.945 "process": { 00:33:39.945 "type": "rebuild", 00:33:39.945 "target": "spare", 00:33:39.945 "progress": { 00:33:39.945 "blocks": 24576, 00:33:39.945 "percent": 19 00:33:39.945 } 00:33:39.945 }, 00:33:39.945 "base_bdevs_list": [ 00:33:39.945 { 00:33:39.945 "name": "spare", 00:33:39.945 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:39.945 "is_configured": true, 00:33:39.945 "data_offset": 2048, 00:33:39.945 "data_size": 63488 00:33:39.945 }, 00:33:39.945 { 00:33:39.945 "name": "BaseBdev2", 00:33:39.945 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:39.945 "is_configured": true, 00:33:39.945 "data_offset": 2048, 00:33:39.945 "data_size": 63488 00:33:39.945 }, 00:33:39.945 { 00:33:39.945 "name": "BaseBdev3", 00:33:39.945 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:39.945 "is_configured": true, 00:33:39.945 "data_offset": 2048, 00:33:39.945 "data_size": 63488 00:33:39.945 } 00:33:39.945 ] 00:33:39.945 }' 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:33:39.945 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1086 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.945 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.205 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:40.205 "name": "raid_bdev1", 00:33:40.205 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:40.205 "strip_size_kb": 64, 00:33:40.205 "state": "online", 00:33:40.205 "raid_level": "raid5f", 00:33:40.205 "superblock": true, 00:33:40.205 "num_base_bdevs": 3, 00:33:40.205 "num_base_bdevs_discovered": 3, 00:33:40.205 "num_base_bdevs_operational": 3, 00:33:40.205 "process": { 00:33:40.205 "type": "rebuild", 00:33:40.205 "target": "spare", 00:33:40.205 "progress": { 00:33:40.205 "blocks": 30720, 00:33:40.205 "percent": 24 00:33:40.205 } 00:33:40.205 }, 00:33:40.205 "base_bdevs_list": [ 00:33:40.205 { 00:33:40.205 "name": "spare", 00:33:40.205 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:40.205 "is_configured": true, 00:33:40.205 "data_offset": 2048, 00:33:40.205 "data_size": 63488 00:33:40.205 }, 00:33:40.205 { 00:33:40.205 "name": "BaseBdev2", 00:33:40.205 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:40.205 "is_configured": true, 00:33:40.205 "data_offset": 2048, 00:33:40.205 "data_size": 63488 00:33:40.205 }, 00:33:40.205 { 00:33:40.205 "name": "BaseBdev3", 00:33:40.205 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:40.205 "is_configured": true, 00:33:40.205 "data_offset": 2048, 00:33:40.205 "data_size": 63488 00:33:40.205 } 00:33:40.205 ] 00:33:40.205 }' 00:33:40.205 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:40.205 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:40.205 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:40.463 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:40.463 21:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:41.400 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:41.400 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:41.400 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:41.400 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:41.400 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:41.400 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:41.400 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.400 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.661 21:47:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:41.661 "name": "raid_bdev1", 00:33:41.661 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:41.661 "strip_size_kb": 64, 00:33:41.661 "state": "online", 00:33:41.661 "raid_level": "raid5f", 00:33:41.661 "superblock": true, 00:33:41.661 "num_base_bdevs": 3, 00:33:41.661 "num_base_bdevs_discovered": 3, 00:33:41.661 "num_base_bdevs_operational": 3, 00:33:41.661 "process": { 00:33:41.661 "type": "rebuild", 00:33:41.661 "target": "spare", 00:33:41.661 "progress": { 00:33:41.661 "blocks": 59392, 00:33:41.661 "percent": 46 00:33:41.661 } 00:33:41.661 }, 00:33:41.661 "base_bdevs_list": [ 00:33:41.661 { 00:33:41.661 "name": "spare", 00:33:41.661 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:41.661 "is_configured": true, 00:33:41.661 "data_offset": 2048, 00:33:41.661 "data_size": 63488 00:33:41.661 }, 00:33:41.661 { 00:33:41.661 "name": "BaseBdev2", 00:33:41.661 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:41.661 "is_configured": true, 00:33:41.661 "data_offset": 2048, 00:33:41.661 "data_size": 63488 00:33:41.661 }, 00:33:41.661 { 00:33:41.661 "name": "BaseBdev3", 00:33:41.661 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:41.661 "is_configured": true, 00:33:41.661 "data_offset": 2048, 00:33:41.661 "data_size": 63488 00:33:41.661 } 00:33:41.661 ] 00:33:41.661 }' 00:33:41.661 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:41.661 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:41.661 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:41.661 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:41.661 21:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:42.599 21:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:42.599 21:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:42.599 21:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:42.599 21:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:42.599 21:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:42.599 21:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:42.599 21:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.599 21:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.858 21:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:42.858 "name": "raid_bdev1", 00:33:42.858 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:42.858 "strip_size_kb": 64, 00:33:42.858 "state": "online", 00:33:42.858 "raid_level": "raid5f", 00:33:42.858 "superblock": true, 00:33:42.858 "num_base_bdevs": 3, 00:33:42.858 "num_base_bdevs_discovered": 3, 00:33:42.858 "num_base_bdevs_operational": 3, 00:33:42.858 "process": { 00:33:42.858 "type": "rebuild", 00:33:42.858 "target": "spare", 00:33:42.858 "progress": { 00:33:42.858 "blocks": 86016, 
00:33:42.858 "percent": 67 00:33:42.858 } 00:33:42.858 }, 00:33:42.858 "base_bdevs_list": [ 00:33:42.858 { 00:33:42.858 "name": "spare", 00:33:42.858 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:42.858 "is_configured": true, 00:33:42.858 "data_offset": 2048, 00:33:42.858 "data_size": 63488 00:33:42.858 }, 00:33:42.858 { 00:33:42.858 "name": "BaseBdev2", 00:33:42.858 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:42.858 "is_configured": true, 00:33:42.858 "data_offset": 2048, 00:33:42.858 "data_size": 63488 00:33:42.858 }, 00:33:42.858 { 00:33:42.858 "name": "BaseBdev3", 00:33:42.858 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:42.858 "is_configured": true, 00:33:42.858 "data_offset": 2048, 00:33:42.858 "data_size": 63488 00:33:42.858 } 00:33:42.858 ] 00:33:42.858 }' 00:33:42.858 21:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:43.117 21:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:43.117 21:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:43.117 21:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:43.117 21:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:44.056 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:44.056 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:44.056 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:44.056 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:44.056 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:44.056 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:44.056 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.056 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.314 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:44.314 "name": "raid_bdev1", 00:33:44.314 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:44.314 "strip_size_kb": 64, 00:33:44.314 "state": "online", 00:33:44.314 "raid_level": "raid5f", 00:33:44.314 "superblock": true, 00:33:44.314 "num_base_bdevs": 3, 00:33:44.314 "num_base_bdevs_discovered": 3, 00:33:44.314 "num_base_bdevs_operational": 3, 00:33:44.314 "process": { 00:33:44.314 "type": "rebuild", 00:33:44.314 "target": "spare", 00:33:44.314 "progress": { 00:33:44.314 "blocks": 112640, 00:33:44.314 "percent": 88 00:33:44.314 } 00:33:44.314 }, 00:33:44.314 "base_bdevs_list": [ 00:33:44.314 { 00:33:44.314 "name": "spare", 00:33:44.314 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:44.314 "is_configured": true, 00:33:44.314 "data_offset": 2048, 00:33:44.314 "data_size": 63488 00:33:44.314 }, 00:33:44.314 { 00:33:44.314 "name": "BaseBdev2", 00:33:44.314 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:44.314 "is_configured": true, 00:33:44.314 "data_offset": 2048, 00:33:44.314 "data_size": 63488 00:33:44.314 }, 00:33:44.314 { 00:33:44.314 "name": "BaseBdev3", 
00:33:44.314 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:44.314 "is_configured": true, 00:33:44.314 "data_offset": 2048, 00:33:44.314 "data_size": 63488 00:33:44.314 } 00:33:44.314 ] 00:33:44.314 }' 00:33:44.314 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:44.314 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:44.314 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:44.314 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:44.314 21:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:44.881 [2024-07-15 21:47:18.148141] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:44.881 [2024-07-15 21:47:18.148354] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:44.881 [2024-07-15 21:47:18.148554] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:45.449 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:45.449 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:45.449 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:45.449 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:45.449 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:45.449 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:45.449 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.449 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:45.708 "name": "raid_bdev1", 00:33:45.708 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:45.708 "strip_size_kb": 64, 00:33:45.708 "state": "online", 00:33:45.708 "raid_level": "raid5f", 00:33:45.708 "superblock": true, 00:33:45.708 "num_base_bdevs": 3, 00:33:45.708 "num_base_bdevs_discovered": 3, 00:33:45.708 "num_base_bdevs_operational": 3, 00:33:45.708 "base_bdevs_list": [ 00:33:45.708 { 00:33:45.708 "name": "spare", 00:33:45.708 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:45.708 "is_configured": true, 00:33:45.708 "data_offset": 2048, 00:33:45.708 "data_size": 63488 00:33:45.708 }, 00:33:45.708 { 00:33:45.708 "name": "BaseBdev2", 00:33:45.708 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:45.708 "is_configured": true, 00:33:45.708 "data_offset": 2048, 00:33:45.708 "data_size": 63488 00:33:45.708 }, 00:33:45.708 { 00:33:45.708 "name": "BaseBdev3", 00:33:45.708 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:45.708 "is_configured": true, 00:33:45.708 "data_offset": 2048, 00:33:45.708 "data_size": 63488 00:33:45.708 } 00:33:45.708 ] 00:33:45.708 }' 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none 
== \r\e\b\u\i\l\d ]] 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.708 21:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:45.966 "name": "raid_bdev1", 00:33:45.966 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:45.966 "strip_size_kb": 64, 00:33:45.966 "state": "online", 00:33:45.966 "raid_level": "raid5f", 00:33:45.966 "superblock": true, 00:33:45.966 "num_base_bdevs": 3, 00:33:45.966 "num_base_bdevs_discovered": 3, 00:33:45.966 "num_base_bdevs_operational": 3, 00:33:45.966 "base_bdevs_list": [ 00:33:45.966 { 00:33:45.966 "name": "spare", 00:33:45.966 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:45.966 "is_configured": true, 00:33:45.966 "data_offset": 2048, 00:33:45.966 "data_size": 63488 00:33:45.966 }, 00:33:45.966 { 00:33:45.966 "name": "BaseBdev2", 00:33:45.966 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:45.966 "is_configured": true, 00:33:45.966 "data_offset": 2048, 00:33:45.966 "data_size": 63488 00:33:45.966 }, 00:33:45.966 { 00:33:45.966 "name": "BaseBdev3", 00:33:45.966 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:45.966 "is_configured": true, 00:33:45.966 "data_offset": 2048, 00:33:45.966 "data_size": 63488 00:33:45.966 } 00:33:45.966 ] 00:33:45.966 }' 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.966 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.225 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:46.225 "name": "raid_bdev1", 00:33:46.225 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:46.225 "strip_size_kb": 64, 00:33:46.225 "state": "online", 00:33:46.225 "raid_level": "raid5f", 00:33:46.225 "superblock": true, 00:33:46.225 "num_base_bdevs": 3, 00:33:46.225 "num_base_bdevs_discovered": 3, 00:33:46.225 "num_base_bdevs_operational": 3, 00:33:46.225 "base_bdevs_list": [ 00:33:46.225 { 00:33:46.225 "name": "spare", 00:33:46.225 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:46.225 "is_configured": true, 00:33:46.225 "data_offset": 2048, 00:33:46.225 "data_size": 63488 00:33:46.225 }, 00:33:46.225 { 00:33:46.225 "name": "BaseBdev2", 00:33:46.225 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:46.225 "is_configured": true, 00:33:46.225 "data_offset": 2048, 00:33:46.225 "data_size": 63488 00:33:46.225 }, 00:33:46.225 { 00:33:46.225 "name": "BaseBdev3", 00:33:46.225 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:46.225 "is_configured": true, 00:33:46.225 "data_offset": 2048, 00:33:46.225 "data_size": 63488 00:33:46.225 } 00:33:46.225 ] 00:33:46.225 }' 00:33:46.225 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:46.225 21:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.164 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:47.164 [2024-07-15 21:47:20.464249] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:47.164 [2024-07-15 21:47:20.464357] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:47.164 [2024-07-15 21:47:20.464457] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:47.164 [2024-07-15 21:47:20.464549] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:47.164 [2024-07-15 21:47:20.464581] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:33:47.164 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.164 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:47.424 21:47:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:47.424 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:47.685 /dev/nbd0 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:47.685 1+0 records in 00:33:47.685 1+0 records out 00:33:47.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359851 s, 11.4 MB/s 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:47.685 21:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:33:47.944 /dev/nbd1 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:47.944 1+0 records in 00:33:47.944 1+0 records out 00:33:47.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501358 s, 8.2 MB/s 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:47.944 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:48.202 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:33:48.202 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:48.202 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:48.202 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:48.202 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:48.202 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:48.202 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:48.460 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:48.460 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:48.460 21:47:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:48.461 21:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:33:48.720 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:48.978 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:49.236 [2024-07-15 21:47:22.469169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:49.236 [2024-07-15 21:47:22.469377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:49.236 [2024-07-15 21:47:22.469448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:49.236 [2024-07-15 21:47:22.469494] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:49.236 [2024-07-15 21:47:22.471829] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:49.236 [2024-07-15 21:47:22.471925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:49.236 [2024-07-15 21:47:22.472104] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:49.236 [2024-07-15 21:47:22.472204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:33:49.236 [2024-07-15 21:47:22.472382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:49.236 [2024-07-15 21:47:22.472508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:49.236 spare 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.236 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.236 [2024-07-15 21:47:22.572439] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:33:49.236 [2024-07-15 21:47:22.572528] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:49.236 [2024-07-15 21:47:22.572724] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bb40 00:33:49.236 [2024-07-15 21:47:22.578954] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:33:49.236 [2024-07-15 21:47:22.579016] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:33:49.236 [2024-07-15 21:47:22.579219] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.494 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:49.494 "name": "raid_bdev1", 00:33:49.494 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:49.494 "strip_size_kb": 64, 00:33:49.494 "state": "online", 00:33:49.494 "raid_level": "raid5f", 00:33:49.494 "superblock": true, 00:33:49.494 "num_base_bdevs": 3, 00:33:49.494 "num_base_bdevs_discovered": 3, 00:33:49.494 "num_base_bdevs_operational": 3, 00:33:49.494 "base_bdevs_list": [ 00:33:49.494 { 00:33:49.494 "name": "spare", 00:33:49.494 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:49.494 "is_configured": true, 00:33:49.494 "data_offset": 2048, 00:33:49.494 "data_size": 63488 00:33:49.494 }, 00:33:49.494 { 00:33:49.494 "name": "BaseBdev2", 00:33:49.494 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:49.494 "is_configured": true, 00:33:49.494 "data_offset": 2048, 00:33:49.494 "data_size": 63488 00:33:49.494 }, 00:33:49.494 { 00:33:49.494 "name": "BaseBdev3", 00:33:49.494 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 
00:33:49.494 "is_configured": true, 00:33:49.494 "data_offset": 2048, 00:33:49.494 "data_size": 63488 00:33:49.494 } 00:33:49.494 ] 00:33:49.494 }' 00:33:49.494 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:49.494 21:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.063 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:50.063 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:50.063 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:50.064 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:50.064 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:50.064 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.064 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.321 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:50.321 "name": "raid_bdev1", 00:33:50.321 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:50.321 "strip_size_kb": 64, 00:33:50.321 "state": "online", 00:33:50.321 "raid_level": "raid5f", 00:33:50.321 "superblock": true, 00:33:50.321 "num_base_bdevs": 3, 00:33:50.321 "num_base_bdevs_discovered": 3, 00:33:50.321 "num_base_bdevs_operational": 3, 00:33:50.321 "base_bdevs_list": [ 00:33:50.321 { 00:33:50.321 "name": "spare", 00:33:50.321 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:50.321 "is_configured": true, 00:33:50.321 "data_offset": 2048, 00:33:50.321 "data_size": 63488 00:33:50.321 }, 00:33:50.321 { 00:33:50.321 "name": "BaseBdev2", 00:33:50.321 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:50.321 "is_configured": true, 00:33:50.321 "data_offset": 2048, 00:33:50.321 "data_size": 63488 00:33:50.321 }, 00:33:50.321 { 00:33:50.321 "name": "BaseBdev3", 00:33:50.321 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:50.321 "is_configured": true, 00:33:50.321 "data_offset": 2048, 00:33:50.321 "data_size": 63488 00:33:50.321 } 00:33:50.321 ] 00:33:50.321 }' 00:33:50.321 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:50.580 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:50.580 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:50.580 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:50.580 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.580 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:50.838 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:33:50.838 21:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:50.838 [2024-07-15 21:47:24.167197] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.838 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.097 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:51.097 "name": "raid_bdev1", 00:33:51.097 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:51.097 "strip_size_kb": 64, 00:33:51.097 "state": "online", 00:33:51.097 "raid_level": "raid5f", 00:33:51.097 "superblock": true, 00:33:51.097 "num_base_bdevs": 3, 00:33:51.097 "num_base_bdevs_discovered": 2, 00:33:51.097 "num_base_bdevs_operational": 2, 00:33:51.097 "base_bdevs_list": [ 00:33:51.097 { 00:33:51.097 "name": null, 00:33:51.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.097 "is_configured": false, 00:33:51.097 "data_offset": 2048, 00:33:51.097 "data_size": 63488 00:33:51.097 }, 00:33:51.097 { 00:33:51.097 "name": "BaseBdev2", 00:33:51.097 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:51.097 "is_configured": true, 00:33:51.097 "data_offset": 2048, 00:33:51.097 "data_size": 63488 00:33:51.097 }, 00:33:51.097 { 00:33:51.097 "name": "BaseBdev3", 00:33:51.097 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:51.097 "is_configured": true, 00:33:51.097 "data_offset": 2048, 00:33:51.097 "data_size": 63488 00:33:51.097 } 00:33:51.097 ] 00:33:51.097 }' 00:33:51.097 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:51.097 21:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.034 21:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:52.034 [2024-07-15 21:47:25.245436] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:52.034 [2024-07-15 21:47:25.245721] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:52.034 [2024-07-15 21:47:25.245805] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
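The sequence traced above is the heart of this rebuild check: the spare base bdev is detached with bdev_raid_remove_base_bdev, the array is verified to stay online with two of three members, and the same bdev is handed back with bdev_raid_add_base_bdev, at which point examine sees its older superblock sequence number (4 versus the raid's 5), re-adds it, and starts a rebuild whose progress the surrounding bdev_raid_get_bdevs dumps report. Below is a condensed sketch of that cycle using only RPC methods and jq filters that appear in the trace; the wait_for_rebuild helper, its one-second polling interval, and the rpc shorthand variable are illustrative and not part of the test suite.

    # Sketch of the remove / re-add / rebuild cycle exercised above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Detach the spare base bdev; raid_bdev1 stays online with 2 of 3 members.
    $rpc bdev_raid_remove_base_bdev spare

    # Hand the same bdev back. Its superblock sequence number is behind the
    # raid bdev's, so examine re-adds it and a rebuild starts automatically.
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare

    # Poll until the rebuild process disappears from the raid bdev info,
    # mirroring the jq filters used by the verification steps in the trace.
    wait_for_rebuild() {
        while :; do
            ptype=$($rpc bdev_raid_get_bdevs all |
                jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
            [[ $ptype == none ]] && break
            sleep 1
        done
    }
    wait_for_rebuild

While the rebuild is running, the same info reports .process.type as "rebuild" and .process.target as "spare", which is what the [[ rebuild == \r\e\b\u\i\l\d ]] and [[ spare == \s\p\a\r\e ]] checks in the trace assert; once it completes, both fall back to "none" and the state check expects all three base bdevs configured again.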
00:33:52.034 [2024-07-15 21:47:25.245886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:52.034 [2024-07-15 21:47:25.262027] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bce0 00:33:52.034 [2024-07-15 21:47:25.269309] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:52.034 21:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:33:52.969 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:52.969 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:52.969 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:52.969 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:52.969 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:52.969 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.969 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.227 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:53.227 "name": "raid_bdev1", 00:33:53.227 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:53.227 "strip_size_kb": 64, 00:33:53.227 "state": "online", 00:33:53.227 "raid_level": "raid5f", 00:33:53.227 "superblock": true, 00:33:53.227 "num_base_bdevs": 3, 00:33:53.227 "num_base_bdevs_discovered": 3, 00:33:53.227 "num_base_bdevs_operational": 3, 00:33:53.227 "process": { 00:33:53.227 "type": "rebuild", 00:33:53.227 "target": "spare", 00:33:53.227 "progress": { 00:33:53.227 "blocks": 22528, 00:33:53.227 "percent": 17 00:33:53.227 } 00:33:53.227 }, 00:33:53.227 "base_bdevs_list": [ 00:33:53.227 { 00:33:53.227 "name": "spare", 00:33:53.227 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:53.227 "is_configured": true, 00:33:53.227 "data_offset": 2048, 00:33:53.227 "data_size": 63488 00:33:53.227 }, 00:33:53.227 { 00:33:53.227 "name": "BaseBdev2", 00:33:53.227 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:53.227 "is_configured": true, 00:33:53.227 "data_offset": 2048, 00:33:53.227 "data_size": 63488 00:33:53.227 }, 00:33:53.227 { 00:33:53.227 "name": "BaseBdev3", 00:33:53.227 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:53.227 "is_configured": true, 00:33:53.227 "data_offset": 2048, 00:33:53.227 "data_size": 63488 00:33:53.227 } 00:33:53.227 ] 00:33:53.227 }' 00:33:53.227 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:53.227 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:53.227 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:53.227 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:53.227 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:53.485 [2024-07-15 21:47:26.781503] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:53.485 [2024-07-15 
21:47:26.781694] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:53.485 [2024-07-15 21:47:26.781778] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:53.485 [2024-07-15 21:47:26.781806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:53.485 [2024-07-15 21:47:26.781839] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.485 21:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.743 21:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:53.743 "name": "raid_bdev1", 00:33:53.743 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:53.743 "strip_size_kb": 64, 00:33:53.743 "state": "online", 00:33:53.743 "raid_level": "raid5f", 00:33:53.743 "superblock": true, 00:33:53.743 "num_base_bdevs": 3, 00:33:53.743 "num_base_bdevs_discovered": 2, 00:33:53.743 "num_base_bdevs_operational": 2, 00:33:53.743 "base_bdevs_list": [ 00:33:53.743 { 00:33:53.743 "name": null, 00:33:53.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:53.743 "is_configured": false, 00:33:53.743 "data_offset": 2048, 00:33:53.743 "data_size": 63488 00:33:53.743 }, 00:33:53.743 { 00:33:53.743 "name": "BaseBdev2", 00:33:53.743 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:53.743 "is_configured": true, 00:33:53.743 "data_offset": 2048, 00:33:53.743 "data_size": 63488 00:33:53.743 }, 00:33:53.743 { 00:33:53.743 "name": "BaseBdev3", 00:33:53.743 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:53.743 "is_configured": true, 00:33:53.743 "data_offset": 2048, 00:33:53.743 "data_size": 63488 00:33:53.743 } 00:33:53.743 ] 00:33:53.743 }' 00:33:53.743 21:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:53.743 21:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.341 21:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:54.600 
[2024-07-15 21:47:27.802214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:54.600 [2024-07-15 21:47:27.802374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:54.600 [2024-07-15 21:47:27.802442] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:54.600 [2024-07-15 21:47:27.802490] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:54.600 [2024-07-15 21:47:27.803015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:54.600 [2024-07-15 21:47:27.803087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:54.600 [2024-07-15 21:47:27.803243] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:54.600 [2024-07-15 21:47:27.803283] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:54.600 [2024-07-15 21:47:27.803308] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:54.600 [2024-07-15 21:47:27.803382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:54.600 [2024-07-15 21:47:27.818647] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004c020 00:33:54.600 spare 00:33:54.600 [2024-07-15 21:47:27.826508] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:54.600 21:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:33:55.533 21:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:55.533 21:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:55.533 21:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:55.533 21:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:55.533 21:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:55.533 21:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.533 21:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.791 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:55.791 "name": "raid_bdev1", 00:33:55.791 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:55.791 "strip_size_kb": 64, 00:33:55.791 "state": "online", 00:33:55.791 "raid_level": "raid5f", 00:33:55.791 "superblock": true, 00:33:55.791 "num_base_bdevs": 3, 00:33:55.791 "num_base_bdevs_discovered": 3, 00:33:55.791 "num_base_bdevs_operational": 3, 00:33:55.791 "process": { 00:33:55.791 "type": "rebuild", 00:33:55.791 "target": "spare", 00:33:55.791 "progress": { 00:33:55.791 "blocks": 22528, 00:33:55.791 "percent": 17 00:33:55.791 } 00:33:55.791 }, 00:33:55.791 "base_bdevs_list": [ 00:33:55.791 { 00:33:55.791 "name": "spare", 00:33:55.791 "uuid": "37318ec9-fe8c-5885-bcca-bb9cd4f50d39", 00:33:55.791 "is_configured": true, 00:33:55.791 "data_offset": 2048, 00:33:55.791 "data_size": 63488 00:33:55.791 }, 00:33:55.791 { 00:33:55.791 "name": "BaseBdev2", 00:33:55.791 "uuid": 
"e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:55.791 "is_configured": true, 00:33:55.791 "data_offset": 2048, 00:33:55.791 "data_size": 63488 00:33:55.791 }, 00:33:55.791 { 00:33:55.791 "name": "BaseBdev3", 00:33:55.791 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:55.791 "is_configured": true, 00:33:55.791 "data_offset": 2048, 00:33:55.791 "data_size": 63488 00:33:55.791 } 00:33:55.791 ] 00:33:55.791 }' 00:33:55.791 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:55.791 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:55.791 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:55.791 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:55.791 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:56.049 [2024-07-15 21:47:29.354007] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:56.307 [2024-07-15 21:47:29.439939] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:56.307 [2024-07-15 21:47:29.440060] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:56.307 [2024-07-15 21:47:29.440090] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:56.307 [2024-07-15 21:47:29.440114] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.307 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.564 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:56.564 "name": "raid_bdev1", 00:33:56.564 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:56.564 "strip_size_kb": 64, 00:33:56.564 "state": "online", 00:33:56.564 "raid_level": "raid5f", 00:33:56.564 "superblock": true, 00:33:56.564 "num_base_bdevs": 3, 00:33:56.564 "num_base_bdevs_discovered": 2, 00:33:56.564 
"num_base_bdevs_operational": 2, 00:33:56.564 "base_bdevs_list": [ 00:33:56.564 { 00:33:56.564 "name": null, 00:33:56.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.564 "is_configured": false, 00:33:56.564 "data_offset": 2048, 00:33:56.564 "data_size": 63488 00:33:56.564 }, 00:33:56.564 { 00:33:56.564 "name": "BaseBdev2", 00:33:56.564 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:56.564 "is_configured": true, 00:33:56.564 "data_offset": 2048, 00:33:56.564 "data_size": 63488 00:33:56.564 }, 00:33:56.564 { 00:33:56.564 "name": "BaseBdev3", 00:33:56.564 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:56.564 "is_configured": true, 00:33:56.564 "data_offset": 2048, 00:33:56.564 "data_size": 63488 00:33:56.564 } 00:33:56.564 ] 00:33:56.564 }' 00:33:56.564 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:56.564 21:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.128 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:57.128 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:57.128 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:57.128 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:57.128 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:57.128 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.128 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.386 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:57.386 "name": "raid_bdev1", 00:33:57.386 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:57.386 "strip_size_kb": 64, 00:33:57.386 "state": "online", 00:33:57.386 "raid_level": "raid5f", 00:33:57.386 "superblock": true, 00:33:57.386 "num_base_bdevs": 3, 00:33:57.386 "num_base_bdevs_discovered": 2, 00:33:57.386 "num_base_bdevs_operational": 2, 00:33:57.386 "base_bdevs_list": [ 00:33:57.386 { 00:33:57.386 "name": null, 00:33:57.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:57.386 "is_configured": false, 00:33:57.386 "data_offset": 2048, 00:33:57.386 "data_size": 63488 00:33:57.386 }, 00:33:57.386 { 00:33:57.386 "name": "BaseBdev2", 00:33:57.386 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:57.386 "is_configured": true, 00:33:57.386 "data_offset": 2048, 00:33:57.386 "data_size": 63488 00:33:57.386 }, 00:33:57.386 { 00:33:57.386 "name": "BaseBdev3", 00:33:57.386 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:57.386 "is_configured": true, 00:33:57.386 "data_offset": 2048, 00:33:57.386 "data_size": 63488 00:33:57.386 } 00:33:57.386 ] 00:33:57.386 }' 00:33:57.386 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:57.386 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:57.386 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:57.386 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:57.386 21:47:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:57.644 21:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:57.904 [2024-07-15 21:47:31.126432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:57.904 [2024-07-15 21:47:31.126592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:57.904 [2024-07-15 21:47:31.126648] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:57.904 [2024-07-15 21:47:31.126687] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:57.904 [2024-07-15 21:47:31.127193] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:57.904 [2024-07-15 21:47:31.127261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:57.904 [2024-07-15 21:47:31.127423] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:57.904 [2024-07-15 21:47:31.127475] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:57.904 [2024-07-15 21:47:31.127499] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:57.904 BaseBdev1 00:33:57.904 21:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.840 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.098 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:59.098 "name": "raid_bdev1", 00:33:59.098 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:33:59.098 "strip_size_kb": 64, 00:33:59.098 "state": "online", 00:33:59.098 "raid_level": "raid5f", 00:33:59.098 "superblock": true, 00:33:59.098 "num_base_bdevs": 3, 00:33:59.098 "num_base_bdevs_discovered": 2, 00:33:59.098 
"num_base_bdevs_operational": 2, 00:33:59.098 "base_bdevs_list": [ 00:33:59.098 { 00:33:59.098 "name": null, 00:33:59.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:59.098 "is_configured": false, 00:33:59.098 "data_offset": 2048, 00:33:59.098 "data_size": 63488 00:33:59.098 }, 00:33:59.098 { 00:33:59.098 "name": "BaseBdev2", 00:33:59.098 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:33:59.098 "is_configured": true, 00:33:59.098 "data_offset": 2048, 00:33:59.098 "data_size": 63488 00:33:59.098 }, 00:33:59.098 { 00:33:59.098 "name": "BaseBdev3", 00:33:59.098 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:33:59.098 "is_configured": true, 00:33:59.098 "data_offset": 2048, 00:33:59.098 "data_size": 63488 00:33:59.098 } 00:33:59.098 ] 00:33:59.098 }' 00:33:59.098 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:59.098 21:47:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:00.034 "name": "raid_bdev1", 00:34:00.034 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:34:00.034 "strip_size_kb": 64, 00:34:00.034 "state": "online", 00:34:00.034 "raid_level": "raid5f", 00:34:00.034 "superblock": true, 00:34:00.034 "num_base_bdevs": 3, 00:34:00.034 "num_base_bdevs_discovered": 2, 00:34:00.034 "num_base_bdevs_operational": 2, 00:34:00.034 "base_bdevs_list": [ 00:34:00.034 { 00:34:00.034 "name": null, 00:34:00.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:00.034 "is_configured": false, 00:34:00.034 "data_offset": 2048, 00:34:00.034 "data_size": 63488 00:34:00.034 }, 00:34:00.034 { 00:34:00.034 "name": "BaseBdev2", 00:34:00.034 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:34:00.034 "is_configured": true, 00:34:00.034 "data_offset": 2048, 00:34:00.034 "data_size": 63488 00:34:00.034 }, 00:34:00.034 { 00:34:00.034 "name": "BaseBdev3", 00:34:00.034 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:34:00.034 "is_configured": true, 00:34:00.034 "data_offset": 2048, 00:34:00.034 "data_size": 63488 00:34:00.034 } 00:34:00.034 ] 00:34:00.034 }' 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:00.034 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:00.293 21:47:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:00.293 [2024-07-15 21:47:33.634511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:00.293 [2024-07-15 21:47:33.634839] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:00.293 [2024-07-15 21:47:33.634885] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:00.293 request: 00:34:00.293 { 00:34:00.293 "base_bdev": "BaseBdev1", 00:34:00.293 "raid_bdev": "raid_bdev1", 00:34:00.293 "method": "bdev_raid_add_base_bdev", 00:34:00.293 "req_id": 1 00:34:00.293 } 00:34:00.293 Got JSON-RPC error response 00:34:00.293 response: 00:34:00.293 { 00:34:00.293 "code": -22, 00:34:00.293 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:00.293 } 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:00.293 21:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:01.677 "name": "raid_bdev1", 00:34:01.677 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:34:01.677 "strip_size_kb": 64, 00:34:01.677 "state": "online", 00:34:01.677 "raid_level": "raid5f", 00:34:01.677 "superblock": true, 00:34:01.677 "num_base_bdevs": 3, 00:34:01.677 "num_base_bdevs_discovered": 2, 00:34:01.677 "num_base_bdevs_operational": 2, 00:34:01.677 "base_bdevs_list": [ 00:34:01.677 { 00:34:01.677 "name": null, 00:34:01.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.677 "is_configured": false, 00:34:01.677 "data_offset": 2048, 00:34:01.677 "data_size": 63488 00:34:01.677 }, 00:34:01.677 { 00:34:01.677 "name": "BaseBdev2", 00:34:01.677 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:34:01.677 "is_configured": true, 00:34:01.677 "data_offset": 2048, 00:34:01.677 "data_size": 63488 00:34:01.677 }, 00:34:01.677 { 00:34:01.677 "name": "BaseBdev3", 00:34:01.677 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:34:01.677 "is_configured": true, 00:34:01.677 "data_offset": 2048, 00:34:01.677 "data_size": 63488 00:34:01.677 } 00:34:01.677 ] 00:34:01.677 }' 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:01.677 21:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.246 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:02.246 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:02.246 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:02.246 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:02.246 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:02.246 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.246 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.505 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:02.505 "name": "raid_bdev1", 00:34:02.505 "uuid": "a35aa4dd-af16-430d-a6f9-a358b957bf9b", 00:34:02.505 
"strip_size_kb": 64, 00:34:02.505 "state": "online", 00:34:02.505 "raid_level": "raid5f", 00:34:02.505 "superblock": true, 00:34:02.505 "num_base_bdevs": 3, 00:34:02.505 "num_base_bdevs_discovered": 2, 00:34:02.505 "num_base_bdevs_operational": 2, 00:34:02.505 "base_bdevs_list": [ 00:34:02.505 { 00:34:02.505 "name": null, 00:34:02.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.505 "is_configured": false, 00:34:02.505 "data_offset": 2048, 00:34:02.505 "data_size": 63488 00:34:02.505 }, 00:34:02.505 { 00:34:02.505 "name": "BaseBdev2", 00:34:02.505 "uuid": "e86ab752-e135-5789-b945-7af5e4a7fad4", 00:34:02.505 "is_configured": true, 00:34:02.505 "data_offset": 2048, 00:34:02.505 "data_size": 63488 00:34:02.505 }, 00:34:02.505 { 00:34:02.505 "name": "BaseBdev3", 00:34:02.505 "uuid": "cb9dbdad-325c-504e-8ec6-52b5ea4a6323", 00:34:02.505 "is_configured": true, 00:34:02.505 "data_offset": 2048, 00:34:02.505 "data_size": 63488 00:34:02.505 } 00:34:02.505 ] 00:34:02.505 }' 00:34:02.505 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:02.505 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:02.505 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 154820 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 154820 ']' 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 154820 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 154820 00:34:02.764 killing process with pid 154820 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 154820' 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 154820 00:34:02.764 21:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 154820 00:34:02.764 Received shutdown signal, test time was about 60.000000 seconds 00:34:02.764 00:34:02.764 Latency(us) 00:34:02.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.764 =================================================================================================================== 00:34:02.764 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:02.764 [2024-07-15 21:47:35.921795] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:02.764 [2024-07-15 21:47:35.922073] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:02.764 [2024-07-15 21:47:35.922177] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:02.764 [2024-07-15 21:47:35.922210] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:34:03.023 [2024-07-15 21:47:36.365167] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:04.403 ************************************ 00:34:04.403 END TEST raid5f_rebuild_test_sb 00:34:04.403 ************************************ 00:34:04.403 21:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:34:04.403 00:34:04.403 real 0m35.851s 00:34:04.403 user 0m56.223s 00:34:04.403 sys 0m3.814s 00:34:04.403 21:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:04.403 21:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.663 21:47:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:04.663 21:47:37 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:34:04.663 21:47:37 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:34:04.663 21:47:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:34:04.663 21:47:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.663 21:47:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:04.663 ************************************ 00:34:04.663 START TEST raid5f_state_function_test 00:34:04.663 ************************************ 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 false 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=155800 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 155800' 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:04.663 Process raid pid: 155800 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 155800 /var/tmp/spdk-raid.sock 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 155800 ']' 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:04.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:04.663 21:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.663 [2024-07-15 21:47:37.899554] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:34:04.663 [2024-07-15 21:47:37.900238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.922 [2024-07-15 21:47:38.064067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.922 [2024-07-15 21:47:38.293505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.181 [2024-07-15 21:47:38.518667] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:05.440 21:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:05.440 21:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:34:05.440 21:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:05.699 [2024-07-15 21:47:39.024206] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:05.699 [2024-07-15 21:47:39.024374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:05.699 [2024-07-15 21:47:39.024411] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:05.699 [2024-07-15 21:47:39.024446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:05.699 [2024-07-15 21:47:39.024466] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:05.699 [2024-07-15 21:47:39.024489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:05.699 [2024-07-15 21:47:39.024523] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:05.699 [2024-07-15 21:47:39.024555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:05.699 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.699 21:47:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:05.959 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:05.959 "name": "Existed_Raid", 00:34:05.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.959 "strip_size_kb": 64, 00:34:05.959 "state": "configuring", 00:34:05.959 "raid_level": "raid5f", 00:34:05.959 "superblock": false, 00:34:05.959 "num_base_bdevs": 4, 00:34:05.959 "num_base_bdevs_discovered": 0, 00:34:05.959 "num_base_bdevs_operational": 4, 00:34:05.959 "base_bdevs_list": [ 00:34:05.959 { 00:34:05.959 "name": "BaseBdev1", 00:34:05.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.959 "is_configured": false, 00:34:05.959 "data_offset": 0, 00:34:05.959 "data_size": 0 00:34:05.959 }, 00:34:05.959 { 00:34:05.959 "name": "BaseBdev2", 00:34:05.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.959 "is_configured": false, 00:34:05.959 "data_offset": 0, 00:34:05.959 "data_size": 0 00:34:05.959 }, 00:34:05.959 { 00:34:05.959 "name": "BaseBdev3", 00:34:05.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.959 "is_configured": false, 00:34:05.959 "data_offset": 0, 00:34:05.959 "data_size": 0 00:34:05.959 }, 00:34:05.959 { 00:34:05.959 "name": "BaseBdev4", 00:34:05.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.959 "is_configured": false, 00:34:05.959 "data_offset": 0, 00:34:05.959 "data_size": 0 00:34:05.959 } 00:34:05.959 ] 00:34:05.959 }' 00:34:05.959 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:05.959 21:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.528 21:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:06.786 [2024-07-15 21:47:40.090436] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:06.786 [2024-07-15 21:47:40.090584] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:06.786 21:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:07.080 [2024-07-15 21:47:40.322008] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:07.080 [2024-07-15 21:47:40.322148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:07.080 [2024-07-15 21:47:40.322208] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:07.080 [2024-07-15 21:47:40.322286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:07.080 [2024-07-15 21:47:40.322322] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:07.080 [2024-07-15 21:47:40.322387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:07.080 [2024-07-15 21:47:40.322416] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:07.080 [2024-07-15 21:47:40.322456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:07.080 21:47:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:07.369 [2024-07-15 21:47:40.568752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:07.369 BaseBdev1 00:34:07.369 21:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:07.369 21:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:07.369 21:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:07.369 21:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:07.369 21:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:07.370 21:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:07.370 21:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:07.629 21:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:07.629 [ 00:34:07.629 { 00:34:07.629 "name": "BaseBdev1", 00:34:07.629 "aliases": [ 00:34:07.629 "2e12ea22-4287-4471-bb03-e3c1a77a57ae" 00:34:07.629 ], 00:34:07.629 "product_name": "Malloc disk", 00:34:07.629 "block_size": 512, 00:34:07.629 "num_blocks": 65536, 00:34:07.629 "uuid": "2e12ea22-4287-4471-bb03-e3c1a77a57ae", 00:34:07.629 "assigned_rate_limits": { 00:34:07.629 "rw_ios_per_sec": 0, 00:34:07.629 "rw_mbytes_per_sec": 0, 00:34:07.629 "r_mbytes_per_sec": 0, 00:34:07.629 "w_mbytes_per_sec": 0 00:34:07.629 }, 00:34:07.629 "claimed": true, 00:34:07.629 "claim_type": "exclusive_write", 00:34:07.629 "zoned": false, 00:34:07.629 "supported_io_types": { 00:34:07.629 "read": true, 00:34:07.629 "write": true, 00:34:07.629 "unmap": true, 00:34:07.629 "flush": true, 00:34:07.629 "reset": true, 00:34:07.629 "nvme_admin": false, 00:34:07.629 "nvme_io": false, 00:34:07.629 "nvme_io_md": false, 00:34:07.629 "write_zeroes": true, 00:34:07.629 "zcopy": true, 00:34:07.629 "get_zone_info": false, 00:34:07.629 "zone_management": false, 00:34:07.629 "zone_append": false, 00:34:07.629 "compare": false, 00:34:07.629 "compare_and_write": false, 00:34:07.629 "abort": true, 00:34:07.629 "seek_hole": false, 00:34:07.629 "seek_data": false, 00:34:07.629 "copy": true, 00:34:07.629 "nvme_iov_md": false 00:34:07.629 }, 00:34:07.629 "memory_domains": [ 00:34:07.629 { 00:34:07.629 "dma_device_id": "system", 00:34:07.629 "dma_device_type": 1 00:34:07.629 }, 00:34:07.629 { 00:34:07.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:07.629 "dma_device_type": 2 00:34:07.629 } 00:34:07.629 ], 00:34:07.629 "driver_specific": {} 00:34:07.629 } 00:34:07.629 ] 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:07.888 "name": "Existed_Raid", 00:34:07.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.888 "strip_size_kb": 64, 00:34:07.888 "state": "configuring", 00:34:07.888 "raid_level": "raid5f", 00:34:07.888 "superblock": false, 00:34:07.888 "num_base_bdevs": 4, 00:34:07.888 "num_base_bdevs_discovered": 1, 00:34:07.888 "num_base_bdevs_operational": 4, 00:34:07.888 "base_bdevs_list": [ 00:34:07.888 { 00:34:07.888 "name": "BaseBdev1", 00:34:07.888 "uuid": "2e12ea22-4287-4471-bb03-e3c1a77a57ae", 00:34:07.888 "is_configured": true, 00:34:07.888 "data_offset": 0, 00:34:07.888 "data_size": 65536 00:34:07.888 }, 00:34:07.888 { 00:34:07.888 "name": "BaseBdev2", 00:34:07.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.888 "is_configured": false, 00:34:07.888 "data_offset": 0, 00:34:07.888 "data_size": 0 00:34:07.888 }, 00:34:07.888 { 00:34:07.888 "name": "BaseBdev3", 00:34:07.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.888 "is_configured": false, 00:34:07.888 "data_offset": 0, 00:34:07.888 "data_size": 0 00:34:07.888 }, 00:34:07.888 { 00:34:07.888 "name": "BaseBdev4", 00:34:07.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.888 "is_configured": false, 00:34:07.888 "data_offset": 0, 00:34:07.888 "data_size": 0 00:34:07.888 } 00:34:07.888 ] 00:34:07.888 }' 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:07.888 21:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:08.823 21:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:08.823 [2024-07-15 21:47:42.054346] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:08.823 [2024-07-15 21:47:42.054508] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:34:08.823 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:09.081 [2024-07-15 21:47:42.277991] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:09.081 [2024-07-15 21:47:42.279861] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:09.081 [2024-07-15 21:47:42.279965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:09.081 [2024-07-15 21:47:42.280018] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:09.081 [2024-07-15 21:47:42.280078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:09.081 [2024-07-15 21:47:42.280122] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:09.081 [2024-07-15 21:47:42.280181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.081 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:09.341 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:09.341 "name": "Existed_Raid", 00:34:09.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.341 "strip_size_kb": 64, 00:34:09.341 "state": "configuring", 00:34:09.341 "raid_level": "raid5f", 00:34:09.341 "superblock": false, 00:34:09.341 "num_base_bdevs": 4, 00:34:09.341 "num_base_bdevs_discovered": 1, 00:34:09.341 "num_base_bdevs_operational": 4, 00:34:09.341 "base_bdevs_list": [ 00:34:09.341 { 00:34:09.341 "name": "BaseBdev1", 00:34:09.341 "uuid": "2e12ea22-4287-4471-bb03-e3c1a77a57ae", 00:34:09.341 "is_configured": true, 00:34:09.341 "data_offset": 0, 00:34:09.341 "data_size": 65536 00:34:09.341 }, 00:34:09.341 { 00:34:09.341 "name": "BaseBdev2", 00:34:09.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.341 "is_configured": false, 00:34:09.341 "data_offset": 0, 00:34:09.341 "data_size": 0 00:34:09.341 }, 00:34:09.341 { 
00:34:09.341 "name": "BaseBdev3", 00:34:09.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.341 "is_configured": false, 00:34:09.341 "data_offset": 0, 00:34:09.341 "data_size": 0 00:34:09.341 }, 00:34:09.341 { 00:34:09.341 "name": "BaseBdev4", 00:34:09.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.341 "is_configured": false, 00:34:09.341 "data_offset": 0, 00:34:09.341 "data_size": 0 00:34:09.341 } 00:34:09.341 ] 00:34:09.341 }' 00:34:09.341 21:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:09.341 21:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.908 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:10.170 [2024-07-15 21:47:43.417310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:10.170 BaseBdev2 00:34:10.170 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:10.170 21:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:10.170 21:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:10.170 21:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:10.170 21:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:10.170 21:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:10.170 21:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:10.435 21:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:10.692 [ 00:34:10.692 { 00:34:10.692 "name": "BaseBdev2", 00:34:10.692 "aliases": [ 00:34:10.692 "8f0c0780-6217-4f2a-8f07-85fe29f92c48" 00:34:10.692 ], 00:34:10.692 "product_name": "Malloc disk", 00:34:10.692 "block_size": 512, 00:34:10.692 "num_blocks": 65536, 00:34:10.692 "uuid": "8f0c0780-6217-4f2a-8f07-85fe29f92c48", 00:34:10.692 "assigned_rate_limits": { 00:34:10.692 "rw_ios_per_sec": 0, 00:34:10.692 "rw_mbytes_per_sec": 0, 00:34:10.692 "r_mbytes_per_sec": 0, 00:34:10.692 "w_mbytes_per_sec": 0 00:34:10.692 }, 00:34:10.692 "claimed": true, 00:34:10.692 "claim_type": "exclusive_write", 00:34:10.692 "zoned": false, 00:34:10.692 "supported_io_types": { 00:34:10.692 "read": true, 00:34:10.692 "write": true, 00:34:10.692 "unmap": true, 00:34:10.692 "flush": true, 00:34:10.692 "reset": true, 00:34:10.692 "nvme_admin": false, 00:34:10.692 "nvme_io": false, 00:34:10.692 "nvme_io_md": false, 00:34:10.692 "write_zeroes": true, 00:34:10.692 "zcopy": true, 00:34:10.692 "get_zone_info": false, 00:34:10.692 "zone_management": false, 00:34:10.692 "zone_append": false, 00:34:10.692 "compare": false, 00:34:10.692 "compare_and_write": false, 00:34:10.692 "abort": true, 00:34:10.692 "seek_hole": false, 00:34:10.692 "seek_data": false, 00:34:10.692 "copy": true, 00:34:10.692 "nvme_iov_md": false 00:34:10.692 }, 00:34:10.692 "memory_domains": [ 00:34:10.692 { 00:34:10.692 "dma_device_id": "system", 00:34:10.692 "dma_device_type": 1 00:34:10.692 }, 
00:34:10.692 { 00:34:10.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:10.692 "dma_device_type": 2 00:34:10.693 } 00:34:10.693 ], 00:34:10.693 "driver_specific": {} 00:34:10.693 } 00:34:10.693 ] 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:10.693 21:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.950 21:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:10.950 "name": "Existed_Raid", 00:34:10.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:10.950 "strip_size_kb": 64, 00:34:10.950 "state": "configuring", 00:34:10.950 "raid_level": "raid5f", 00:34:10.950 "superblock": false, 00:34:10.950 "num_base_bdevs": 4, 00:34:10.950 "num_base_bdevs_discovered": 2, 00:34:10.950 "num_base_bdevs_operational": 4, 00:34:10.950 "base_bdevs_list": [ 00:34:10.950 { 00:34:10.950 "name": "BaseBdev1", 00:34:10.950 "uuid": "2e12ea22-4287-4471-bb03-e3c1a77a57ae", 00:34:10.950 "is_configured": true, 00:34:10.950 "data_offset": 0, 00:34:10.950 "data_size": 65536 00:34:10.950 }, 00:34:10.950 { 00:34:10.950 "name": "BaseBdev2", 00:34:10.950 "uuid": "8f0c0780-6217-4f2a-8f07-85fe29f92c48", 00:34:10.950 "is_configured": true, 00:34:10.950 "data_offset": 0, 00:34:10.950 "data_size": 65536 00:34:10.950 }, 00:34:10.951 { 00:34:10.951 "name": "BaseBdev3", 00:34:10.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:10.951 "is_configured": false, 00:34:10.951 "data_offset": 0, 00:34:10.951 "data_size": 0 00:34:10.951 }, 00:34:10.951 { 00:34:10.951 "name": "BaseBdev4", 00:34:10.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:10.951 "is_configured": false, 00:34:10.951 "data_offset": 0, 00:34:10.951 "data_size": 0 00:34:10.951 } 00:34:10.951 ] 00:34:10.951 }' 00:34:10.951 21:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:34:10.951 21:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.515 21:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:11.772 [2024-07-15 21:47:45.027315] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:11.772 BaseBdev3 00:34:11.772 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:34:11.772 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:11.772 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:11.772 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:11.772 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:11.772 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:11.772 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:12.030 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:12.289 [ 00:34:12.289 { 00:34:12.289 "name": "BaseBdev3", 00:34:12.289 "aliases": [ 00:34:12.289 "975236b6-6e06-45ca-aa20-4878e2105bc2" 00:34:12.289 ], 00:34:12.289 "product_name": "Malloc disk", 00:34:12.289 "block_size": 512, 00:34:12.289 "num_blocks": 65536, 00:34:12.289 "uuid": "975236b6-6e06-45ca-aa20-4878e2105bc2", 00:34:12.289 "assigned_rate_limits": { 00:34:12.289 "rw_ios_per_sec": 0, 00:34:12.289 "rw_mbytes_per_sec": 0, 00:34:12.289 "r_mbytes_per_sec": 0, 00:34:12.289 "w_mbytes_per_sec": 0 00:34:12.289 }, 00:34:12.289 "claimed": true, 00:34:12.289 "claim_type": "exclusive_write", 00:34:12.289 "zoned": false, 00:34:12.289 "supported_io_types": { 00:34:12.289 "read": true, 00:34:12.289 "write": true, 00:34:12.289 "unmap": true, 00:34:12.289 "flush": true, 00:34:12.289 "reset": true, 00:34:12.289 "nvme_admin": false, 00:34:12.289 "nvme_io": false, 00:34:12.289 "nvme_io_md": false, 00:34:12.289 "write_zeroes": true, 00:34:12.289 "zcopy": true, 00:34:12.289 "get_zone_info": false, 00:34:12.289 "zone_management": false, 00:34:12.289 "zone_append": false, 00:34:12.289 "compare": false, 00:34:12.289 "compare_and_write": false, 00:34:12.289 "abort": true, 00:34:12.289 "seek_hole": false, 00:34:12.289 "seek_data": false, 00:34:12.289 "copy": true, 00:34:12.289 "nvme_iov_md": false 00:34:12.289 }, 00:34:12.289 "memory_domains": [ 00:34:12.289 { 00:34:12.289 "dma_device_id": "system", 00:34:12.289 "dma_device_type": 1 00:34:12.289 }, 00:34:12.289 { 00:34:12.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:12.289 "dma_device_type": 2 00:34:12.289 } 00:34:12.289 ], 00:34:12.289 "driver_specific": {} 00:34:12.289 } 00:34:12.289 ] 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:12.289 21:47:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:12.289 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:12.547 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:12.547 "name": "Existed_Raid", 00:34:12.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.547 "strip_size_kb": 64, 00:34:12.547 "state": "configuring", 00:34:12.547 "raid_level": "raid5f", 00:34:12.547 "superblock": false, 00:34:12.547 "num_base_bdevs": 4, 00:34:12.547 "num_base_bdevs_discovered": 3, 00:34:12.547 "num_base_bdevs_operational": 4, 00:34:12.547 "base_bdevs_list": [ 00:34:12.547 { 00:34:12.547 "name": "BaseBdev1", 00:34:12.547 "uuid": "2e12ea22-4287-4471-bb03-e3c1a77a57ae", 00:34:12.547 "is_configured": true, 00:34:12.547 "data_offset": 0, 00:34:12.547 "data_size": 65536 00:34:12.547 }, 00:34:12.547 { 00:34:12.547 "name": "BaseBdev2", 00:34:12.547 "uuid": "8f0c0780-6217-4f2a-8f07-85fe29f92c48", 00:34:12.547 "is_configured": true, 00:34:12.547 "data_offset": 0, 00:34:12.547 "data_size": 65536 00:34:12.547 }, 00:34:12.547 { 00:34:12.547 "name": "BaseBdev3", 00:34:12.547 "uuid": "975236b6-6e06-45ca-aa20-4878e2105bc2", 00:34:12.547 "is_configured": true, 00:34:12.547 "data_offset": 0, 00:34:12.547 "data_size": 65536 00:34:12.547 }, 00:34:12.547 { 00:34:12.547 "name": "BaseBdev4", 00:34:12.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.547 "is_configured": false, 00:34:12.547 "data_offset": 0, 00:34:12.547 "data_size": 0 00:34:12.547 } 00:34:12.547 ] 00:34:12.547 }' 00:34:12.547 21:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:12.547 21:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.113 21:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:13.371 [2024-07-15 21:47:46.637999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:13.371 [2024-07-15 21:47:46.638131] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000007580 00:34:13.371 [2024-07-15 21:47:46.638153] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:13.371 [2024-07-15 21:47:46.638301] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:34:13.371 [2024-07-15 21:47:46.645935] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:34:13.371 [2024-07-15 21:47:46.645998] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:34:13.371 [2024-07-15 21:47:46.646301] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:13.371 BaseBdev4 00:34:13.371 21:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:34:13.371 21:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:34:13.371 21:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:13.371 21:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:13.371 21:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:13.371 21:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:13.371 21:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:13.711 21:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:13.711 [ 00:34:13.711 { 00:34:13.711 "name": "BaseBdev4", 00:34:13.711 "aliases": [ 00:34:13.711 "1966624a-6edd-4fd4-b6e3-32e2303ad8b2" 00:34:13.711 ], 00:34:13.711 "product_name": "Malloc disk", 00:34:13.711 "block_size": 512, 00:34:13.711 "num_blocks": 65536, 00:34:13.711 "uuid": "1966624a-6edd-4fd4-b6e3-32e2303ad8b2", 00:34:13.711 "assigned_rate_limits": { 00:34:13.711 "rw_ios_per_sec": 0, 00:34:13.711 "rw_mbytes_per_sec": 0, 00:34:13.711 "r_mbytes_per_sec": 0, 00:34:13.711 "w_mbytes_per_sec": 0 00:34:13.711 }, 00:34:13.711 "claimed": true, 00:34:13.711 "claim_type": "exclusive_write", 00:34:13.711 "zoned": false, 00:34:13.711 "supported_io_types": { 00:34:13.711 "read": true, 00:34:13.711 "write": true, 00:34:13.711 "unmap": true, 00:34:13.711 "flush": true, 00:34:13.711 "reset": true, 00:34:13.711 "nvme_admin": false, 00:34:13.711 "nvme_io": false, 00:34:13.711 "nvme_io_md": false, 00:34:13.711 "write_zeroes": true, 00:34:13.711 "zcopy": true, 00:34:13.711 "get_zone_info": false, 00:34:13.711 "zone_management": false, 00:34:13.711 "zone_append": false, 00:34:13.711 "compare": false, 00:34:13.711 "compare_and_write": false, 00:34:13.711 "abort": true, 00:34:13.711 "seek_hole": false, 00:34:13.711 "seek_data": false, 00:34:13.711 "copy": true, 00:34:13.711 "nvme_iov_md": false 00:34:13.711 }, 00:34:13.711 "memory_domains": [ 00:34:13.711 { 00:34:13.711 "dma_device_id": "system", 00:34:13.711 "dma_device_type": 1 00:34:13.711 }, 00:34:13.711 { 00:34:13.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:13.711 "dma_device_type": 2 00:34:13.711 } 00:34:13.711 ], 00:34:13.711 "driver_specific": {} 00:34:13.711 } 00:34:13.711 ] 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:13.711 21:47:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:13.711 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:13.969 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:13.969 "name": "Existed_Raid", 00:34:13.970 "uuid": "deb2ac60-c596-45e2-92be-9bd2578db664", 00:34:13.970 "strip_size_kb": 64, 00:34:13.970 "state": "online", 00:34:13.970 "raid_level": "raid5f", 00:34:13.970 "superblock": false, 00:34:13.970 "num_base_bdevs": 4, 00:34:13.970 "num_base_bdevs_discovered": 4, 00:34:13.970 "num_base_bdevs_operational": 4, 00:34:13.970 "base_bdevs_list": [ 00:34:13.970 { 00:34:13.970 "name": "BaseBdev1", 00:34:13.970 "uuid": "2e12ea22-4287-4471-bb03-e3c1a77a57ae", 00:34:13.970 "is_configured": true, 00:34:13.970 "data_offset": 0, 00:34:13.970 "data_size": 65536 00:34:13.970 }, 00:34:13.970 { 00:34:13.970 "name": "BaseBdev2", 00:34:13.970 "uuid": "8f0c0780-6217-4f2a-8f07-85fe29f92c48", 00:34:13.970 "is_configured": true, 00:34:13.970 "data_offset": 0, 00:34:13.970 "data_size": 65536 00:34:13.970 }, 00:34:13.970 { 00:34:13.970 "name": "BaseBdev3", 00:34:13.970 "uuid": "975236b6-6e06-45ca-aa20-4878e2105bc2", 00:34:13.970 "is_configured": true, 00:34:13.970 "data_offset": 0, 00:34:13.970 "data_size": 65536 00:34:13.970 }, 00:34:13.970 { 00:34:13.970 "name": "BaseBdev4", 00:34:13.970 "uuid": "1966624a-6edd-4fd4-b6e3-32e2303ad8b2", 00:34:13.970 "is_configured": true, 00:34:13.970 "data_offset": 0, 00:34:13.970 "data_size": 65536 00:34:13.970 } 00:34:13.970 ] 00:34:13.970 }' 00:34:13.970 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:13.970 21:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.905 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:14.905 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:34:14.905 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:14.905 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:14.905 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:14.905 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:14.905 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:14.905 21:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:14.905 [2024-07-15 21:47:48.152547] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.905 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:14.905 "name": "Existed_Raid", 00:34:14.905 "aliases": [ 00:34:14.905 "deb2ac60-c596-45e2-92be-9bd2578db664" 00:34:14.905 ], 00:34:14.905 "product_name": "Raid Volume", 00:34:14.905 "block_size": 512, 00:34:14.905 "num_blocks": 196608, 00:34:14.905 "uuid": "deb2ac60-c596-45e2-92be-9bd2578db664", 00:34:14.905 "assigned_rate_limits": { 00:34:14.905 "rw_ios_per_sec": 0, 00:34:14.905 "rw_mbytes_per_sec": 0, 00:34:14.905 "r_mbytes_per_sec": 0, 00:34:14.905 "w_mbytes_per_sec": 0 00:34:14.905 }, 00:34:14.905 "claimed": false, 00:34:14.905 "zoned": false, 00:34:14.905 "supported_io_types": { 00:34:14.905 "read": true, 00:34:14.905 "write": true, 00:34:14.905 "unmap": false, 00:34:14.905 "flush": false, 00:34:14.905 "reset": true, 00:34:14.905 "nvme_admin": false, 00:34:14.905 "nvme_io": false, 00:34:14.905 "nvme_io_md": false, 00:34:14.905 "write_zeroes": true, 00:34:14.905 "zcopy": false, 00:34:14.905 "get_zone_info": false, 00:34:14.905 "zone_management": false, 00:34:14.905 "zone_append": false, 00:34:14.905 "compare": false, 00:34:14.905 "compare_and_write": false, 00:34:14.905 "abort": false, 00:34:14.905 "seek_hole": false, 00:34:14.905 "seek_data": false, 00:34:14.905 "copy": false, 00:34:14.905 "nvme_iov_md": false 00:34:14.905 }, 00:34:14.905 "driver_specific": { 00:34:14.905 "raid": { 00:34:14.905 "uuid": "deb2ac60-c596-45e2-92be-9bd2578db664", 00:34:14.905 "strip_size_kb": 64, 00:34:14.905 "state": "online", 00:34:14.905 "raid_level": "raid5f", 00:34:14.905 "superblock": false, 00:34:14.905 "num_base_bdevs": 4, 00:34:14.905 "num_base_bdevs_discovered": 4, 00:34:14.905 "num_base_bdevs_operational": 4, 00:34:14.905 "base_bdevs_list": [ 00:34:14.905 { 00:34:14.905 "name": "BaseBdev1", 00:34:14.905 "uuid": "2e12ea22-4287-4471-bb03-e3c1a77a57ae", 00:34:14.905 "is_configured": true, 00:34:14.905 "data_offset": 0, 00:34:14.905 "data_size": 65536 00:34:14.905 }, 00:34:14.905 { 00:34:14.905 "name": "BaseBdev2", 00:34:14.905 "uuid": "8f0c0780-6217-4f2a-8f07-85fe29f92c48", 00:34:14.905 "is_configured": true, 00:34:14.905 "data_offset": 0, 00:34:14.905 "data_size": 65536 00:34:14.905 }, 00:34:14.905 { 00:34:14.905 "name": "BaseBdev3", 00:34:14.905 "uuid": "975236b6-6e06-45ca-aa20-4878e2105bc2", 00:34:14.905 "is_configured": true, 00:34:14.905 "data_offset": 0, 00:34:14.905 "data_size": 65536 00:34:14.905 }, 00:34:14.905 { 00:34:14.905 "name": "BaseBdev4", 00:34:14.905 "uuid": "1966624a-6edd-4fd4-b6e3-32e2303ad8b2", 00:34:14.905 "is_configured": true, 00:34:14.905 "data_offset": 0, 00:34:14.905 "data_size": 65536 00:34:14.905 } 
00:34:14.905 ] 00:34:14.905 } 00:34:14.905 } 00:34:14.905 }' 00:34:14.905 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:14.905 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:14.905 BaseBdev2 00:34:14.905 BaseBdev3 00:34:14.905 BaseBdev4' 00:34:14.905 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:14.905 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:14.905 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:15.164 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:15.164 "name": "BaseBdev1", 00:34:15.164 "aliases": [ 00:34:15.164 "2e12ea22-4287-4471-bb03-e3c1a77a57ae" 00:34:15.164 ], 00:34:15.164 "product_name": "Malloc disk", 00:34:15.164 "block_size": 512, 00:34:15.164 "num_blocks": 65536, 00:34:15.164 "uuid": "2e12ea22-4287-4471-bb03-e3c1a77a57ae", 00:34:15.164 "assigned_rate_limits": { 00:34:15.164 "rw_ios_per_sec": 0, 00:34:15.164 "rw_mbytes_per_sec": 0, 00:34:15.164 "r_mbytes_per_sec": 0, 00:34:15.164 "w_mbytes_per_sec": 0 00:34:15.164 }, 00:34:15.164 "claimed": true, 00:34:15.164 "claim_type": "exclusive_write", 00:34:15.164 "zoned": false, 00:34:15.164 "supported_io_types": { 00:34:15.164 "read": true, 00:34:15.164 "write": true, 00:34:15.164 "unmap": true, 00:34:15.164 "flush": true, 00:34:15.164 "reset": true, 00:34:15.164 "nvme_admin": false, 00:34:15.164 "nvme_io": false, 00:34:15.164 "nvme_io_md": false, 00:34:15.164 "write_zeroes": true, 00:34:15.164 "zcopy": true, 00:34:15.164 "get_zone_info": false, 00:34:15.164 "zone_management": false, 00:34:15.164 "zone_append": false, 00:34:15.164 "compare": false, 00:34:15.164 "compare_and_write": false, 00:34:15.164 "abort": true, 00:34:15.164 "seek_hole": false, 00:34:15.164 "seek_data": false, 00:34:15.164 "copy": true, 00:34:15.164 "nvme_iov_md": false 00:34:15.164 }, 00:34:15.164 "memory_domains": [ 00:34:15.164 { 00:34:15.164 "dma_device_id": "system", 00:34:15.164 "dma_device_type": 1 00:34:15.164 }, 00:34:15.164 { 00:34:15.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:15.164 "dma_device_type": 2 00:34:15.164 } 00:34:15.164 ], 00:34:15.164 "driver_specific": {} 00:34:15.164 }' 00:34:15.164 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.164 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.423 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:15.423 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.423 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.423 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:15.423 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.423 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.680 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:15.680 21:47:48 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:15.680 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:15.680 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:15.680 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:15.680 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:15.680 21:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:15.938 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:15.938 "name": "BaseBdev2", 00:34:15.938 "aliases": [ 00:34:15.938 "8f0c0780-6217-4f2a-8f07-85fe29f92c48" 00:34:15.938 ], 00:34:15.938 "product_name": "Malloc disk", 00:34:15.938 "block_size": 512, 00:34:15.938 "num_blocks": 65536, 00:34:15.938 "uuid": "8f0c0780-6217-4f2a-8f07-85fe29f92c48", 00:34:15.938 "assigned_rate_limits": { 00:34:15.938 "rw_ios_per_sec": 0, 00:34:15.938 "rw_mbytes_per_sec": 0, 00:34:15.938 "r_mbytes_per_sec": 0, 00:34:15.938 "w_mbytes_per_sec": 0 00:34:15.938 }, 00:34:15.938 "claimed": true, 00:34:15.938 "claim_type": "exclusive_write", 00:34:15.938 "zoned": false, 00:34:15.938 "supported_io_types": { 00:34:15.938 "read": true, 00:34:15.938 "write": true, 00:34:15.938 "unmap": true, 00:34:15.938 "flush": true, 00:34:15.938 "reset": true, 00:34:15.938 "nvme_admin": false, 00:34:15.938 "nvme_io": false, 00:34:15.938 "nvme_io_md": false, 00:34:15.938 "write_zeroes": true, 00:34:15.938 "zcopy": true, 00:34:15.938 "get_zone_info": false, 00:34:15.938 "zone_management": false, 00:34:15.938 "zone_append": false, 00:34:15.938 "compare": false, 00:34:15.938 "compare_and_write": false, 00:34:15.938 "abort": true, 00:34:15.938 "seek_hole": false, 00:34:15.938 "seek_data": false, 00:34:15.938 "copy": true, 00:34:15.938 "nvme_iov_md": false 00:34:15.938 }, 00:34:15.938 "memory_domains": [ 00:34:15.938 { 00:34:15.938 "dma_device_id": "system", 00:34:15.938 "dma_device_type": 1 00:34:15.938 }, 00:34:15.938 { 00:34:15.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:15.938 "dma_device_type": 2 00:34:15.938 } 00:34:15.938 ], 00:34:15.938 "driver_specific": {} 00:34:15.938 }' 00:34:15.938 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.938 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.938 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:15.938 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.938 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:16.196 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:16.196 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:16.196 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:16.196 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:16.196 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.196 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.455 21:47:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:16.455 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:16.455 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:16.455 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:16.714 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:16.714 "name": "BaseBdev3", 00:34:16.714 "aliases": [ 00:34:16.714 "975236b6-6e06-45ca-aa20-4878e2105bc2" 00:34:16.714 ], 00:34:16.714 "product_name": "Malloc disk", 00:34:16.714 "block_size": 512, 00:34:16.714 "num_blocks": 65536, 00:34:16.714 "uuid": "975236b6-6e06-45ca-aa20-4878e2105bc2", 00:34:16.714 "assigned_rate_limits": { 00:34:16.714 "rw_ios_per_sec": 0, 00:34:16.714 "rw_mbytes_per_sec": 0, 00:34:16.714 "r_mbytes_per_sec": 0, 00:34:16.714 "w_mbytes_per_sec": 0 00:34:16.714 }, 00:34:16.714 "claimed": true, 00:34:16.714 "claim_type": "exclusive_write", 00:34:16.714 "zoned": false, 00:34:16.714 "supported_io_types": { 00:34:16.714 "read": true, 00:34:16.714 "write": true, 00:34:16.714 "unmap": true, 00:34:16.714 "flush": true, 00:34:16.714 "reset": true, 00:34:16.714 "nvme_admin": false, 00:34:16.714 "nvme_io": false, 00:34:16.714 "nvme_io_md": false, 00:34:16.714 "write_zeroes": true, 00:34:16.714 "zcopy": true, 00:34:16.714 "get_zone_info": false, 00:34:16.714 "zone_management": false, 00:34:16.714 "zone_append": false, 00:34:16.714 "compare": false, 00:34:16.714 "compare_and_write": false, 00:34:16.714 "abort": true, 00:34:16.714 "seek_hole": false, 00:34:16.714 "seek_data": false, 00:34:16.714 "copy": true, 00:34:16.714 "nvme_iov_md": false 00:34:16.714 }, 00:34:16.714 "memory_domains": [ 00:34:16.714 { 00:34:16.714 "dma_device_id": "system", 00:34:16.714 "dma_device_type": 1 00:34:16.714 }, 00:34:16.714 { 00:34:16.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.714 "dma_device_type": 2 00:34:16.714 } 00:34:16.714 ], 00:34:16.714 "driver_specific": {} 00:34:16.714 }' 00:34:16.714 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:16.714 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:16.714 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:16.714 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:16.714 21:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:16.714 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:16.714 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:16.972 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:16.972 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:16.972 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.972 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.972 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:16.972 21:47:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:16.972 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:16.972 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:17.230 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:17.230 "name": "BaseBdev4", 00:34:17.230 "aliases": [ 00:34:17.230 "1966624a-6edd-4fd4-b6e3-32e2303ad8b2" 00:34:17.230 ], 00:34:17.230 "product_name": "Malloc disk", 00:34:17.230 "block_size": 512, 00:34:17.230 "num_blocks": 65536, 00:34:17.230 "uuid": "1966624a-6edd-4fd4-b6e3-32e2303ad8b2", 00:34:17.230 "assigned_rate_limits": { 00:34:17.230 "rw_ios_per_sec": 0, 00:34:17.230 "rw_mbytes_per_sec": 0, 00:34:17.230 "r_mbytes_per_sec": 0, 00:34:17.230 "w_mbytes_per_sec": 0 00:34:17.230 }, 00:34:17.230 "claimed": true, 00:34:17.230 "claim_type": "exclusive_write", 00:34:17.230 "zoned": false, 00:34:17.230 "supported_io_types": { 00:34:17.230 "read": true, 00:34:17.230 "write": true, 00:34:17.230 "unmap": true, 00:34:17.230 "flush": true, 00:34:17.230 "reset": true, 00:34:17.230 "nvme_admin": false, 00:34:17.230 "nvme_io": false, 00:34:17.230 "nvme_io_md": false, 00:34:17.230 "write_zeroes": true, 00:34:17.230 "zcopy": true, 00:34:17.230 "get_zone_info": false, 00:34:17.230 "zone_management": false, 00:34:17.230 "zone_append": false, 00:34:17.230 "compare": false, 00:34:17.230 "compare_and_write": false, 00:34:17.230 "abort": true, 00:34:17.230 "seek_hole": false, 00:34:17.230 "seek_data": false, 00:34:17.230 "copy": true, 00:34:17.230 "nvme_iov_md": false 00:34:17.230 }, 00:34:17.230 "memory_domains": [ 00:34:17.230 { 00:34:17.230 "dma_device_id": "system", 00:34:17.230 "dma_device_type": 1 00:34:17.230 }, 00:34:17.230 { 00:34:17.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.230 "dma_device_type": 2 00:34:17.230 } 00:34:17.230 ], 00:34:17.230 "driver_specific": {} 00:34:17.231 }' 00:34:17.231 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:17.231 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:17.231 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:17.231 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:17.231 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:17.489 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:17.489 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:17.489 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:17.489 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:17.489 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:17.489 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:17.748 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:17.748 21:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:17.748 [2024-07-15 21:47:51.055492] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.006 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:18.264 21:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:18.264 "name": "Existed_Raid", 00:34:18.264 "uuid": "deb2ac60-c596-45e2-92be-9bd2578db664", 00:34:18.264 "strip_size_kb": 64, 00:34:18.264 "state": "online", 00:34:18.264 "raid_level": "raid5f", 00:34:18.264 "superblock": false, 00:34:18.264 "num_base_bdevs": 4, 00:34:18.264 "num_base_bdevs_discovered": 3, 00:34:18.264 "num_base_bdevs_operational": 3, 00:34:18.264 "base_bdevs_list": [ 00:34:18.264 { 00:34:18.264 "name": null, 00:34:18.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.264 "is_configured": false, 00:34:18.264 "data_offset": 0, 00:34:18.264 "data_size": 65536 00:34:18.264 }, 00:34:18.264 { 00:34:18.264 "name": "BaseBdev2", 00:34:18.264 "uuid": "8f0c0780-6217-4f2a-8f07-85fe29f92c48", 00:34:18.264 "is_configured": true, 00:34:18.264 "data_offset": 0, 00:34:18.264 "data_size": 65536 00:34:18.264 }, 00:34:18.264 { 00:34:18.264 "name": "BaseBdev3", 00:34:18.264 "uuid": "975236b6-6e06-45ca-aa20-4878e2105bc2", 00:34:18.264 "is_configured": true, 00:34:18.264 "data_offset": 0, 00:34:18.264 "data_size": 65536 00:34:18.264 }, 00:34:18.264 { 00:34:18.264 "name": "BaseBdev4", 00:34:18.264 "uuid": "1966624a-6edd-4fd4-b6e3-32e2303ad8b2", 00:34:18.264 "is_configured": true, 00:34:18.264 "data_offset": 0, 00:34:18.264 "data_size": 65536 00:34:18.264 } 00:34:18.264 ] 00:34:18.264 }' 00:34:18.264 21:47:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:18.264 21:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.830 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:34:18.830 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:18.830 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.830 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:19.089 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:19.089 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:19.089 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:19.089 [2024-07-15 21:47:52.428458] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:19.089 [2024-07-15 21:47:52.428634] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:19.348 [2024-07-15 21:47:52.527691] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:19.348 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:19.348 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:19.348 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:19.348 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.607 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:19.607 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:19.607 21:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:19.607 [2024-07-15 21:47:52.947139] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:19.867 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:19.867 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:19.867 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.867 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:20.126 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:20.126 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:20.126 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:34:20.384 [2024-07-15 21:47:53.538853] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev4 00:34:20.384 [2024-07-15 21:47:53.538989] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:34:20.384 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:20.384 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:20.384 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:34:20.384 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.641 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:34:20.641 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:34:20.641 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:34:20.641 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:34:20.641 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:20.641 21:47:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:20.898 BaseBdev2 00:34:20.898 21:47:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:34:20.898 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:20.898 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:20.898 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:20.898 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:20.898 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:20.898 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:21.156 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:21.415 [ 00:34:21.415 { 00:34:21.415 "name": "BaseBdev2", 00:34:21.415 "aliases": [ 00:34:21.415 "86c1a4d0-0979-44dd-b626-589cd6ee504f" 00:34:21.415 ], 00:34:21.415 "product_name": "Malloc disk", 00:34:21.415 "block_size": 512, 00:34:21.415 "num_blocks": 65536, 00:34:21.415 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:21.415 "assigned_rate_limits": { 00:34:21.415 "rw_ios_per_sec": 0, 00:34:21.415 "rw_mbytes_per_sec": 0, 00:34:21.415 "r_mbytes_per_sec": 0, 00:34:21.415 "w_mbytes_per_sec": 0 00:34:21.415 }, 00:34:21.415 "claimed": false, 00:34:21.415 "zoned": false, 00:34:21.415 "supported_io_types": { 00:34:21.415 "read": true, 00:34:21.415 "write": true, 00:34:21.415 "unmap": true, 00:34:21.415 "flush": true, 00:34:21.415 "reset": true, 00:34:21.415 "nvme_admin": false, 00:34:21.415 "nvme_io": false, 00:34:21.415 "nvme_io_md": false, 00:34:21.415 "write_zeroes": true, 00:34:21.415 "zcopy": true, 00:34:21.415 "get_zone_info": false, 00:34:21.415 "zone_management": false, 00:34:21.415 "zone_append": false, 00:34:21.415 
"compare": false, 00:34:21.415 "compare_and_write": false, 00:34:21.415 "abort": true, 00:34:21.415 "seek_hole": false, 00:34:21.415 "seek_data": false, 00:34:21.415 "copy": true, 00:34:21.415 "nvme_iov_md": false 00:34:21.415 }, 00:34:21.415 "memory_domains": [ 00:34:21.415 { 00:34:21.415 "dma_device_id": "system", 00:34:21.415 "dma_device_type": 1 00:34:21.415 }, 00:34:21.415 { 00:34:21.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:21.415 "dma_device_type": 2 00:34:21.415 } 00:34:21.415 ], 00:34:21.415 "driver_specific": {} 00:34:21.415 } 00:34:21.415 ] 00:34:21.415 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:21.415 21:47:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:21.415 21:47:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:21.415 21:47:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:21.675 BaseBdev3 00:34:21.675 21:47:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:34:21.675 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:21.675 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:21.675 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:21.675 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:21.675 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:21.675 21:47:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:21.933 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:22.191 [ 00:34:22.191 { 00:34:22.191 "name": "BaseBdev3", 00:34:22.191 "aliases": [ 00:34:22.191 "dba1994c-aa26-47d9-85a5-68e80b2d891b" 00:34:22.191 ], 00:34:22.191 "product_name": "Malloc disk", 00:34:22.191 "block_size": 512, 00:34:22.191 "num_blocks": 65536, 00:34:22.191 "uuid": "dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:22.191 "assigned_rate_limits": { 00:34:22.191 "rw_ios_per_sec": 0, 00:34:22.191 "rw_mbytes_per_sec": 0, 00:34:22.191 "r_mbytes_per_sec": 0, 00:34:22.191 "w_mbytes_per_sec": 0 00:34:22.191 }, 00:34:22.191 "claimed": false, 00:34:22.191 "zoned": false, 00:34:22.191 "supported_io_types": { 00:34:22.191 "read": true, 00:34:22.191 "write": true, 00:34:22.191 "unmap": true, 00:34:22.191 "flush": true, 00:34:22.191 "reset": true, 00:34:22.191 "nvme_admin": false, 00:34:22.191 "nvme_io": false, 00:34:22.191 "nvme_io_md": false, 00:34:22.191 "write_zeroes": true, 00:34:22.191 "zcopy": true, 00:34:22.191 "get_zone_info": false, 00:34:22.191 "zone_management": false, 00:34:22.191 "zone_append": false, 00:34:22.191 "compare": false, 00:34:22.191 "compare_and_write": false, 00:34:22.191 "abort": true, 00:34:22.191 "seek_hole": false, 00:34:22.191 "seek_data": false, 00:34:22.191 "copy": true, 00:34:22.191 "nvme_iov_md": false 00:34:22.191 }, 00:34:22.191 "memory_domains": [ 00:34:22.191 { 00:34:22.191 "dma_device_id": "system", 
00:34:22.191 "dma_device_type": 1 00:34:22.191 }, 00:34:22.191 { 00:34:22.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:22.191 "dma_device_type": 2 00:34:22.191 } 00:34:22.191 ], 00:34:22.191 "driver_specific": {} 00:34:22.191 } 00:34:22.191 ] 00:34:22.191 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:22.191 21:47:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:22.191 21:47:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:22.191 21:47:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:22.449 BaseBdev4 00:34:22.449 21:47:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:34:22.449 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:34:22.449 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:22.449 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:22.449 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:22.449 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:22.449 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:22.708 21:47:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:22.970 [ 00:34:22.970 { 00:34:22.970 "name": "BaseBdev4", 00:34:22.970 "aliases": [ 00:34:22.970 "8b0d6da7-9a98-413d-be1b-05cf584ea4ae" 00:34:22.970 ], 00:34:22.970 "product_name": "Malloc disk", 00:34:22.970 "block_size": 512, 00:34:22.970 "num_blocks": 65536, 00:34:22.970 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:22.970 "assigned_rate_limits": { 00:34:22.970 "rw_ios_per_sec": 0, 00:34:22.970 "rw_mbytes_per_sec": 0, 00:34:22.970 "r_mbytes_per_sec": 0, 00:34:22.970 "w_mbytes_per_sec": 0 00:34:22.970 }, 00:34:22.970 "claimed": false, 00:34:22.970 "zoned": false, 00:34:22.970 "supported_io_types": { 00:34:22.970 "read": true, 00:34:22.970 "write": true, 00:34:22.970 "unmap": true, 00:34:22.970 "flush": true, 00:34:22.970 "reset": true, 00:34:22.970 "nvme_admin": false, 00:34:22.970 "nvme_io": false, 00:34:22.970 "nvme_io_md": false, 00:34:22.970 "write_zeroes": true, 00:34:22.970 "zcopy": true, 00:34:22.970 "get_zone_info": false, 00:34:22.970 "zone_management": false, 00:34:22.970 "zone_append": false, 00:34:22.970 "compare": false, 00:34:22.970 "compare_and_write": false, 00:34:22.970 "abort": true, 00:34:22.970 "seek_hole": false, 00:34:22.970 "seek_data": false, 00:34:22.970 "copy": true, 00:34:22.970 "nvme_iov_md": false 00:34:22.970 }, 00:34:22.970 "memory_domains": [ 00:34:22.970 { 00:34:22.970 "dma_device_id": "system", 00:34:22.970 "dma_device_type": 1 00:34:22.970 }, 00:34:22.970 { 00:34:22.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:22.970 "dma_device_type": 2 00:34:22.970 } 00:34:22.970 ], 00:34:22.970 "driver_specific": {} 00:34:22.970 } 00:34:22.970 ] 00:34:22.970 21:47:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:34:22.970 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:22.970 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:22.970 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:22.970 [2024-07-15 21:47:56.344606] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:22.970 [2024-07-15 21:47:56.344766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:22.970 [2024-07-15 21:47:56.344819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:22.970 [2024-07-15 21:47:56.346705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:22.970 [2024-07-15 21:47:56.346826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:23.238 "name": "Existed_Raid", 00:34:23.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.238 "strip_size_kb": 64, 00:34:23.238 "state": "configuring", 00:34:23.238 "raid_level": "raid5f", 00:34:23.238 "superblock": false, 00:34:23.238 "num_base_bdevs": 4, 00:34:23.238 "num_base_bdevs_discovered": 3, 00:34:23.238 "num_base_bdevs_operational": 4, 00:34:23.238 "base_bdevs_list": [ 00:34:23.238 { 00:34:23.238 "name": "BaseBdev1", 00:34:23.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.238 "is_configured": false, 00:34:23.238 "data_offset": 0, 00:34:23.238 "data_size": 0 00:34:23.238 }, 00:34:23.238 { 00:34:23.238 "name": "BaseBdev2", 00:34:23.238 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:23.238 "is_configured": true, 00:34:23.238 "data_offset": 0, 
00:34:23.238 "data_size": 65536 00:34:23.238 }, 00:34:23.238 { 00:34:23.238 "name": "BaseBdev3", 00:34:23.238 "uuid": "dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:23.238 "is_configured": true, 00:34:23.238 "data_offset": 0, 00:34:23.238 "data_size": 65536 00:34:23.238 }, 00:34:23.238 { 00:34:23.238 "name": "BaseBdev4", 00:34:23.238 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:23.238 "is_configured": true, 00:34:23.238 "data_offset": 0, 00:34:23.238 "data_size": 65536 00:34:23.238 } 00:34:23.238 ] 00:34:23.238 }' 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:23.238 21:47:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:34:24.173 [2024-07-15 21:47:57.482709] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:24.173 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.432 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:24.432 "name": "Existed_Raid", 00:34:24.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:24.432 "strip_size_kb": 64, 00:34:24.432 "state": "configuring", 00:34:24.432 "raid_level": "raid5f", 00:34:24.432 "superblock": false, 00:34:24.432 "num_base_bdevs": 4, 00:34:24.432 "num_base_bdevs_discovered": 2, 00:34:24.432 "num_base_bdevs_operational": 4, 00:34:24.432 "base_bdevs_list": [ 00:34:24.432 { 00:34:24.432 "name": "BaseBdev1", 00:34:24.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:24.432 "is_configured": false, 00:34:24.432 "data_offset": 0, 00:34:24.432 "data_size": 0 00:34:24.432 }, 00:34:24.432 { 00:34:24.432 "name": null, 00:34:24.432 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:24.432 "is_configured": false, 00:34:24.432 "data_offset": 0, 00:34:24.432 "data_size": 65536 00:34:24.432 }, 00:34:24.432 { 00:34:24.432 "name": "BaseBdev3", 00:34:24.432 "uuid": 
"dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:24.432 "is_configured": true, 00:34:24.432 "data_offset": 0, 00:34:24.432 "data_size": 65536 00:34:24.432 }, 00:34:24.432 { 00:34:24.432 "name": "BaseBdev4", 00:34:24.432 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:24.432 "is_configured": true, 00:34:24.432 "data_offset": 0, 00:34:24.432 "data_size": 65536 00:34:24.432 } 00:34:24.432 ] 00:34:24.432 }' 00:34:24.432 21:47:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:24.432 21:47:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:25.367 21:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.367 21:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:25.367 21:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:34:25.367 21:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:25.657 [2024-07-15 21:47:58.917702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:25.657 BaseBdev1 00:34:25.657 21:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:34:25.657 21:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:25.657 21:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:25.657 21:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:25.657 21:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:25.657 21:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:25.657 21:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:25.940 21:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:26.199 [ 00:34:26.199 { 00:34:26.199 "name": "BaseBdev1", 00:34:26.199 "aliases": [ 00:34:26.199 "d6d58c80-a3ff-4c1a-b52d-542f5963ce25" 00:34:26.199 ], 00:34:26.199 "product_name": "Malloc disk", 00:34:26.199 "block_size": 512, 00:34:26.199 "num_blocks": 65536, 00:34:26.199 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:26.199 "assigned_rate_limits": { 00:34:26.199 "rw_ios_per_sec": 0, 00:34:26.199 "rw_mbytes_per_sec": 0, 00:34:26.199 "r_mbytes_per_sec": 0, 00:34:26.199 "w_mbytes_per_sec": 0 00:34:26.199 }, 00:34:26.199 "claimed": true, 00:34:26.199 "claim_type": "exclusive_write", 00:34:26.199 "zoned": false, 00:34:26.199 "supported_io_types": { 00:34:26.199 "read": true, 00:34:26.199 "write": true, 00:34:26.199 "unmap": true, 00:34:26.199 "flush": true, 00:34:26.199 "reset": true, 00:34:26.199 "nvme_admin": false, 00:34:26.199 "nvme_io": false, 00:34:26.199 "nvme_io_md": false, 00:34:26.199 "write_zeroes": true, 00:34:26.199 "zcopy": true, 00:34:26.199 "get_zone_info": false, 00:34:26.199 "zone_management": false, 00:34:26.199 "zone_append": false, 
00:34:26.199 "compare": false, 00:34:26.199 "compare_and_write": false, 00:34:26.199 "abort": true, 00:34:26.199 "seek_hole": false, 00:34:26.199 "seek_data": false, 00:34:26.199 "copy": true, 00:34:26.199 "nvme_iov_md": false 00:34:26.199 }, 00:34:26.199 "memory_domains": [ 00:34:26.199 { 00:34:26.199 "dma_device_id": "system", 00:34:26.199 "dma_device_type": 1 00:34:26.199 }, 00:34:26.199 { 00:34:26.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:26.199 "dma_device_type": 2 00:34:26.199 } 00:34:26.199 ], 00:34:26.199 "driver_specific": {} 00:34:26.199 } 00:34:26.199 ] 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:26.199 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.458 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:26.458 "name": "Existed_Raid", 00:34:26.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.458 "strip_size_kb": 64, 00:34:26.458 "state": "configuring", 00:34:26.458 "raid_level": "raid5f", 00:34:26.458 "superblock": false, 00:34:26.458 "num_base_bdevs": 4, 00:34:26.458 "num_base_bdevs_discovered": 3, 00:34:26.458 "num_base_bdevs_operational": 4, 00:34:26.458 "base_bdevs_list": [ 00:34:26.458 { 00:34:26.458 "name": "BaseBdev1", 00:34:26.458 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:26.458 "is_configured": true, 00:34:26.458 "data_offset": 0, 00:34:26.458 "data_size": 65536 00:34:26.458 }, 00:34:26.458 { 00:34:26.458 "name": null, 00:34:26.458 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:26.458 "is_configured": false, 00:34:26.458 "data_offset": 0, 00:34:26.458 "data_size": 65536 00:34:26.458 }, 00:34:26.458 { 00:34:26.458 "name": "BaseBdev3", 00:34:26.458 "uuid": "dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:26.458 "is_configured": true, 00:34:26.458 "data_offset": 0, 00:34:26.458 "data_size": 65536 00:34:26.458 }, 00:34:26.458 { 00:34:26.458 "name": "BaseBdev4", 00:34:26.458 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:26.458 "is_configured": true, 00:34:26.458 "data_offset": 0, 00:34:26.458 
"data_size": 65536 00:34:26.458 } 00:34:26.458 ] 00:34:26.458 }' 00:34:26.458 21:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:26.458 21:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:27.025 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.025 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:27.283 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:34:27.283 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:34:27.541 [2024-07-15 21:48:00.746790] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:27.541 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:27.541 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:27.541 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:27.541 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:27.541 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:27.541 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:27.541 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:27.542 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:27.542 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:27.542 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:27.542 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.542 21:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:27.799 21:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:27.799 "name": "Existed_Raid", 00:34:27.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.799 "strip_size_kb": 64, 00:34:27.799 "state": "configuring", 00:34:27.799 "raid_level": "raid5f", 00:34:27.799 "superblock": false, 00:34:27.799 "num_base_bdevs": 4, 00:34:27.799 "num_base_bdevs_discovered": 2, 00:34:27.800 "num_base_bdevs_operational": 4, 00:34:27.800 "base_bdevs_list": [ 00:34:27.800 { 00:34:27.800 "name": "BaseBdev1", 00:34:27.800 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:27.800 "is_configured": true, 00:34:27.800 "data_offset": 0, 00:34:27.800 "data_size": 65536 00:34:27.800 }, 00:34:27.800 { 00:34:27.800 "name": null, 00:34:27.800 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:27.800 "is_configured": false, 00:34:27.800 "data_offset": 0, 00:34:27.800 "data_size": 65536 00:34:27.800 }, 00:34:27.800 { 00:34:27.800 "name": null, 00:34:27.800 "uuid": 
"dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:27.800 "is_configured": false, 00:34:27.800 "data_offset": 0, 00:34:27.800 "data_size": 65536 00:34:27.800 }, 00:34:27.800 { 00:34:27.800 "name": "BaseBdev4", 00:34:27.800 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:27.800 "is_configured": true, 00:34:27.800 "data_offset": 0, 00:34:27.800 "data_size": 65536 00:34:27.800 } 00:34:27.800 ] 00:34:27.800 }' 00:34:27.800 21:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:27.800 21:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:28.434 21:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.434 21:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:28.693 21:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:34:28.693 21:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:28.951 [2024-07-15 21:48:02.120535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:28.951 "name": "Existed_Raid", 00:34:28.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.951 "strip_size_kb": 64, 00:34:28.951 "state": "configuring", 00:34:28.951 "raid_level": "raid5f", 00:34:28.951 "superblock": false, 00:34:28.951 "num_base_bdevs": 4, 00:34:28.951 "num_base_bdevs_discovered": 3, 00:34:28.951 "num_base_bdevs_operational": 4, 00:34:28.951 "base_bdevs_list": [ 00:34:28.951 { 00:34:28.951 "name": "BaseBdev1", 00:34:28.951 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:28.951 "is_configured": true, 00:34:28.951 
"data_offset": 0, 00:34:28.951 "data_size": 65536 00:34:28.951 }, 00:34:28.951 { 00:34:28.951 "name": null, 00:34:28.951 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:28.951 "is_configured": false, 00:34:28.951 "data_offset": 0, 00:34:28.951 "data_size": 65536 00:34:28.951 }, 00:34:28.951 { 00:34:28.951 "name": "BaseBdev3", 00:34:28.951 "uuid": "dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:28.951 "is_configured": true, 00:34:28.951 "data_offset": 0, 00:34:28.951 "data_size": 65536 00:34:28.951 }, 00:34:28.951 { 00:34:28.951 "name": "BaseBdev4", 00:34:28.951 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:28.951 "is_configured": true, 00:34:28.951 "data_offset": 0, 00:34:28.951 "data_size": 65536 00:34:28.951 } 00:34:28.951 ] 00:34:28.951 }' 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:28.951 21:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:29.886 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.886 21:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:29.886 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:34:29.886 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:30.143 [2024-07-15 21:48:03.346470] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.143 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:30.401 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:30.401 "name": "Existed_Raid", 00:34:30.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:30.401 "strip_size_kb": 64, 00:34:30.401 "state": "configuring", 00:34:30.401 "raid_level": "raid5f", 00:34:30.401 "superblock": false, 00:34:30.401 
"num_base_bdevs": 4, 00:34:30.401 "num_base_bdevs_discovered": 2, 00:34:30.401 "num_base_bdevs_operational": 4, 00:34:30.401 "base_bdevs_list": [ 00:34:30.401 { 00:34:30.401 "name": null, 00:34:30.401 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:30.401 "is_configured": false, 00:34:30.401 "data_offset": 0, 00:34:30.401 "data_size": 65536 00:34:30.401 }, 00:34:30.401 { 00:34:30.401 "name": null, 00:34:30.401 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:30.401 "is_configured": false, 00:34:30.401 "data_offset": 0, 00:34:30.401 "data_size": 65536 00:34:30.401 }, 00:34:30.401 { 00:34:30.401 "name": "BaseBdev3", 00:34:30.401 "uuid": "dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:30.401 "is_configured": true, 00:34:30.401 "data_offset": 0, 00:34:30.401 "data_size": 65536 00:34:30.401 }, 00:34:30.401 { 00:34:30.401 "name": "BaseBdev4", 00:34:30.401 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:30.401 "is_configured": true, 00:34:30.401 "data_offset": 0, 00:34:30.401 "data_size": 65536 00:34:30.401 } 00:34:30.401 ] 00:34:30.401 }' 00:34:30.401 21:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:30.401 21:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:30.968 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.968 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:31.281 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:34:31.281 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:31.541 [2024-07-15 21:48:04.666960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:31.541 21:48:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:31.541 "name": "Existed_Raid", 00:34:31.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:31.541 "strip_size_kb": 64, 00:34:31.541 "state": "configuring", 00:34:31.541 "raid_level": "raid5f", 00:34:31.541 "superblock": false, 00:34:31.541 "num_base_bdevs": 4, 00:34:31.541 "num_base_bdevs_discovered": 3, 00:34:31.541 "num_base_bdevs_operational": 4, 00:34:31.541 "base_bdevs_list": [ 00:34:31.541 { 00:34:31.541 "name": null, 00:34:31.541 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:31.541 "is_configured": false, 00:34:31.541 "data_offset": 0, 00:34:31.541 "data_size": 65536 00:34:31.541 }, 00:34:31.541 { 00:34:31.541 "name": "BaseBdev2", 00:34:31.541 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:31.541 "is_configured": true, 00:34:31.541 "data_offset": 0, 00:34:31.541 "data_size": 65536 00:34:31.541 }, 00:34:31.541 { 00:34:31.541 "name": "BaseBdev3", 00:34:31.541 "uuid": "dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:31.541 "is_configured": true, 00:34:31.541 "data_offset": 0, 00:34:31.541 "data_size": 65536 00:34:31.541 }, 00:34:31.541 { 00:34:31.541 "name": "BaseBdev4", 00:34:31.541 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:31.541 "is_configured": true, 00:34:31.541 "data_offset": 0, 00:34:31.541 "data_size": 65536 00:34:31.541 } 00:34:31.541 ] 00:34:31.541 }' 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:31.541 21:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:32.477 21:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.477 21:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:32.477 21:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:34:32.477 21:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.477 21:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:32.735 21:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d6d58c80-a3ff-4c1a-b52d-542f5963ce25 00:34:32.994 [2024-07-15 21:48:06.133532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:32.994 [2024-07-15 21:48:06.133654] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:34:32.994 [2024-07-15 21:48:06.133674] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:32.994 [2024-07-15 21:48:06.133789] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:32.994 [2024-07-15 21:48:06.140972] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:34:32.994 [2024-07-15 21:48:06.141034] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:34:32.994 [2024-07-15 21:48:06.141320] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:32.994 NewBaseBdev 00:34:32.994 21:48:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:34:32.994 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:34:32.994 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:32.994 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:32.994 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:32.994 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:32.994 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:32.994 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:33.253 [ 00:34:33.253 { 00:34:33.253 "name": "NewBaseBdev", 00:34:33.253 "aliases": [ 00:34:33.253 "d6d58c80-a3ff-4c1a-b52d-542f5963ce25" 00:34:33.253 ], 00:34:33.253 "product_name": "Malloc disk", 00:34:33.253 "block_size": 512, 00:34:33.253 "num_blocks": 65536, 00:34:33.253 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:33.253 "assigned_rate_limits": { 00:34:33.253 "rw_ios_per_sec": 0, 00:34:33.253 "rw_mbytes_per_sec": 0, 00:34:33.253 "r_mbytes_per_sec": 0, 00:34:33.253 "w_mbytes_per_sec": 0 00:34:33.253 }, 00:34:33.253 "claimed": true, 00:34:33.253 "claim_type": "exclusive_write", 00:34:33.253 "zoned": false, 00:34:33.253 "supported_io_types": { 00:34:33.253 "read": true, 00:34:33.253 "write": true, 00:34:33.253 "unmap": true, 00:34:33.253 "flush": true, 00:34:33.253 "reset": true, 00:34:33.253 "nvme_admin": false, 00:34:33.253 "nvme_io": false, 00:34:33.253 "nvme_io_md": false, 00:34:33.253 "write_zeroes": true, 00:34:33.253 "zcopy": true, 00:34:33.253 "get_zone_info": false, 00:34:33.253 "zone_management": false, 00:34:33.253 "zone_append": false, 00:34:33.253 "compare": false, 00:34:33.253 "compare_and_write": false, 00:34:33.253 "abort": true, 00:34:33.253 "seek_hole": false, 00:34:33.253 "seek_data": false, 00:34:33.253 "copy": true, 00:34:33.253 "nvme_iov_md": false 00:34:33.253 }, 00:34:33.253 "memory_domains": [ 00:34:33.253 { 00:34:33.253 "dma_device_id": "system", 00:34:33.253 "dma_device_type": 1 00:34:33.253 }, 00:34:33.253 { 00:34:33.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.253 "dma_device_type": 2 00:34:33.253 } 00:34:33.253 ], 00:34:33.253 "driver_specific": {} 00:34:33.253 } 00:34:33.253 ] 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.253 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:33.511 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:33.511 "name": "Existed_Raid", 00:34:33.512 "uuid": "ba9c90d9-673d-4f53-a0dd-8495291e7119", 00:34:33.512 "strip_size_kb": 64, 00:34:33.512 "state": "online", 00:34:33.512 "raid_level": "raid5f", 00:34:33.512 "superblock": false, 00:34:33.512 "num_base_bdevs": 4, 00:34:33.512 "num_base_bdevs_discovered": 4, 00:34:33.512 "num_base_bdevs_operational": 4, 00:34:33.512 "base_bdevs_list": [ 00:34:33.512 { 00:34:33.512 "name": "NewBaseBdev", 00:34:33.512 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:33.512 "is_configured": true, 00:34:33.512 "data_offset": 0, 00:34:33.512 "data_size": 65536 00:34:33.512 }, 00:34:33.512 { 00:34:33.512 "name": "BaseBdev2", 00:34:33.512 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:33.512 "is_configured": true, 00:34:33.512 "data_offset": 0, 00:34:33.512 "data_size": 65536 00:34:33.512 }, 00:34:33.512 { 00:34:33.512 "name": "BaseBdev3", 00:34:33.512 "uuid": "dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:33.512 "is_configured": true, 00:34:33.512 "data_offset": 0, 00:34:33.512 "data_size": 65536 00:34:33.512 }, 00:34:33.512 { 00:34:33.512 "name": "BaseBdev4", 00:34:33.512 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:33.512 "is_configured": true, 00:34:33.512 "data_offset": 0, 00:34:33.512 "data_size": 65536 00:34:33.512 } 00:34:33.512 ] 00:34:33.512 }' 00:34:33.512 21:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:33.512 21:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:34.076 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:34:34.076 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:34.076 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:34.076 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:34.076 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:34.076 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:34.076 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:34.076 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:34.334 [2024-07-15 21:48:07.587379] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:34.334 21:48:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:34.334 "name": "Existed_Raid", 00:34:34.334 "aliases": [ 00:34:34.334 "ba9c90d9-673d-4f53-a0dd-8495291e7119" 00:34:34.334 ], 00:34:34.334 "product_name": "Raid Volume", 00:34:34.334 "block_size": 512, 00:34:34.334 "num_blocks": 196608, 00:34:34.334 "uuid": "ba9c90d9-673d-4f53-a0dd-8495291e7119", 00:34:34.334 "assigned_rate_limits": { 00:34:34.334 "rw_ios_per_sec": 0, 00:34:34.334 "rw_mbytes_per_sec": 0, 00:34:34.334 "r_mbytes_per_sec": 0, 00:34:34.334 "w_mbytes_per_sec": 0 00:34:34.334 }, 00:34:34.334 "claimed": false, 00:34:34.334 "zoned": false, 00:34:34.334 "supported_io_types": { 00:34:34.334 "read": true, 00:34:34.334 "write": true, 00:34:34.334 "unmap": false, 00:34:34.334 "flush": false, 00:34:34.334 "reset": true, 00:34:34.334 "nvme_admin": false, 00:34:34.334 "nvme_io": false, 00:34:34.334 "nvme_io_md": false, 00:34:34.334 "write_zeroes": true, 00:34:34.334 "zcopy": false, 00:34:34.334 "get_zone_info": false, 00:34:34.334 "zone_management": false, 00:34:34.334 "zone_append": false, 00:34:34.334 "compare": false, 00:34:34.334 "compare_and_write": false, 00:34:34.334 "abort": false, 00:34:34.334 "seek_hole": false, 00:34:34.334 "seek_data": false, 00:34:34.334 "copy": false, 00:34:34.334 "nvme_iov_md": false 00:34:34.334 }, 00:34:34.334 "driver_specific": { 00:34:34.334 "raid": { 00:34:34.334 "uuid": "ba9c90d9-673d-4f53-a0dd-8495291e7119", 00:34:34.334 "strip_size_kb": 64, 00:34:34.334 "state": "online", 00:34:34.334 "raid_level": "raid5f", 00:34:34.334 "superblock": false, 00:34:34.334 "num_base_bdevs": 4, 00:34:34.334 "num_base_bdevs_discovered": 4, 00:34:34.334 "num_base_bdevs_operational": 4, 00:34:34.334 "base_bdevs_list": [ 00:34:34.334 { 00:34:34.334 "name": "NewBaseBdev", 00:34:34.334 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:34.334 "is_configured": true, 00:34:34.334 "data_offset": 0, 00:34:34.334 "data_size": 65536 00:34:34.334 }, 00:34:34.334 { 00:34:34.334 "name": "BaseBdev2", 00:34:34.334 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:34.334 "is_configured": true, 00:34:34.334 "data_offset": 0, 00:34:34.334 "data_size": 65536 00:34:34.334 }, 00:34:34.334 { 00:34:34.334 "name": "BaseBdev3", 00:34:34.334 "uuid": "dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:34.334 "is_configured": true, 00:34:34.334 "data_offset": 0, 00:34:34.334 "data_size": 65536 00:34:34.334 }, 00:34:34.334 { 00:34:34.334 "name": "BaseBdev4", 00:34:34.335 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:34.335 "is_configured": true, 00:34:34.335 "data_offset": 0, 00:34:34.335 "data_size": 65536 00:34:34.335 } 00:34:34.335 ] 00:34:34.335 } 00:34:34.335 } 00:34:34.335 }' 00:34:34.335 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:34.335 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:34:34.335 BaseBdev2 00:34:34.335 BaseBdev3 00:34:34.335 BaseBdev4' 00:34:34.335 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:34.335 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:34:34.335 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:34.592 21:48:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:34.592 "name": "NewBaseBdev", 00:34:34.592 "aliases": [ 00:34:34.592 "d6d58c80-a3ff-4c1a-b52d-542f5963ce25" 00:34:34.592 ], 00:34:34.592 "product_name": "Malloc disk", 00:34:34.592 "block_size": 512, 00:34:34.592 "num_blocks": 65536, 00:34:34.592 "uuid": "d6d58c80-a3ff-4c1a-b52d-542f5963ce25", 00:34:34.592 "assigned_rate_limits": { 00:34:34.592 "rw_ios_per_sec": 0, 00:34:34.592 "rw_mbytes_per_sec": 0, 00:34:34.592 "r_mbytes_per_sec": 0, 00:34:34.592 "w_mbytes_per_sec": 0 00:34:34.592 }, 00:34:34.592 "claimed": true, 00:34:34.592 "claim_type": "exclusive_write", 00:34:34.592 "zoned": false, 00:34:34.592 "supported_io_types": { 00:34:34.592 "read": true, 00:34:34.592 "write": true, 00:34:34.592 "unmap": true, 00:34:34.592 "flush": true, 00:34:34.592 "reset": true, 00:34:34.592 "nvme_admin": false, 00:34:34.592 "nvme_io": false, 00:34:34.592 "nvme_io_md": false, 00:34:34.592 "write_zeroes": true, 00:34:34.592 "zcopy": true, 00:34:34.592 "get_zone_info": false, 00:34:34.592 "zone_management": false, 00:34:34.592 "zone_append": false, 00:34:34.592 "compare": false, 00:34:34.592 "compare_and_write": false, 00:34:34.592 "abort": true, 00:34:34.592 "seek_hole": false, 00:34:34.592 "seek_data": false, 00:34:34.592 "copy": true, 00:34:34.592 "nvme_iov_md": false 00:34:34.592 }, 00:34:34.592 "memory_domains": [ 00:34:34.592 { 00:34:34.592 "dma_device_id": "system", 00:34:34.592 "dma_device_type": 1 00:34:34.592 }, 00:34:34.592 { 00:34:34.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:34.592 "dma_device_type": 2 00:34:34.592 } 00:34:34.592 ], 00:34:34.592 "driver_specific": {} 00:34:34.592 }' 00:34:34.592 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:34.592 21:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:34.848 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:34.848 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:34.848 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:34.848 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:34.848 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:34.848 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:35.105 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:35.105 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:35.106 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:35.106 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:35.106 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:35.106 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:35.106 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:35.363 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:35.363 "name": "BaseBdev2", 00:34:35.363 "aliases": [ 00:34:35.363 "86c1a4d0-0979-44dd-b626-589cd6ee504f" 
00:34:35.363 ], 00:34:35.363 "product_name": "Malloc disk", 00:34:35.363 "block_size": 512, 00:34:35.363 "num_blocks": 65536, 00:34:35.363 "uuid": "86c1a4d0-0979-44dd-b626-589cd6ee504f", 00:34:35.363 "assigned_rate_limits": { 00:34:35.363 "rw_ios_per_sec": 0, 00:34:35.363 "rw_mbytes_per_sec": 0, 00:34:35.363 "r_mbytes_per_sec": 0, 00:34:35.363 "w_mbytes_per_sec": 0 00:34:35.363 }, 00:34:35.363 "claimed": true, 00:34:35.363 "claim_type": "exclusive_write", 00:34:35.363 "zoned": false, 00:34:35.363 "supported_io_types": { 00:34:35.363 "read": true, 00:34:35.363 "write": true, 00:34:35.363 "unmap": true, 00:34:35.363 "flush": true, 00:34:35.363 "reset": true, 00:34:35.363 "nvme_admin": false, 00:34:35.363 "nvme_io": false, 00:34:35.363 "nvme_io_md": false, 00:34:35.363 "write_zeroes": true, 00:34:35.363 "zcopy": true, 00:34:35.363 "get_zone_info": false, 00:34:35.363 "zone_management": false, 00:34:35.363 "zone_append": false, 00:34:35.363 "compare": false, 00:34:35.363 "compare_and_write": false, 00:34:35.363 "abort": true, 00:34:35.363 "seek_hole": false, 00:34:35.363 "seek_data": false, 00:34:35.363 "copy": true, 00:34:35.363 "nvme_iov_md": false 00:34:35.363 }, 00:34:35.363 "memory_domains": [ 00:34:35.363 { 00:34:35.363 "dma_device_id": "system", 00:34:35.363 "dma_device_type": 1 00:34:35.363 }, 00:34:35.363 { 00:34:35.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:35.363 "dma_device_type": 2 00:34:35.363 } 00:34:35.363 ], 00:34:35.363 "driver_specific": {} 00:34:35.363 }' 00:34:35.364 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:35.364 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:35.364 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:35.364 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:35.364 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:35.621 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:35.621 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:35.621 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:35.621 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:35.621 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:35.621 21:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:35.880 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:35.880 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:35.880 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:35.880 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:35.880 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:35.880 "name": "BaseBdev3", 00:34:35.880 "aliases": [ 00:34:35.880 "dba1994c-aa26-47d9-85a5-68e80b2d891b" 00:34:35.880 ], 00:34:35.880 "product_name": "Malloc disk", 00:34:35.880 "block_size": 512, 00:34:35.880 "num_blocks": 65536, 00:34:35.880 "uuid": 
"dba1994c-aa26-47d9-85a5-68e80b2d891b", 00:34:35.880 "assigned_rate_limits": { 00:34:35.880 "rw_ios_per_sec": 0, 00:34:35.880 "rw_mbytes_per_sec": 0, 00:34:35.880 "r_mbytes_per_sec": 0, 00:34:35.880 "w_mbytes_per_sec": 0 00:34:35.880 }, 00:34:35.880 "claimed": true, 00:34:35.880 "claim_type": "exclusive_write", 00:34:35.880 "zoned": false, 00:34:35.880 "supported_io_types": { 00:34:35.880 "read": true, 00:34:35.880 "write": true, 00:34:35.880 "unmap": true, 00:34:35.880 "flush": true, 00:34:35.880 "reset": true, 00:34:35.880 "nvme_admin": false, 00:34:35.880 "nvme_io": false, 00:34:35.880 "nvme_io_md": false, 00:34:35.880 "write_zeroes": true, 00:34:35.880 "zcopy": true, 00:34:35.880 "get_zone_info": false, 00:34:35.880 "zone_management": false, 00:34:35.880 "zone_append": false, 00:34:35.880 "compare": false, 00:34:35.880 "compare_and_write": false, 00:34:35.880 "abort": true, 00:34:35.880 "seek_hole": false, 00:34:35.880 "seek_data": false, 00:34:35.880 "copy": true, 00:34:35.880 "nvme_iov_md": false 00:34:35.880 }, 00:34:35.880 "memory_domains": [ 00:34:35.880 { 00:34:35.880 "dma_device_id": "system", 00:34:35.880 "dma_device_type": 1 00:34:35.880 }, 00:34:35.880 { 00:34:35.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:35.880 "dma_device_type": 2 00:34:35.880 } 00:34:35.880 ], 00:34:35.880 "driver_specific": {} 00:34:35.880 }' 00:34:35.880 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:36.138 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:36.138 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:36.138 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:36.138 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:36.138 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:36.138 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:36.138 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:36.395 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:36.395 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:36.395 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:36.395 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:36.395 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:36.395 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:36.395 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:36.657 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:36.657 "name": "BaseBdev4", 00:34:36.657 "aliases": [ 00:34:36.657 "8b0d6da7-9a98-413d-be1b-05cf584ea4ae" 00:34:36.657 ], 00:34:36.657 "product_name": "Malloc disk", 00:34:36.657 "block_size": 512, 00:34:36.657 "num_blocks": 65536, 00:34:36.657 "uuid": "8b0d6da7-9a98-413d-be1b-05cf584ea4ae", 00:34:36.657 "assigned_rate_limits": { 00:34:36.657 "rw_ios_per_sec": 0, 00:34:36.657 "rw_mbytes_per_sec": 0, 00:34:36.657 
"r_mbytes_per_sec": 0, 00:34:36.657 "w_mbytes_per_sec": 0 00:34:36.657 }, 00:34:36.657 "claimed": true, 00:34:36.657 "claim_type": "exclusive_write", 00:34:36.657 "zoned": false, 00:34:36.657 "supported_io_types": { 00:34:36.657 "read": true, 00:34:36.657 "write": true, 00:34:36.657 "unmap": true, 00:34:36.657 "flush": true, 00:34:36.657 "reset": true, 00:34:36.657 "nvme_admin": false, 00:34:36.657 "nvme_io": false, 00:34:36.657 "nvme_io_md": false, 00:34:36.657 "write_zeroes": true, 00:34:36.657 "zcopy": true, 00:34:36.657 "get_zone_info": false, 00:34:36.657 "zone_management": false, 00:34:36.657 "zone_append": false, 00:34:36.657 "compare": false, 00:34:36.657 "compare_and_write": false, 00:34:36.657 "abort": true, 00:34:36.657 "seek_hole": false, 00:34:36.657 "seek_data": false, 00:34:36.657 "copy": true, 00:34:36.657 "nvme_iov_md": false 00:34:36.657 }, 00:34:36.657 "memory_domains": [ 00:34:36.657 { 00:34:36.657 "dma_device_id": "system", 00:34:36.657 "dma_device_type": 1 00:34:36.657 }, 00:34:36.657 { 00:34:36.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:36.657 "dma_device_type": 2 00:34:36.657 } 00:34:36.657 ], 00:34:36.657 "driver_specific": {} 00:34:36.657 }' 00:34:36.657 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:36.657 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:36.657 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:36.657 21:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:36.657 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:36.914 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:36.914 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:36.914 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:36.914 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:36.914 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:36.914 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:37.172 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:37.172 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:37.172 [2024-07-15 21:48:10.490374] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:37.172 [2024-07-15 21:48:10.490481] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:37.172 [2024-07-15 21:48:10.490595] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:37.173 [2024-07-15 21:48:10.490858] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:37.173 [2024-07-15 21:48:10.490889] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 155800 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 155800 ']' 
00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 155800 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155800 00:34:37.173 killing process with pid 155800 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155800' 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 155800 00:34:37.173 21:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 155800 00:34:37.173 [2024-07-15 21:48:10.533908] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:37.737 [2024-07-15 21:48:10.940172] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:39.114 ************************************ 00:34:39.114 END TEST raid5f_state_function_test 00:34:39.114 ************************************ 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:34:39.114 00:34:39.114 real 0m34.363s 00:34:39.114 user 1m3.670s 00:34:39.114 sys 0m4.226s 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.114 21:48:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:39.114 21:48:12 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:34:39.114 21:48:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:34:39.114 21:48:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:39.114 21:48:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:39.114 ************************************ 00:34:39.114 START TEST raid5f_state_function_test_sb 00:34:39.114 ************************************ 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 true 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:39.114 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=156958 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 156958' 00:34:39.115 Process raid pid: 156958 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 156958 /var/tmp/spdk-raid.sock 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 156958 ']' 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:34:39.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:39.115 21:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:39.115 [2024-07-15 21:48:12.341371] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:34:39.115 [2024-07-15 21:48:12.341581] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.373 [2024-07-15 21:48:12.499946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.373 [2024-07-15 21:48:12.704087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.632 [2024-07-15 21:48:12.917141] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:39.889 21:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:39.889 21:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:34:39.889 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:40.147 [2024-07-15 21:48:13.345879] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:40.148 [2024-07-15 21:48:13.346103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:40.148 [2024-07-15 21:48:13.346145] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:40.148 [2024-07-15 21:48:13.346195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:40.148 [2024-07-15 21:48:13.346236] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:40.148 [2024-07-15 21:48:13.346273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:40.148 [2024-07-15 21:48:13.346308] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:40.148 [2024-07-15 21:48:13.346346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:40.148 21:48:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:40.148 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.407 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:40.407 "name": "Existed_Raid", 00:34:40.407 "uuid": "648ceda2-2b1e-4982-afcb-dbed45c22d8f", 00:34:40.407 "strip_size_kb": 64, 00:34:40.407 "state": "configuring", 00:34:40.407 "raid_level": "raid5f", 00:34:40.407 "superblock": true, 00:34:40.407 "num_base_bdevs": 4, 00:34:40.407 "num_base_bdevs_discovered": 0, 00:34:40.407 "num_base_bdevs_operational": 4, 00:34:40.407 "base_bdevs_list": [ 00:34:40.407 { 00:34:40.407 "name": "BaseBdev1", 00:34:40.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.407 "is_configured": false, 00:34:40.407 "data_offset": 0, 00:34:40.407 "data_size": 0 00:34:40.407 }, 00:34:40.407 { 00:34:40.407 "name": "BaseBdev2", 00:34:40.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.407 "is_configured": false, 00:34:40.407 "data_offset": 0, 00:34:40.407 "data_size": 0 00:34:40.407 }, 00:34:40.407 { 00:34:40.407 "name": "BaseBdev3", 00:34:40.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.407 "is_configured": false, 00:34:40.407 "data_offset": 0, 00:34:40.407 "data_size": 0 00:34:40.407 }, 00:34:40.407 { 00:34:40.407 "name": "BaseBdev4", 00:34:40.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.407 "is_configured": false, 00:34:40.407 "data_offset": 0, 00:34:40.407 "data_size": 0 00:34:40.407 } 00:34:40.407 ] 00:34:40.407 }' 00:34:40.407 21:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:40.407 21:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:40.973 21:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:41.231 [2024-07-15 21:48:14.451791] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:41.231 [2024-07-15 21:48:14.451916] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:41.231 21:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:41.489 [2024-07-15 21:48:14.643495] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:41.489 [2024-07-15 21:48:14.643642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:41.489 [2024-07-15 21:48:14.643678] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:41.489 [2024-07-15 21:48:14.643740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:41.489 
[2024-07-15 21:48:14.643779] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:41.489 [2024-07-15 21:48:14.643825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:41.489 [2024-07-15 21:48:14.643853] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:41.489 [2024-07-15 21:48:14.643887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:41.489 21:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:41.749 [2024-07-15 21:48:14.886717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:41.749 BaseBdev1 00:34:41.749 21:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:41.749 21:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:41.749 21:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:41.749 21:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:41.749 21:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:41.749 21:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:41.749 21:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:41.749 21:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:42.013 [ 00:34:42.013 { 00:34:42.013 "name": "BaseBdev1", 00:34:42.013 "aliases": [ 00:34:42.013 "c4f66e90-0b0b-4ee6-b801-475a6b2cda55" 00:34:42.013 ], 00:34:42.013 "product_name": "Malloc disk", 00:34:42.013 "block_size": 512, 00:34:42.013 "num_blocks": 65536, 00:34:42.013 "uuid": "c4f66e90-0b0b-4ee6-b801-475a6b2cda55", 00:34:42.013 "assigned_rate_limits": { 00:34:42.013 "rw_ios_per_sec": 0, 00:34:42.013 "rw_mbytes_per_sec": 0, 00:34:42.013 "r_mbytes_per_sec": 0, 00:34:42.013 "w_mbytes_per_sec": 0 00:34:42.013 }, 00:34:42.013 "claimed": true, 00:34:42.013 "claim_type": "exclusive_write", 00:34:42.013 "zoned": false, 00:34:42.013 "supported_io_types": { 00:34:42.013 "read": true, 00:34:42.013 "write": true, 00:34:42.013 "unmap": true, 00:34:42.013 "flush": true, 00:34:42.013 "reset": true, 00:34:42.013 "nvme_admin": false, 00:34:42.013 "nvme_io": false, 00:34:42.013 "nvme_io_md": false, 00:34:42.013 "write_zeroes": true, 00:34:42.013 "zcopy": true, 00:34:42.013 "get_zone_info": false, 00:34:42.013 "zone_management": false, 00:34:42.013 "zone_append": false, 00:34:42.013 "compare": false, 00:34:42.013 "compare_and_write": false, 00:34:42.013 "abort": true, 00:34:42.013 "seek_hole": false, 00:34:42.013 "seek_data": false, 00:34:42.013 "copy": true, 00:34:42.013 "nvme_iov_md": false 00:34:42.013 }, 00:34:42.013 "memory_domains": [ 00:34:42.013 { 00:34:42.013 "dma_device_id": "system", 00:34:42.013 "dma_device_type": 1 00:34:42.013 }, 00:34:42.013 { 00:34:42.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:42.013 "dma_device_type": 2 00:34:42.013 } 00:34:42.013 ], 
00:34:42.013 "driver_specific": {} 00:34:42.013 } 00:34:42.013 ] 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:42.013 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.272 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:42.272 "name": "Existed_Raid", 00:34:42.272 "uuid": "f1753719-4ea3-4594-a2a2-155aad3eda55", 00:34:42.272 "strip_size_kb": 64, 00:34:42.272 "state": "configuring", 00:34:42.272 "raid_level": "raid5f", 00:34:42.272 "superblock": true, 00:34:42.272 "num_base_bdevs": 4, 00:34:42.272 "num_base_bdevs_discovered": 1, 00:34:42.272 "num_base_bdevs_operational": 4, 00:34:42.272 "base_bdevs_list": [ 00:34:42.272 { 00:34:42.272 "name": "BaseBdev1", 00:34:42.272 "uuid": "c4f66e90-0b0b-4ee6-b801-475a6b2cda55", 00:34:42.272 "is_configured": true, 00:34:42.272 "data_offset": 2048, 00:34:42.272 "data_size": 63488 00:34:42.272 }, 00:34:42.272 { 00:34:42.272 "name": "BaseBdev2", 00:34:42.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.272 "is_configured": false, 00:34:42.272 "data_offset": 0, 00:34:42.272 "data_size": 0 00:34:42.272 }, 00:34:42.272 { 00:34:42.272 "name": "BaseBdev3", 00:34:42.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.272 "is_configured": false, 00:34:42.272 "data_offset": 0, 00:34:42.272 "data_size": 0 00:34:42.272 }, 00:34:42.272 { 00:34:42.272 "name": "BaseBdev4", 00:34:42.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.272 "is_configured": false, 00:34:42.272 "data_offset": 0, 00:34:42.272 "data_size": 0 00:34:42.272 } 00:34:42.272 ] 00:34:42.272 }' 00:34:42.272 21:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:42.272 21:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:43.205 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
00:34:43.205 [2024-07-15 21:48:16.424145] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:43.205 [2024-07-15 21:48:16.424301] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:34:43.205 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:43.463 [2024-07-15 21:48:16.643865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:43.463 [2024-07-15 21:48:16.645920] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:43.463 [2024-07-15 21:48:16.646017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:43.463 [2024-07-15 21:48:16.646043] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:43.463 [2024-07-15 21:48:16.646076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:43.463 [2024-07-15 21:48:16.646164] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:43.463 [2024-07-15 21:48:16.646212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:43.463 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:43.723 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:43.723 "name": "Existed_Raid", 00:34:43.723 "uuid": "2614568b-ac47-4fc3-9e0a-0d99bb90840e", 00:34:43.723 "strip_size_kb": 64, 00:34:43.723 "state": "configuring", 00:34:43.723 "raid_level": "raid5f", 00:34:43.723 "superblock": true, 
00:34:43.723 "num_base_bdevs": 4, 00:34:43.723 "num_base_bdevs_discovered": 1, 00:34:43.723 "num_base_bdevs_operational": 4, 00:34:43.723 "base_bdevs_list": [ 00:34:43.723 { 00:34:43.723 "name": "BaseBdev1", 00:34:43.723 "uuid": "c4f66e90-0b0b-4ee6-b801-475a6b2cda55", 00:34:43.723 "is_configured": true, 00:34:43.723 "data_offset": 2048, 00:34:43.723 "data_size": 63488 00:34:43.723 }, 00:34:43.723 { 00:34:43.723 "name": "BaseBdev2", 00:34:43.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.723 "is_configured": false, 00:34:43.723 "data_offset": 0, 00:34:43.723 "data_size": 0 00:34:43.723 }, 00:34:43.723 { 00:34:43.723 "name": "BaseBdev3", 00:34:43.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.723 "is_configured": false, 00:34:43.723 "data_offset": 0, 00:34:43.723 "data_size": 0 00:34:43.723 }, 00:34:43.723 { 00:34:43.723 "name": "BaseBdev4", 00:34:43.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.723 "is_configured": false, 00:34:43.723 "data_offset": 0, 00:34:43.723 "data_size": 0 00:34:43.723 } 00:34:43.723 ] 00:34:43.723 }' 00:34:43.723 21:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:43.723 21:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.289 21:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:44.559 [2024-07-15 21:48:17.826922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:44.559 BaseBdev2 00:34:44.559 21:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:44.559 21:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:44.559 21:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:44.559 21:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:44.559 21:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:44.559 21:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:44.559 21:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:44.843 21:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:45.104 [ 00:34:45.104 { 00:34:45.104 "name": "BaseBdev2", 00:34:45.104 "aliases": [ 00:34:45.104 "a4107aba-75f3-4142-93d6-836666ec4fe5" 00:34:45.104 ], 00:34:45.104 "product_name": "Malloc disk", 00:34:45.104 "block_size": 512, 00:34:45.104 "num_blocks": 65536, 00:34:45.104 "uuid": "a4107aba-75f3-4142-93d6-836666ec4fe5", 00:34:45.104 "assigned_rate_limits": { 00:34:45.104 "rw_ios_per_sec": 0, 00:34:45.104 "rw_mbytes_per_sec": 0, 00:34:45.104 "r_mbytes_per_sec": 0, 00:34:45.104 "w_mbytes_per_sec": 0 00:34:45.104 }, 00:34:45.104 "claimed": true, 00:34:45.104 "claim_type": "exclusive_write", 00:34:45.104 "zoned": false, 00:34:45.104 "supported_io_types": { 00:34:45.104 "read": true, 00:34:45.104 "write": true, 00:34:45.104 "unmap": true, 00:34:45.104 "flush": true, 00:34:45.104 "reset": true, 
00:34:45.104 "nvme_admin": false, 00:34:45.104 "nvme_io": false, 00:34:45.104 "nvme_io_md": false, 00:34:45.104 "write_zeroes": true, 00:34:45.104 "zcopy": true, 00:34:45.104 "get_zone_info": false, 00:34:45.104 "zone_management": false, 00:34:45.104 "zone_append": false, 00:34:45.104 "compare": false, 00:34:45.104 "compare_and_write": false, 00:34:45.104 "abort": true, 00:34:45.104 "seek_hole": false, 00:34:45.104 "seek_data": false, 00:34:45.104 "copy": true, 00:34:45.104 "nvme_iov_md": false 00:34:45.104 }, 00:34:45.104 "memory_domains": [ 00:34:45.104 { 00:34:45.104 "dma_device_id": "system", 00:34:45.104 "dma_device_type": 1 00:34:45.104 }, 00:34:45.104 { 00:34:45.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:45.104 "dma_device_type": 2 00:34:45.104 } 00:34:45.104 ], 00:34:45.104 "driver_specific": {} 00:34:45.104 } 00:34:45.104 ] 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.104 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:45.362 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:45.362 "name": "Existed_Raid", 00:34:45.362 "uuid": "2614568b-ac47-4fc3-9e0a-0d99bb90840e", 00:34:45.362 "strip_size_kb": 64, 00:34:45.362 "state": "configuring", 00:34:45.362 "raid_level": "raid5f", 00:34:45.362 "superblock": true, 00:34:45.362 "num_base_bdevs": 4, 00:34:45.362 "num_base_bdevs_discovered": 2, 00:34:45.362 "num_base_bdevs_operational": 4, 00:34:45.362 "base_bdevs_list": [ 00:34:45.362 { 00:34:45.362 "name": "BaseBdev1", 00:34:45.362 "uuid": "c4f66e90-0b0b-4ee6-b801-475a6b2cda55", 00:34:45.362 "is_configured": true, 00:34:45.362 "data_offset": 2048, 00:34:45.362 "data_size": 63488 00:34:45.362 }, 00:34:45.362 { 00:34:45.362 "name": "BaseBdev2", 00:34:45.362 "uuid": 
"a4107aba-75f3-4142-93d6-836666ec4fe5", 00:34:45.362 "is_configured": true, 00:34:45.362 "data_offset": 2048, 00:34:45.362 "data_size": 63488 00:34:45.362 }, 00:34:45.362 { 00:34:45.362 "name": "BaseBdev3", 00:34:45.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.362 "is_configured": false, 00:34:45.362 "data_offset": 0, 00:34:45.362 "data_size": 0 00:34:45.362 }, 00:34:45.362 { 00:34:45.362 "name": "BaseBdev4", 00:34:45.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.362 "is_configured": false, 00:34:45.362 "data_offset": 0, 00:34:45.362 "data_size": 0 00:34:45.362 } 00:34:45.362 ] 00:34:45.362 }' 00:34:45.362 21:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:45.362 21:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.927 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:46.185 [2024-07-15 21:48:19.361080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:46.185 BaseBdev3 00:34:46.185 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:34:46.185 21:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:46.185 21:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:46.185 21:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:46.185 21:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:46.185 21:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:46.185 21:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:46.443 [ 00:34:46.443 { 00:34:46.443 "name": "BaseBdev3", 00:34:46.443 "aliases": [ 00:34:46.443 "e36128f5-f1be-4d8a-8660-1ebc91e05d83" 00:34:46.443 ], 00:34:46.443 "product_name": "Malloc disk", 00:34:46.443 "block_size": 512, 00:34:46.443 "num_blocks": 65536, 00:34:46.443 "uuid": "e36128f5-f1be-4d8a-8660-1ebc91e05d83", 00:34:46.443 "assigned_rate_limits": { 00:34:46.443 "rw_ios_per_sec": 0, 00:34:46.443 "rw_mbytes_per_sec": 0, 00:34:46.443 "r_mbytes_per_sec": 0, 00:34:46.443 "w_mbytes_per_sec": 0 00:34:46.443 }, 00:34:46.443 "claimed": true, 00:34:46.443 "claim_type": "exclusive_write", 00:34:46.443 "zoned": false, 00:34:46.443 "supported_io_types": { 00:34:46.443 "read": true, 00:34:46.443 "write": true, 00:34:46.443 "unmap": true, 00:34:46.443 "flush": true, 00:34:46.443 "reset": true, 00:34:46.443 "nvme_admin": false, 00:34:46.443 "nvme_io": false, 00:34:46.443 "nvme_io_md": false, 00:34:46.443 "write_zeroes": true, 00:34:46.443 "zcopy": true, 00:34:46.443 "get_zone_info": false, 00:34:46.443 "zone_management": false, 00:34:46.443 "zone_append": false, 00:34:46.443 "compare": false, 00:34:46.443 "compare_and_write": false, 00:34:46.443 "abort": true, 00:34:46.443 "seek_hole": false, 00:34:46.443 "seek_data": false, 00:34:46.443 
"copy": true, 00:34:46.443 "nvme_iov_md": false 00:34:46.443 }, 00:34:46.443 "memory_domains": [ 00:34:46.443 { 00:34:46.443 "dma_device_id": "system", 00:34:46.443 "dma_device_type": 1 00:34:46.443 }, 00:34:46.443 { 00:34:46.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:46.443 "dma_device_type": 2 00:34:46.443 } 00:34:46.443 ], 00:34:46.443 "driver_specific": {} 00:34:46.443 } 00:34:46.443 ] 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:46.443 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:46.444 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:46.444 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:46.444 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:46.444 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:46.444 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.444 21:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:46.702 21:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:46.702 "name": "Existed_Raid", 00:34:46.702 "uuid": "2614568b-ac47-4fc3-9e0a-0d99bb90840e", 00:34:46.702 "strip_size_kb": 64, 00:34:46.702 "state": "configuring", 00:34:46.702 "raid_level": "raid5f", 00:34:46.702 "superblock": true, 00:34:46.702 "num_base_bdevs": 4, 00:34:46.702 "num_base_bdevs_discovered": 3, 00:34:46.702 "num_base_bdevs_operational": 4, 00:34:46.702 "base_bdevs_list": [ 00:34:46.702 { 00:34:46.702 "name": "BaseBdev1", 00:34:46.702 "uuid": "c4f66e90-0b0b-4ee6-b801-475a6b2cda55", 00:34:46.702 "is_configured": true, 00:34:46.702 "data_offset": 2048, 00:34:46.702 "data_size": 63488 00:34:46.702 }, 00:34:46.702 { 00:34:46.702 "name": "BaseBdev2", 00:34:46.702 "uuid": "a4107aba-75f3-4142-93d6-836666ec4fe5", 00:34:46.702 "is_configured": true, 00:34:46.702 "data_offset": 2048, 00:34:46.702 "data_size": 63488 00:34:46.702 }, 00:34:46.702 { 00:34:46.702 "name": "BaseBdev3", 00:34:46.702 "uuid": "e36128f5-f1be-4d8a-8660-1ebc91e05d83", 00:34:46.702 "is_configured": true, 00:34:46.702 "data_offset": 2048, 00:34:46.702 "data_size": 63488 00:34:46.702 }, 00:34:46.702 { 00:34:46.702 "name": "BaseBdev4", 00:34:46.702 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:46.702 "is_configured": false, 00:34:46.702 "data_offset": 0, 00:34:46.702 "data_size": 0 00:34:46.702 } 00:34:46.702 ] 00:34:46.702 }' 00:34:46.702 21:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:46.702 21:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.661 21:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:47.661 [2024-07-15 21:48:20.972318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:47.661 [2024-07-15 21:48:20.972718] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:34:47.661 [2024-07-15 21:48:20.972763] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:47.661 [2024-07-15 21:48:20.972910] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:34:47.661 BaseBdev4 00:34:47.661 [2024-07-15 21:48:20.981424] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:34:47.661 [2024-07-15 21:48:20.981496] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:34:47.661 [2024-07-15 21:48:20.981738] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:47.661 21:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:34:47.661 21:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:34:47.661 21:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:47.661 21:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:47.661 21:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:47.661 21:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:47.661 21:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:47.920 21:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:48.178 [ 00:34:48.178 { 00:34:48.178 "name": "BaseBdev4", 00:34:48.178 "aliases": [ 00:34:48.178 "72e621dc-8968-46d2-b520-207a1f4c79ee" 00:34:48.178 ], 00:34:48.179 "product_name": "Malloc disk", 00:34:48.179 "block_size": 512, 00:34:48.179 "num_blocks": 65536, 00:34:48.179 "uuid": "72e621dc-8968-46d2-b520-207a1f4c79ee", 00:34:48.179 "assigned_rate_limits": { 00:34:48.179 "rw_ios_per_sec": 0, 00:34:48.179 "rw_mbytes_per_sec": 0, 00:34:48.179 "r_mbytes_per_sec": 0, 00:34:48.179 "w_mbytes_per_sec": 0 00:34:48.179 }, 00:34:48.179 "claimed": true, 00:34:48.179 "claim_type": "exclusive_write", 00:34:48.179 "zoned": false, 00:34:48.179 "supported_io_types": { 00:34:48.179 "read": true, 00:34:48.179 "write": true, 00:34:48.179 "unmap": true, 00:34:48.179 "flush": true, 00:34:48.179 "reset": true, 00:34:48.179 "nvme_admin": false, 00:34:48.179 "nvme_io": false, 00:34:48.179 "nvme_io_md": false, 00:34:48.179 
"write_zeroes": true, 00:34:48.179 "zcopy": true, 00:34:48.179 "get_zone_info": false, 00:34:48.179 "zone_management": false, 00:34:48.179 "zone_append": false, 00:34:48.179 "compare": false, 00:34:48.179 "compare_and_write": false, 00:34:48.179 "abort": true, 00:34:48.179 "seek_hole": false, 00:34:48.179 "seek_data": false, 00:34:48.179 "copy": true, 00:34:48.179 "nvme_iov_md": false 00:34:48.179 }, 00:34:48.179 "memory_domains": [ 00:34:48.179 { 00:34:48.179 "dma_device_id": "system", 00:34:48.179 "dma_device_type": 1 00:34:48.179 }, 00:34:48.179 { 00:34:48.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:48.179 "dma_device_type": 2 00:34:48.179 } 00:34:48.179 ], 00:34:48.179 "driver_specific": {} 00:34:48.179 } 00:34:48.179 ] 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.179 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:48.438 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:48.438 "name": "Existed_Raid", 00:34:48.438 "uuid": "2614568b-ac47-4fc3-9e0a-0d99bb90840e", 00:34:48.438 "strip_size_kb": 64, 00:34:48.438 "state": "online", 00:34:48.438 "raid_level": "raid5f", 00:34:48.438 "superblock": true, 00:34:48.438 "num_base_bdevs": 4, 00:34:48.438 "num_base_bdevs_discovered": 4, 00:34:48.438 "num_base_bdevs_operational": 4, 00:34:48.438 "base_bdevs_list": [ 00:34:48.438 { 00:34:48.438 "name": "BaseBdev1", 00:34:48.438 "uuid": "c4f66e90-0b0b-4ee6-b801-475a6b2cda55", 00:34:48.438 "is_configured": true, 00:34:48.438 "data_offset": 2048, 00:34:48.438 "data_size": 63488 00:34:48.438 }, 00:34:48.438 { 00:34:48.438 "name": "BaseBdev2", 00:34:48.438 "uuid": "a4107aba-75f3-4142-93d6-836666ec4fe5", 00:34:48.438 "is_configured": true, 00:34:48.438 "data_offset": 2048, 00:34:48.438 "data_size": 63488 00:34:48.438 }, 
00:34:48.438 { 00:34:48.438 "name": "BaseBdev3", 00:34:48.438 "uuid": "e36128f5-f1be-4d8a-8660-1ebc91e05d83", 00:34:48.438 "is_configured": true, 00:34:48.438 "data_offset": 2048, 00:34:48.438 "data_size": 63488 00:34:48.438 }, 00:34:48.438 { 00:34:48.438 "name": "BaseBdev4", 00:34:48.438 "uuid": "72e621dc-8968-46d2-b520-207a1f4c79ee", 00:34:48.438 "is_configured": true, 00:34:48.438 "data_offset": 2048, 00:34:48.438 "data_size": 63488 00:34:48.438 } 00:34:48.438 ] 00:34:48.438 }' 00:34:48.438 21:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:48.438 21:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:49.003 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:49.003 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:49.003 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:49.003 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:49.003 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:49.003 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:34:49.003 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:49.003 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:49.276 [2024-07-15 21:48:22.505989] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:49.276 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:49.276 "name": "Existed_Raid", 00:34:49.276 "aliases": [ 00:34:49.276 "2614568b-ac47-4fc3-9e0a-0d99bb90840e" 00:34:49.276 ], 00:34:49.277 "product_name": "Raid Volume", 00:34:49.277 "block_size": 512, 00:34:49.277 "num_blocks": 190464, 00:34:49.277 "uuid": "2614568b-ac47-4fc3-9e0a-0d99bb90840e", 00:34:49.277 "assigned_rate_limits": { 00:34:49.277 "rw_ios_per_sec": 0, 00:34:49.277 "rw_mbytes_per_sec": 0, 00:34:49.277 "r_mbytes_per_sec": 0, 00:34:49.277 "w_mbytes_per_sec": 0 00:34:49.277 }, 00:34:49.277 "claimed": false, 00:34:49.277 "zoned": false, 00:34:49.277 "supported_io_types": { 00:34:49.277 "read": true, 00:34:49.277 "write": true, 00:34:49.277 "unmap": false, 00:34:49.277 "flush": false, 00:34:49.277 "reset": true, 00:34:49.277 "nvme_admin": false, 00:34:49.277 "nvme_io": false, 00:34:49.277 "nvme_io_md": false, 00:34:49.277 "write_zeroes": true, 00:34:49.277 "zcopy": false, 00:34:49.277 "get_zone_info": false, 00:34:49.277 "zone_management": false, 00:34:49.277 "zone_append": false, 00:34:49.277 "compare": false, 00:34:49.277 "compare_and_write": false, 00:34:49.277 "abort": false, 00:34:49.277 "seek_hole": false, 00:34:49.277 "seek_data": false, 00:34:49.277 "copy": false, 00:34:49.277 "nvme_iov_md": false 00:34:49.277 }, 00:34:49.277 "driver_specific": { 00:34:49.277 "raid": { 00:34:49.277 "uuid": "2614568b-ac47-4fc3-9e0a-0d99bb90840e", 00:34:49.277 "strip_size_kb": 64, 00:34:49.277 "state": "online", 00:34:49.277 "raid_level": "raid5f", 00:34:49.277 "superblock": true, 00:34:49.277 "num_base_bdevs": 4, 00:34:49.277 "num_base_bdevs_discovered": 4, 00:34:49.277 
"num_base_bdevs_operational": 4, 00:34:49.277 "base_bdevs_list": [ 00:34:49.277 { 00:34:49.277 "name": "BaseBdev1", 00:34:49.277 "uuid": "c4f66e90-0b0b-4ee6-b801-475a6b2cda55", 00:34:49.277 "is_configured": true, 00:34:49.277 "data_offset": 2048, 00:34:49.277 "data_size": 63488 00:34:49.277 }, 00:34:49.277 { 00:34:49.277 "name": "BaseBdev2", 00:34:49.277 "uuid": "a4107aba-75f3-4142-93d6-836666ec4fe5", 00:34:49.277 "is_configured": true, 00:34:49.277 "data_offset": 2048, 00:34:49.277 "data_size": 63488 00:34:49.277 }, 00:34:49.277 { 00:34:49.277 "name": "BaseBdev3", 00:34:49.277 "uuid": "e36128f5-f1be-4d8a-8660-1ebc91e05d83", 00:34:49.277 "is_configured": true, 00:34:49.277 "data_offset": 2048, 00:34:49.277 "data_size": 63488 00:34:49.277 }, 00:34:49.277 { 00:34:49.277 "name": "BaseBdev4", 00:34:49.277 "uuid": "72e621dc-8968-46d2-b520-207a1f4c79ee", 00:34:49.277 "is_configured": true, 00:34:49.277 "data_offset": 2048, 00:34:49.277 "data_size": 63488 00:34:49.277 } 00:34:49.277 ] 00:34:49.277 } 00:34:49.277 } 00:34:49.277 }' 00:34:49.277 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:49.277 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:49.277 BaseBdev2 00:34:49.277 BaseBdev3 00:34:49.277 BaseBdev4' 00:34:49.277 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:49.277 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:49.277 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:49.535 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:49.535 "name": "BaseBdev1", 00:34:49.535 "aliases": [ 00:34:49.535 "c4f66e90-0b0b-4ee6-b801-475a6b2cda55" 00:34:49.535 ], 00:34:49.535 "product_name": "Malloc disk", 00:34:49.535 "block_size": 512, 00:34:49.535 "num_blocks": 65536, 00:34:49.535 "uuid": "c4f66e90-0b0b-4ee6-b801-475a6b2cda55", 00:34:49.535 "assigned_rate_limits": { 00:34:49.535 "rw_ios_per_sec": 0, 00:34:49.535 "rw_mbytes_per_sec": 0, 00:34:49.535 "r_mbytes_per_sec": 0, 00:34:49.535 "w_mbytes_per_sec": 0 00:34:49.535 }, 00:34:49.535 "claimed": true, 00:34:49.535 "claim_type": "exclusive_write", 00:34:49.535 "zoned": false, 00:34:49.535 "supported_io_types": { 00:34:49.535 "read": true, 00:34:49.535 "write": true, 00:34:49.535 "unmap": true, 00:34:49.535 "flush": true, 00:34:49.535 "reset": true, 00:34:49.535 "nvme_admin": false, 00:34:49.535 "nvme_io": false, 00:34:49.535 "nvme_io_md": false, 00:34:49.535 "write_zeroes": true, 00:34:49.535 "zcopy": true, 00:34:49.535 "get_zone_info": false, 00:34:49.535 "zone_management": false, 00:34:49.535 "zone_append": false, 00:34:49.535 "compare": false, 00:34:49.535 "compare_and_write": false, 00:34:49.535 "abort": true, 00:34:49.535 "seek_hole": false, 00:34:49.535 "seek_data": false, 00:34:49.535 "copy": true, 00:34:49.535 "nvme_iov_md": false 00:34:49.535 }, 00:34:49.535 "memory_domains": [ 00:34:49.535 { 00:34:49.535 "dma_device_id": "system", 00:34:49.535 "dma_device_type": 1 00:34:49.535 }, 00:34:49.535 { 00:34:49.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:49.535 "dma_device_type": 2 00:34:49.535 } 00:34:49.535 ], 00:34:49.535 "driver_specific": {} 00:34:49.535 }' 
00:34:49.535 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:49.535 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:49.793 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:49.793 21:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:49.793 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:49.793 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:49.793 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:49.793 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:50.057 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:50.057 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:50.057 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:50.057 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:50.057 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:50.057 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:50.057 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:50.313 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:50.313 "name": "BaseBdev2", 00:34:50.313 "aliases": [ 00:34:50.313 "a4107aba-75f3-4142-93d6-836666ec4fe5" 00:34:50.313 ], 00:34:50.313 "product_name": "Malloc disk", 00:34:50.313 "block_size": 512, 00:34:50.313 "num_blocks": 65536, 00:34:50.313 "uuid": "a4107aba-75f3-4142-93d6-836666ec4fe5", 00:34:50.313 "assigned_rate_limits": { 00:34:50.313 "rw_ios_per_sec": 0, 00:34:50.313 "rw_mbytes_per_sec": 0, 00:34:50.313 "r_mbytes_per_sec": 0, 00:34:50.313 "w_mbytes_per_sec": 0 00:34:50.313 }, 00:34:50.313 "claimed": true, 00:34:50.313 "claim_type": "exclusive_write", 00:34:50.313 "zoned": false, 00:34:50.313 "supported_io_types": { 00:34:50.313 "read": true, 00:34:50.313 "write": true, 00:34:50.313 "unmap": true, 00:34:50.313 "flush": true, 00:34:50.313 "reset": true, 00:34:50.313 "nvme_admin": false, 00:34:50.313 "nvme_io": false, 00:34:50.313 "nvme_io_md": false, 00:34:50.313 "write_zeroes": true, 00:34:50.313 "zcopy": true, 00:34:50.313 "get_zone_info": false, 00:34:50.313 "zone_management": false, 00:34:50.313 "zone_append": false, 00:34:50.313 "compare": false, 00:34:50.313 "compare_and_write": false, 00:34:50.313 "abort": true, 00:34:50.313 "seek_hole": false, 00:34:50.314 "seek_data": false, 00:34:50.314 "copy": true, 00:34:50.314 "nvme_iov_md": false 00:34:50.314 }, 00:34:50.314 "memory_domains": [ 00:34:50.314 { 00:34:50.314 "dma_device_id": "system", 00:34:50.314 "dma_device_type": 1 00:34:50.314 }, 00:34:50.314 { 00:34:50.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.314 "dma_device_type": 2 00:34:50.314 } 00:34:50.314 ], 00:34:50.314 "driver_specific": {} 00:34:50.314 }' 00:34:50.314 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:50.314 
21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:50.314 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:50.314 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:50.571 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:50.571 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:50.571 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:50.572 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:50.572 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:50.572 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:50.830 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:50.830 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:50.830 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:50.830 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:50.830 21:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:51.087 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:51.087 "name": "BaseBdev3", 00:34:51.087 "aliases": [ 00:34:51.087 "e36128f5-f1be-4d8a-8660-1ebc91e05d83" 00:34:51.087 ], 00:34:51.087 "product_name": "Malloc disk", 00:34:51.087 "block_size": 512, 00:34:51.087 "num_blocks": 65536, 00:34:51.087 "uuid": "e36128f5-f1be-4d8a-8660-1ebc91e05d83", 00:34:51.087 "assigned_rate_limits": { 00:34:51.087 "rw_ios_per_sec": 0, 00:34:51.087 "rw_mbytes_per_sec": 0, 00:34:51.087 "r_mbytes_per_sec": 0, 00:34:51.087 "w_mbytes_per_sec": 0 00:34:51.087 }, 00:34:51.087 "claimed": true, 00:34:51.087 "claim_type": "exclusive_write", 00:34:51.087 "zoned": false, 00:34:51.087 "supported_io_types": { 00:34:51.087 "read": true, 00:34:51.087 "write": true, 00:34:51.087 "unmap": true, 00:34:51.087 "flush": true, 00:34:51.087 "reset": true, 00:34:51.087 "nvme_admin": false, 00:34:51.087 "nvme_io": false, 00:34:51.087 "nvme_io_md": false, 00:34:51.087 "write_zeroes": true, 00:34:51.087 "zcopy": true, 00:34:51.087 "get_zone_info": false, 00:34:51.087 "zone_management": false, 00:34:51.087 "zone_append": false, 00:34:51.087 "compare": false, 00:34:51.087 "compare_and_write": false, 00:34:51.087 "abort": true, 00:34:51.087 "seek_hole": false, 00:34:51.087 "seek_data": false, 00:34:51.087 "copy": true, 00:34:51.087 "nvme_iov_md": false 00:34:51.087 }, 00:34:51.087 "memory_domains": [ 00:34:51.087 { 00:34:51.087 "dma_device_id": "system", 00:34:51.087 "dma_device_type": 1 00:34:51.087 }, 00:34:51.087 { 00:34:51.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.087 "dma_device_type": 2 00:34:51.087 } 00:34:51.087 ], 00:34:51.087 "driver_specific": {} 00:34:51.087 }' 00:34:51.087 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.087 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.087 21:48:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:51.087 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.087 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:51.346 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:51.605 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:51.605 "name": "BaseBdev4", 00:34:51.605 "aliases": [ 00:34:51.605 "72e621dc-8968-46d2-b520-207a1f4c79ee" 00:34:51.605 ], 00:34:51.605 "product_name": "Malloc disk", 00:34:51.605 "block_size": 512, 00:34:51.605 "num_blocks": 65536, 00:34:51.605 "uuid": "72e621dc-8968-46d2-b520-207a1f4c79ee", 00:34:51.605 "assigned_rate_limits": { 00:34:51.605 "rw_ios_per_sec": 0, 00:34:51.605 "rw_mbytes_per_sec": 0, 00:34:51.605 "r_mbytes_per_sec": 0, 00:34:51.605 "w_mbytes_per_sec": 0 00:34:51.605 }, 00:34:51.605 "claimed": true, 00:34:51.605 "claim_type": "exclusive_write", 00:34:51.605 "zoned": false, 00:34:51.605 "supported_io_types": { 00:34:51.605 "read": true, 00:34:51.605 "write": true, 00:34:51.605 "unmap": true, 00:34:51.605 "flush": true, 00:34:51.605 "reset": true, 00:34:51.605 "nvme_admin": false, 00:34:51.605 "nvme_io": false, 00:34:51.605 "nvme_io_md": false, 00:34:51.605 "write_zeroes": true, 00:34:51.605 "zcopy": true, 00:34:51.605 "get_zone_info": false, 00:34:51.605 "zone_management": false, 00:34:51.605 "zone_append": false, 00:34:51.605 "compare": false, 00:34:51.605 "compare_and_write": false, 00:34:51.605 "abort": true, 00:34:51.605 "seek_hole": false, 00:34:51.605 "seek_data": false, 00:34:51.605 "copy": true, 00:34:51.605 "nvme_iov_md": false 00:34:51.605 }, 00:34:51.605 "memory_domains": [ 00:34:51.605 { 00:34:51.605 "dma_device_id": "system", 00:34:51.605 "dma_device_type": 1 00:34:51.605 }, 00:34:51.605 { 00:34:51.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.605 "dma_device_type": 2 00:34:51.605 } 00:34:51.605 ], 00:34:51.605 "driver_specific": {} 00:34:51.605 }' 00:34:51.605 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.605 21:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.864 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:51.864 21:48:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.864 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.864 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:51.864 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.864 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.122 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:52.122 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.122 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.122 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:52.122 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:52.380 [2024-07-15 21:48:25.605265] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:52.380 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:34:52.380 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:34:52.380 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:52.380 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:34:52.380 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:34:52.380 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.381 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:52.639 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:52.639 "name": "Existed_Raid", 00:34:52.639 "uuid": "2614568b-ac47-4fc3-9e0a-0d99bb90840e", 00:34:52.639 
"strip_size_kb": 64, 00:34:52.639 "state": "online", 00:34:52.639 "raid_level": "raid5f", 00:34:52.639 "superblock": true, 00:34:52.639 "num_base_bdevs": 4, 00:34:52.639 "num_base_bdevs_discovered": 3, 00:34:52.639 "num_base_bdevs_operational": 3, 00:34:52.639 "base_bdevs_list": [ 00:34:52.639 { 00:34:52.639 "name": null, 00:34:52.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.639 "is_configured": false, 00:34:52.639 "data_offset": 2048, 00:34:52.639 "data_size": 63488 00:34:52.639 }, 00:34:52.639 { 00:34:52.639 "name": "BaseBdev2", 00:34:52.639 "uuid": "a4107aba-75f3-4142-93d6-836666ec4fe5", 00:34:52.639 "is_configured": true, 00:34:52.639 "data_offset": 2048, 00:34:52.639 "data_size": 63488 00:34:52.639 }, 00:34:52.639 { 00:34:52.639 "name": "BaseBdev3", 00:34:52.639 "uuid": "e36128f5-f1be-4d8a-8660-1ebc91e05d83", 00:34:52.639 "is_configured": true, 00:34:52.639 "data_offset": 2048, 00:34:52.639 "data_size": 63488 00:34:52.639 }, 00:34:52.639 { 00:34:52.639 "name": "BaseBdev4", 00:34:52.639 "uuid": "72e621dc-8968-46d2-b520-207a1f4c79ee", 00:34:52.639 "is_configured": true, 00:34:52.639 "data_offset": 2048, 00:34:52.639 "data_size": 63488 00:34:52.639 } 00:34:52.639 ] 00:34:52.639 }' 00:34:52.639 21:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:52.639 21:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:53.574 21:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:34:53.574 21:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:53.574 21:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.574 21:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:53.574 21:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:53.574 21:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:53.574 21:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:53.833 [2024-07-15 21:48:27.069616] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:53.833 [2024-07-15 21:48:27.069869] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:53.833 [2024-07-15 21:48:27.176794] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:53.833 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:53.833 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:53.833 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.833 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:54.091 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:54.091 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:54.091 21:48:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:54.349 [2024-07-15 21:48:27.612091] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:54.607 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:54.607 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:54.607 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:54.607 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:54.607 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:54.607 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:54.607 21:48:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:34:54.864 [2024-07-15 21:48:28.156213] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:34:54.864 [2024-07-15 21:48:28.156343] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:55.122 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:55.381 BaseBdev2 00:34:55.381 21:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:34:55.381 21:48:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:55.381 21:48:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:55.381 21:48:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:55.381 21:48:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:55.381 21:48:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:55.381 21:48:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:55.639 21:48:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:55.898 [ 00:34:55.898 { 00:34:55.898 "name": "BaseBdev2", 00:34:55.898 "aliases": [ 00:34:55.898 "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c" 00:34:55.898 ], 00:34:55.898 "product_name": "Malloc disk", 00:34:55.898 "block_size": 512, 00:34:55.898 "num_blocks": 65536, 00:34:55.898 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:34:55.898 "assigned_rate_limits": { 00:34:55.898 "rw_ios_per_sec": 0, 00:34:55.898 "rw_mbytes_per_sec": 0, 00:34:55.898 "r_mbytes_per_sec": 0, 00:34:55.898 "w_mbytes_per_sec": 0 00:34:55.898 }, 00:34:55.898 "claimed": false, 00:34:55.898 "zoned": false, 00:34:55.898 "supported_io_types": { 00:34:55.898 "read": true, 00:34:55.898 "write": true, 00:34:55.898 "unmap": true, 00:34:55.898 "flush": true, 00:34:55.898 "reset": true, 00:34:55.898 "nvme_admin": false, 00:34:55.898 "nvme_io": false, 00:34:55.898 "nvme_io_md": false, 00:34:55.898 "write_zeroes": true, 00:34:55.898 "zcopy": true, 00:34:55.898 "get_zone_info": false, 00:34:55.898 "zone_management": false, 00:34:55.898 "zone_append": false, 00:34:55.898 "compare": false, 00:34:55.898 "compare_and_write": false, 00:34:55.898 "abort": true, 00:34:55.898 "seek_hole": false, 00:34:55.898 "seek_data": false, 00:34:55.898 "copy": true, 00:34:55.898 "nvme_iov_md": false 00:34:55.898 }, 00:34:55.898 "memory_domains": [ 00:34:55.898 { 00:34:55.898 "dma_device_id": "system", 00:34:55.898 "dma_device_type": 1 00:34:55.898 }, 00:34:55.898 { 00:34:55.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.899 "dma_device_type": 2 00:34:55.899 } 00:34:55.899 ], 00:34:55.899 "driver_specific": {} 00:34:55.899 } 00:34:55.899 ] 00:34:55.899 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:55.899 21:48:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:55.899 21:48:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:55.899 21:48:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:56.157 BaseBdev3 00:34:56.157 21:48:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:34:56.157 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:56.157 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:56.157 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:56.157 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:56.157 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:56.157 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:56.415 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:56.415 [ 00:34:56.415 { 00:34:56.415 "name": "BaseBdev3", 00:34:56.415 "aliases": [ 00:34:56.415 "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b" 00:34:56.415 ], 00:34:56.415 "product_name": "Malloc disk", 00:34:56.415 "block_size": 512, 00:34:56.415 "num_blocks": 65536, 00:34:56.415 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:34:56.415 "assigned_rate_limits": { 00:34:56.415 "rw_ios_per_sec": 0, 00:34:56.415 "rw_mbytes_per_sec": 0, 00:34:56.415 "r_mbytes_per_sec": 0, 00:34:56.415 "w_mbytes_per_sec": 0 00:34:56.415 }, 00:34:56.415 "claimed": false, 00:34:56.415 "zoned": false, 00:34:56.415 "supported_io_types": { 00:34:56.415 "read": true, 00:34:56.415 "write": true, 00:34:56.415 "unmap": true, 00:34:56.415 "flush": true, 00:34:56.415 "reset": true, 00:34:56.415 "nvme_admin": false, 00:34:56.415 "nvme_io": false, 00:34:56.415 "nvme_io_md": false, 00:34:56.415 "write_zeroes": true, 00:34:56.415 "zcopy": true, 00:34:56.415 "get_zone_info": false, 00:34:56.415 "zone_management": false, 00:34:56.415 "zone_append": false, 00:34:56.415 "compare": false, 00:34:56.415 "compare_and_write": false, 00:34:56.415 "abort": true, 00:34:56.415 "seek_hole": false, 00:34:56.415 "seek_data": false, 00:34:56.415 "copy": true, 00:34:56.415 "nvme_iov_md": false 00:34:56.415 }, 00:34:56.415 "memory_domains": [ 00:34:56.415 { 00:34:56.415 "dma_device_id": "system", 00:34:56.415 "dma_device_type": 1 00:34:56.415 }, 00:34:56.415 { 00:34:56.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:56.415 "dma_device_type": 2 00:34:56.415 } 00:34:56.415 ], 00:34:56.415 "driver_specific": {} 00:34:56.415 } 00:34:56.415 ] 00:34:56.415 21:48:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:56.415 21:48:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:56.415 21:48:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:56.415 21:48:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:56.672 BaseBdev4 00:34:56.929 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:34:56.929 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:34:56.929 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:56.929 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:56.929 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:56.929 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:56.929 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:56.929 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:57.187 [ 00:34:57.187 { 00:34:57.187 "name": "BaseBdev4", 00:34:57.187 "aliases": [ 00:34:57.187 "1df209b6-6fd4-4e98-ae3d-dbf9b2314446" 00:34:57.187 ], 00:34:57.187 "product_name": "Malloc disk", 00:34:57.187 
"block_size": 512, 00:34:57.187 "num_blocks": 65536, 00:34:57.187 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:34:57.187 "assigned_rate_limits": { 00:34:57.187 "rw_ios_per_sec": 0, 00:34:57.187 "rw_mbytes_per_sec": 0, 00:34:57.187 "r_mbytes_per_sec": 0, 00:34:57.187 "w_mbytes_per_sec": 0 00:34:57.187 }, 00:34:57.187 "claimed": false, 00:34:57.187 "zoned": false, 00:34:57.187 "supported_io_types": { 00:34:57.187 "read": true, 00:34:57.187 "write": true, 00:34:57.187 "unmap": true, 00:34:57.187 "flush": true, 00:34:57.187 "reset": true, 00:34:57.187 "nvme_admin": false, 00:34:57.187 "nvme_io": false, 00:34:57.187 "nvme_io_md": false, 00:34:57.187 "write_zeroes": true, 00:34:57.187 "zcopy": true, 00:34:57.187 "get_zone_info": false, 00:34:57.187 "zone_management": false, 00:34:57.187 "zone_append": false, 00:34:57.187 "compare": false, 00:34:57.187 "compare_and_write": false, 00:34:57.187 "abort": true, 00:34:57.187 "seek_hole": false, 00:34:57.187 "seek_data": false, 00:34:57.187 "copy": true, 00:34:57.187 "nvme_iov_md": false 00:34:57.187 }, 00:34:57.187 "memory_domains": [ 00:34:57.187 { 00:34:57.187 "dma_device_id": "system", 00:34:57.187 "dma_device_type": 1 00:34:57.187 }, 00:34:57.187 { 00:34:57.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:57.187 "dma_device_type": 2 00:34:57.187 } 00:34:57.187 ], 00:34:57.187 "driver_specific": {} 00:34:57.187 } 00:34:57.187 ] 00:34:57.187 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:57.187 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:57.187 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:57.187 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:57.445 [2024-07-15 21:48:30.744986] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:57.445 [2024-07-15 21:48:30.745139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:57.445 [2024-07-15 21:48:30.745211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:57.445 [2024-07-15 21:48:30.747146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:57.445 [2024-07-15 21:48:30.747254] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:57.445 21:48:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.445 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:57.704 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:57.704 "name": "Existed_Raid", 00:34:57.704 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:34:57.704 "strip_size_kb": 64, 00:34:57.704 "state": "configuring", 00:34:57.704 "raid_level": "raid5f", 00:34:57.704 "superblock": true, 00:34:57.704 "num_base_bdevs": 4, 00:34:57.704 "num_base_bdevs_discovered": 3, 00:34:57.704 "num_base_bdevs_operational": 4, 00:34:57.704 "base_bdevs_list": [ 00:34:57.704 { 00:34:57.704 "name": "BaseBdev1", 00:34:57.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.704 "is_configured": false, 00:34:57.704 "data_offset": 0, 00:34:57.704 "data_size": 0 00:34:57.704 }, 00:34:57.704 { 00:34:57.704 "name": "BaseBdev2", 00:34:57.704 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:34:57.704 "is_configured": true, 00:34:57.704 "data_offset": 2048, 00:34:57.704 "data_size": 63488 00:34:57.704 }, 00:34:57.704 { 00:34:57.704 "name": "BaseBdev3", 00:34:57.704 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:34:57.704 "is_configured": true, 00:34:57.704 "data_offset": 2048, 00:34:57.704 "data_size": 63488 00:34:57.704 }, 00:34:57.704 { 00:34:57.704 "name": "BaseBdev4", 00:34:57.704 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:34:57.704 "is_configured": true, 00:34:57.704 "data_offset": 2048, 00:34:57.704 "data_size": 63488 00:34:57.704 } 00:34:57.704 ] 00:34:57.704 }' 00:34:57.704 21:48:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:57.704 21:48:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:58.270 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:34:58.528 [2024-07-15 21:48:31.727340] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.528 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.792 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:58.792 "name": "Existed_Raid", 00:34:58.792 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:34:58.792 "strip_size_kb": 64, 00:34:58.792 "state": "configuring", 00:34:58.792 "raid_level": "raid5f", 00:34:58.792 "superblock": true, 00:34:58.792 "num_base_bdevs": 4, 00:34:58.792 "num_base_bdevs_discovered": 2, 00:34:58.792 "num_base_bdevs_operational": 4, 00:34:58.792 "base_bdevs_list": [ 00:34:58.792 { 00:34:58.792 "name": "BaseBdev1", 00:34:58.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.792 "is_configured": false, 00:34:58.792 "data_offset": 0, 00:34:58.792 "data_size": 0 00:34:58.792 }, 00:34:58.792 { 00:34:58.792 "name": null, 00:34:58.792 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:34:58.792 "is_configured": false, 00:34:58.792 "data_offset": 2048, 00:34:58.792 "data_size": 63488 00:34:58.792 }, 00:34:58.792 { 00:34:58.792 "name": "BaseBdev3", 00:34:58.792 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:34:58.792 "is_configured": true, 00:34:58.792 "data_offset": 2048, 00:34:58.792 "data_size": 63488 00:34:58.792 }, 00:34:58.792 { 00:34:58.792 "name": "BaseBdev4", 00:34:58.792 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:34:58.792 "is_configured": true, 00:34:58.792 "data_offset": 2048, 00:34:58.792 "data_size": 63488 00:34:58.792 } 00:34:58.792 ] 00:34:58.792 }' 00:34:58.792 21:48:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:58.792 21:48:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:59.364 21:48:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:59.364 21:48:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:59.623 21:48:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:34:59.623 21:48:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:59.881 [2024-07-15 21:48:33.020212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:59.881 BaseBdev1 00:34:59.881 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:34:59.881 21:48:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:59.881 21:48:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:59.881 21:48:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:59.881 21:48:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:59.881 21:48:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:59.881 21:48:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:59.881 21:48:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:00.139 [ 00:35:00.139 { 00:35:00.139 "name": "BaseBdev1", 00:35:00.139 "aliases": [ 00:35:00.139 "6e845dfa-5cf4-43d4-8722-e46e33ed7d82" 00:35:00.139 ], 00:35:00.139 "product_name": "Malloc disk", 00:35:00.139 "block_size": 512, 00:35:00.139 "num_blocks": 65536, 00:35:00.139 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:00.139 "assigned_rate_limits": { 00:35:00.139 "rw_ios_per_sec": 0, 00:35:00.139 "rw_mbytes_per_sec": 0, 00:35:00.139 "r_mbytes_per_sec": 0, 00:35:00.139 "w_mbytes_per_sec": 0 00:35:00.139 }, 00:35:00.139 "claimed": true, 00:35:00.139 "claim_type": "exclusive_write", 00:35:00.139 "zoned": false, 00:35:00.139 "supported_io_types": { 00:35:00.139 "read": true, 00:35:00.139 "write": true, 00:35:00.139 "unmap": true, 00:35:00.139 "flush": true, 00:35:00.139 "reset": true, 00:35:00.139 "nvme_admin": false, 00:35:00.139 "nvme_io": false, 00:35:00.139 "nvme_io_md": false, 00:35:00.139 "write_zeroes": true, 00:35:00.139 "zcopy": true, 00:35:00.139 "get_zone_info": false, 00:35:00.139 "zone_management": false, 00:35:00.139 "zone_append": false, 00:35:00.139 "compare": false, 00:35:00.139 "compare_and_write": false, 00:35:00.139 "abort": true, 00:35:00.139 "seek_hole": false, 00:35:00.139 "seek_data": false, 00:35:00.139 "copy": true, 00:35:00.139 "nvme_iov_md": false 00:35:00.139 }, 00:35:00.139 "memory_domains": [ 00:35:00.139 { 00:35:00.139 "dma_device_id": "system", 00:35:00.139 "dma_device_type": 1 00:35:00.139 }, 00:35:00.139 { 00:35:00.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.139 "dma_device_type": 2 00:35:00.139 } 00:35:00.139 ], 00:35:00.139 "driver_specific": {} 00:35:00.139 } 00:35:00.139 ] 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.139 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.397 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:00.397 "name": "Existed_Raid", 00:35:00.397 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:35:00.397 "strip_size_kb": 64, 00:35:00.397 "state": "configuring", 00:35:00.397 "raid_level": "raid5f", 00:35:00.397 "superblock": true, 00:35:00.397 "num_base_bdevs": 4, 00:35:00.397 "num_base_bdevs_discovered": 3, 00:35:00.397 "num_base_bdevs_operational": 4, 00:35:00.397 "base_bdevs_list": [ 00:35:00.397 { 00:35:00.397 "name": "BaseBdev1", 00:35:00.397 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:00.397 "is_configured": true, 00:35:00.397 "data_offset": 2048, 00:35:00.397 "data_size": 63488 00:35:00.397 }, 00:35:00.397 { 00:35:00.397 "name": null, 00:35:00.397 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:35:00.397 "is_configured": false, 00:35:00.397 "data_offset": 2048, 00:35:00.397 "data_size": 63488 00:35:00.397 }, 00:35:00.397 { 00:35:00.397 "name": "BaseBdev3", 00:35:00.397 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:35:00.397 "is_configured": true, 00:35:00.397 "data_offset": 2048, 00:35:00.397 "data_size": 63488 00:35:00.397 }, 00:35:00.397 { 00:35:00.397 "name": "BaseBdev4", 00:35:00.397 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:35:00.397 "is_configured": true, 00:35:00.397 "data_offset": 2048, 00:35:00.397 "data_size": 63488 00:35:00.397 } 00:35:00.397 ] 00:35:00.397 }' 00:35:00.397 21:48:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:00.397 21:48:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:00.962 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.962 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:01.221 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:35:01.221 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:35:01.479 [2024-07-15 21:48:34.645483] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:01.479 21:48:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.479 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:01.738 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:01.738 "name": "Existed_Raid", 00:35:01.738 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:35:01.738 "strip_size_kb": 64, 00:35:01.738 "state": "configuring", 00:35:01.738 "raid_level": "raid5f", 00:35:01.738 "superblock": true, 00:35:01.738 "num_base_bdevs": 4, 00:35:01.738 "num_base_bdevs_discovered": 2, 00:35:01.738 "num_base_bdevs_operational": 4, 00:35:01.738 "base_bdevs_list": [ 00:35:01.738 { 00:35:01.738 "name": "BaseBdev1", 00:35:01.738 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:01.738 "is_configured": true, 00:35:01.738 "data_offset": 2048, 00:35:01.738 "data_size": 63488 00:35:01.738 }, 00:35:01.738 { 00:35:01.738 "name": null, 00:35:01.738 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:35:01.738 "is_configured": false, 00:35:01.738 "data_offset": 2048, 00:35:01.738 "data_size": 63488 00:35:01.738 }, 00:35:01.738 { 00:35:01.738 "name": null, 00:35:01.738 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:35:01.738 "is_configured": false, 00:35:01.738 "data_offset": 2048, 00:35:01.738 "data_size": 63488 00:35:01.738 }, 00:35:01.738 { 00:35:01.738 "name": "BaseBdev4", 00:35:01.738 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:35:01.738 "is_configured": true, 00:35:01.738 "data_offset": 2048, 00:35:01.738 "data_size": 63488 00:35:01.738 } 00:35:01.738 ] 00:35:01.738 }' 00:35:01.738 21:48:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:01.738 21:48:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:02.305 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.305 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:02.564 [2024-07-15 21:48:35.887484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.564 21:48:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:02.822 21:48:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:02.822 "name": "Existed_Raid", 00:35:02.822 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:35:02.822 "strip_size_kb": 64, 00:35:02.822 "state": "configuring", 00:35:02.822 "raid_level": "raid5f", 00:35:02.822 "superblock": true, 00:35:02.822 "num_base_bdevs": 4, 00:35:02.822 "num_base_bdevs_discovered": 3, 00:35:02.822 "num_base_bdevs_operational": 4, 00:35:02.822 "base_bdevs_list": [ 00:35:02.822 { 00:35:02.822 "name": "BaseBdev1", 00:35:02.822 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:02.822 "is_configured": true, 00:35:02.822 "data_offset": 2048, 00:35:02.822 "data_size": 63488 00:35:02.822 }, 00:35:02.822 { 00:35:02.822 "name": null, 00:35:02.822 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:35:02.822 "is_configured": false, 00:35:02.822 "data_offset": 2048, 00:35:02.822 "data_size": 63488 00:35:02.822 }, 00:35:02.822 { 00:35:02.822 "name": "BaseBdev3", 00:35:02.822 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:35:02.822 "is_configured": true, 00:35:02.822 "data_offset": 2048, 00:35:02.822 "data_size": 63488 00:35:02.822 }, 00:35:02.822 { 00:35:02.822 "name": "BaseBdev4", 00:35:02.822 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:35:02.822 "is_configured": true, 00:35:02.822 "data_offset": 2048, 00:35:02.822 "data_size": 63488 00:35:02.822 } 00:35:02.822 ] 00:35:02.822 }' 00:35:02.822 21:48:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:02.822 21:48:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:03.757 21:48:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.757 21:48:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:03.757 21:48:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:35:03.757 21:48:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:04.017 [2024-07-15 21:48:37.165330] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:04.017 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.276 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:04.276 "name": "Existed_Raid", 00:35:04.276 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:35:04.276 "strip_size_kb": 64, 00:35:04.276 "state": "configuring", 00:35:04.276 "raid_level": "raid5f", 00:35:04.276 "superblock": true, 00:35:04.276 "num_base_bdevs": 4, 00:35:04.276 "num_base_bdevs_discovered": 2, 00:35:04.276 "num_base_bdevs_operational": 4, 00:35:04.276 "base_bdevs_list": [ 00:35:04.276 { 00:35:04.276 "name": null, 00:35:04.276 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:04.276 "is_configured": false, 00:35:04.276 "data_offset": 2048, 00:35:04.276 "data_size": 63488 00:35:04.276 }, 00:35:04.276 { 00:35:04.276 "name": null, 00:35:04.276 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:35:04.276 "is_configured": false, 00:35:04.276 "data_offset": 2048, 00:35:04.276 "data_size": 63488 00:35:04.276 }, 00:35:04.276 { 00:35:04.276 "name": "BaseBdev3", 00:35:04.276 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:35:04.276 "is_configured": true, 00:35:04.276 "data_offset": 2048, 00:35:04.276 "data_size": 63488 00:35:04.276 }, 00:35:04.276 { 00:35:04.276 "name": "BaseBdev4", 00:35:04.276 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:35:04.276 "is_configured": true, 00:35:04.276 "data_offset": 2048, 00:35:04.276 "data_size": 63488 00:35:04.276 } 00:35:04.276 ] 00:35:04.276 }' 00:35:04.276 21:48:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:04.276 21:48:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.843 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:04.843 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:05.102 21:48:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:35:05.102 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:05.361 [2024-07-15 21:48:38.523870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:05.361 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:05.361 "name": "Existed_Raid", 00:35:05.361 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:35:05.361 "strip_size_kb": 64, 00:35:05.361 "state": "configuring", 00:35:05.361 "raid_level": "raid5f", 00:35:05.361 "superblock": true, 00:35:05.361 "num_base_bdevs": 4, 00:35:05.361 "num_base_bdevs_discovered": 3, 00:35:05.361 "num_base_bdevs_operational": 4, 00:35:05.361 "base_bdevs_list": [ 00:35:05.361 { 00:35:05.361 "name": null, 00:35:05.362 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:05.362 "is_configured": false, 00:35:05.362 "data_offset": 2048, 00:35:05.362 "data_size": 63488 00:35:05.362 }, 00:35:05.362 { 00:35:05.362 "name": "BaseBdev2", 00:35:05.362 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:35:05.362 "is_configured": true, 00:35:05.362 "data_offset": 2048, 00:35:05.362 "data_size": 63488 00:35:05.362 }, 00:35:05.362 { 00:35:05.362 "name": "BaseBdev3", 00:35:05.362 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:35:05.362 "is_configured": true, 00:35:05.362 "data_offset": 2048, 00:35:05.362 "data_size": 63488 00:35:05.362 }, 00:35:05.362 { 00:35:05.362 "name": "BaseBdev4", 00:35:05.362 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:35:05.362 "is_configured": true, 00:35:05.362 "data_offset": 2048, 00:35:05.362 "data_size": 63488 00:35:05.362 } 00:35:05.362 ] 00:35:05.362 }' 00:35:05.362 21:48:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:05.362 21:48:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.300 21:48:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.300 21:48:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:06.300 21:48:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:35:06.300 21:48:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.300 21:48:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:06.559 21:48:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6e845dfa-5cf4-43d4-8722-e46e33ed7d82 00:35:06.819 [2024-07-15 21:48:39.996238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:06.819 [2024-07-15 21:48:39.996560] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:35:06.819 [2024-07-15 21:48:39.996608] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:06.819 [2024-07-15 21:48:39.996750] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:06.819 NewBaseBdev 00:35:06.819 [2024-07-15 21:48:40.004504] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:35:06.819 [2024-07-15 21:48:40.004570] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:35:06.819 [2024-07-15 21:48:40.004784] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:06.819 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:35:06.819 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:35:06.819 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:06.819 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:35:06.819 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:06.819 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:06.819 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:07.079 [ 00:35:07.079 { 00:35:07.079 "name": "NewBaseBdev", 00:35:07.079 "aliases": [ 00:35:07.079 "6e845dfa-5cf4-43d4-8722-e46e33ed7d82" 00:35:07.079 ], 00:35:07.079 "product_name": "Malloc disk", 00:35:07.079 "block_size": 512, 00:35:07.079 "num_blocks": 65536, 00:35:07.079 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:07.079 "assigned_rate_limits": { 00:35:07.079 "rw_ios_per_sec": 0, 00:35:07.079 "rw_mbytes_per_sec": 
0, 00:35:07.079 "r_mbytes_per_sec": 0, 00:35:07.079 "w_mbytes_per_sec": 0 00:35:07.079 }, 00:35:07.079 "claimed": true, 00:35:07.079 "claim_type": "exclusive_write", 00:35:07.079 "zoned": false, 00:35:07.079 "supported_io_types": { 00:35:07.079 "read": true, 00:35:07.079 "write": true, 00:35:07.079 "unmap": true, 00:35:07.079 "flush": true, 00:35:07.079 "reset": true, 00:35:07.079 "nvme_admin": false, 00:35:07.079 "nvme_io": false, 00:35:07.079 "nvme_io_md": false, 00:35:07.079 "write_zeroes": true, 00:35:07.079 "zcopy": true, 00:35:07.079 "get_zone_info": false, 00:35:07.079 "zone_management": false, 00:35:07.079 "zone_append": false, 00:35:07.079 "compare": false, 00:35:07.079 "compare_and_write": false, 00:35:07.079 "abort": true, 00:35:07.079 "seek_hole": false, 00:35:07.079 "seek_data": false, 00:35:07.079 "copy": true, 00:35:07.079 "nvme_iov_md": false 00:35:07.079 }, 00:35:07.079 "memory_domains": [ 00:35:07.079 { 00:35:07.079 "dma_device_id": "system", 00:35:07.079 "dma_device_type": 1 00:35:07.079 }, 00:35:07.079 { 00:35:07.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:07.079 "dma_device_type": 2 00:35:07.079 } 00:35:07.079 ], 00:35:07.079 "driver_specific": {} 00:35:07.079 } 00:35:07.079 ] 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.079 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.338 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:07.338 "name": "Existed_Raid", 00:35:07.338 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:35:07.338 "strip_size_kb": 64, 00:35:07.338 "state": "online", 00:35:07.338 "raid_level": "raid5f", 00:35:07.338 "superblock": true, 00:35:07.338 "num_base_bdevs": 4, 00:35:07.338 "num_base_bdevs_discovered": 4, 00:35:07.338 "num_base_bdevs_operational": 4, 00:35:07.338 "base_bdevs_list": [ 00:35:07.338 { 00:35:07.338 "name": "NewBaseBdev", 00:35:07.338 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:07.338 "is_configured": true, 00:35:07.338 "data_offset": 2048, 
00:35:07.338 "data_size": 63488 00:35:07.338 }, 00:35:07.338 { 00:35:07.339 "name": "BaseBdev2", 00:35:07.339 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:35:07.339 "is_configured": true, 00:35:07.339 "data_offset": 2048, 00:35:07.339 "data_size": 63488 00:35:07.339 }, 00:35:07.339 { 00:35:07.339 "name": "BaseBdev3", 00:35:07.339 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:35:07.339 "is_configured": true, 00:35:07.339 "data_offset": 2048, 00:35:07.339 "data_size": 63488 00:35:07.339 }, 00:35:07.339 { 00:35:07.339 "name": "BaseBdev4", 00:35:07.339 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:35:07.339 "is_configured": true, 00:35:07.339 "data_offset": 2048, 00:35:07.339 "data_size": 63488 00:35:07.339 } 00:35:07.339 ] 00:35:07.339 }' 00:35:07.339 21:48:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:07.339 21:48:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.908 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:35:07.908 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:35:07.908 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:07.908 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:07.908 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:07.908 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:35:07.908 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:07.908 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:08.168 [2024-07-15 21:48:41.480292] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:08.168 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:08.168 "name": "Existed_Raid", 00:35:08.168 "aliases": [ 00:35:08.168 "d1bb69d4-6bf9-43d4-b846-a606c457636d" 00:35:08.168 ], 00:35:08.168 "product_name": "Raid Volume", 00:35:08.168 "block_size": 512, 00:35:08.168 "num_blocks": 190464, 00:35:08.168 "uuid": "d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:35:08.168 "assigned_rate_limits": { 00:35:08.168 "rw_ios_per_sec": 0, 00:35:08.168 "rw_mbytes_per_sec": 0, 00:35:08.168 "r_mbytes_per_sec": 0, 00:35:08.168 "w_mbytes_per_sec": 0 00:35:08.168 }, 00:35:08.168 "claimed": false, 00:35:08.168 "zoned": false, 00:35:08.168 "supported_io_types": { 00:35:08.168 "read": true, 00:35:08.168 "write": true, 00:35:08.168 "unmap": false, 00:35:08.168 "flush": false, 00:35:08.168 "reset": true, 00:35:08.168 "nvme_admin": false, 00:35:08.168 "nvme_io": false, 00:35:08.168 "nvme_io_md": false, 00:35:08.168 "write_zeroes": true, 00:35:08.168 "zcopy": false, 00:35:08.168 "get_zone_info": false, 00:35:08.168 "zone_management": false, 00:35:08.168 "zone_append": false, 00:35:08.168 "compare": false, 00:35:08.168 "compare_and_write": false, 00:35:08.168 "abort": false, 00:35:08.168 "seek_hole": false, 00:35:08.168 "seek_data": false, 00:35:08.168 "copy": false, 00:35:08.168 "nvme_iov_md": false 00:35:08.168 }, 00:35:08.168 "driver_specific": { 00:35:08.168 "raid": { 00:35:08.168 "uuid": 
"d1bb69d4-6bf9-43d4-b846-a606c457636d", 00:35:08.168 "strip_size_kb": 64, 00:35:08.168 "state": "online", 00:35:08.168 "raid_level": "raid5f", 00:35:08.168 "superblock": true, 00:35:08.168 "num_base_bdevs": 4, 00:35:08.168 "num_base_bdevs_discovered": 4, 00:35:08.168 "num_base_bdevs_operational": 4, 00:35:08.168 "base_bdevs_list": [ 00:35:08.168 { 00:35:08.168 "name": "NewBaseBdev", 00:35:08.168 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:08.168 "is_configured": true, 00:35:08.168 "data_offset": 2048, 00:35:08.168 "data_size": 63488 00:35:08.168 }, 00:35:08.169 { 00:35:08.169 "name": "BaseBdev2", 00:35:08.169 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:35:08.169 "is_configured": true, 00:35:08.169 "data_offset": 2048, 00:35:08.169 "data_size": 63488 00:35:08.169 }, 00:35:08.169 { 00:35:08.169 "name": "BaseBdev3", 00:35:08.169 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:35:08.169 "is_configured": true, 00:35:08.169 "data_offset": 2048, 00:35:08.169 "data_size": 63488 00:35:08.169 }, 00:35:08.169 { 00:35:08.169 "name": "BaseBdev4", 00:35:08.169 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:35:08.169 "is_configured": true, 00:35:08.169 "data_offset": 2048, 00:35:08.169 "data_size": 63488 00:35:08.169 } 00:35:08.169 ] 00:35:08.169 } 00:35:08.169 } 00:35:08.169 }' 00:35:08.169 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:08.169 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:35:08.169 BaseBdev2 00:35:08.169 BaseBdev3 00:35:08.169 BaseBdev4' 00:35:08.169 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:08.169 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:08.169 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:35:08.428 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:08.428 "name": "NewBaseBdev", 00:35:08.428 "aliases": [ 00:35:08.428 "6e845dfa-5cf4-43d4-8722-e46e33ed7d82" 00:35:08.428 ], 00:35:08.428 "product_name": "Malloc disk", 00:35:08.428 "block_size": 512, 00:35:08.428 "num_blocks": 65536, 00:35:08.428 "uuid": "6e845dfa-5cf4-43d4-8722-e46e33ed7d82", 00:35:08.428 "assigned_rate_limits": { 00:35:08.428 "rw_ios_per_sec": 0, 00:35:08.428 "rw_mbytes_per_sec": 0, 00:35:08.428 "r_mbytes_per_sec": 0, 00:35:08.428 "w_mbytes_per_sec": 0 00:35:08.428 }, 00:35:08.428 "claimed": true, 00:35:08.428 "claim_type": "exclusive_write", 00:35:08.428 "zoned": false, 00:35:08.428 "supported_io_types": { 00:35:08.428 "read": true, 00:35:08.428 "write": true, 00:35:08.428 "unmap": true, 00:35:08.428 "flush": true, 00:35:08.428 "reset": true, 00:35:08.428 "nvme_admin": false, 00:35:08.428 "nvme_io": false, 00:35:08.428 "nvme_io_md": false, 00:35:08.428 "write_zeroes": true, 00:35:08.428 "zcopy": true, 00:35:08.428 "get_zone_info": false, 00:35:08.428 "zone_management": false, 00:35:08.428 "zone_append": false, 00:35:08.428 "compare": false, 00:35:08.428 "compare_and_write": false, 00:35:08.428 "abort": true, 00:35:08.428 "seek_hole": false, 00:35:08.428 "seek_data": false, 00:35:08.428 "copy": true, 00:35:08.428 "nvme_iov_md": false 00:35:08.428 }, 00:35:08.428 "memory_domains": [ 00:35:08.428 { 
00:35:08.428 "dma_device_id": "system", 00:35:08.428 "dma_device_type": 1 00:35:08.428 }, 00:35:08.428 { 00:35:08.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:08.428 "dma_device_type": 2 00:35:08.428 } 00:35:08.428 ], 00:35:08.428 "driver_specific": {} 00:35:08.428 }' 00:35:08.428 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:08.428 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:08.688 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:08.688 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:08.688 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:08.688 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:08.688 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:08.688 21:48:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:08.688 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:08.688 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:08.947 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:08.947 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:08.947 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:08.947 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:08.947 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:09.207 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:09.207 "name": "BaseBdev2", 00:35:09.207 "aliases": [ 00:35:09.207 "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c" 00:35:09.207 ], 00:35:09.207 "product_name": "Malloc disk", 00:35:09.207 "block_size": 512, 00:35:09.207 "num_blocks": 65536, 00:35:09.207 "uuid": "1e8eabfb-8bcb-4f46-893a-e7774dde4e1c", 00:35:09.207 "assigned_rate_limits": { 00:35:09.207 "rw_ios_per_sec": 0, 00:35:09.207 "rw_mbytes_per_sec": 0, 00:35:09.207 "r_mbytes_per_sec": 0, 00:35:09.207 "w_mbytes_per_sec": 0 00:35:09.207 }, 00:35:09.207 "claimed": true, 00:35:09.207 "claim_type": "exclusive_write", 00:35:09.207 "zoned": false, 00:35:09.207 "supported_io_types": { 00:35:09.207 "read": true, 00:35:09.207 "write": true, 00:35:09.207 "unmap": true, 00:35:09.208 "flush": true, 00:35:09.208 "reset": true, 00:35:09.208 "nvme_admin": false, 00:35:09.208 "nvme_io": false, 00:35:09.208 "nvme_io_md": false, 00:35:09.208 "write_zeroes": true, 00:35:09.208 "zcopy": true, 00:35:09.208 "get_zone_info": false, 00:35:09.208 "zone_management": false, 00:35:09.208 "zone_append": false, 00:35:09.208 "compare": false, 00:35:09.208 "compare_and_write": false, 00:35:09.208 "abort": true, 00:35:09.208 "seek_hole": false, 00:35:09.208 "seek_data": false, 00:35:09.208 "copy": true, 00:35:09.208 "nvme_iov_md": false 00:35:09.208 }, 00:35:09.208 "memory_domains": [ 00:35:09.208 { 00:35:09.208 "dma_device_id": "system", 00:35:09.208 "dma_device_type": 1 00:35:09.208 }, 00:35:09.208 { 00:35:09.208 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:09.208 "dma_device_type": 2 00:35:09.208 } 00:35:09.208 ], 00:35:09.208 "driver_specific": {} 00:35:09.208 }' 00:35:09.208 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:09.208 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:09.208 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:09.208 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:09.208 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:09.208 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:09.208 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:09.467 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:09.467 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:09.467 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:09.467 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:09.467 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:09.467 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:09.468 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:35:09.468 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:09.728 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:09.728 "name": "BaseBdev3", 00:35:09.728 "aliases": [ 00:35:09.728 "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b" 00:35:09.728 ], 00:35:09.728 "product_name": "Malloc disk", 00:35:09.728 "block_size": 512, 00:35:09.728 "num_blocks": 65536, 00:35:09.728 "uuid": "93f8e809-6d27-4abf-8d9e-9c97e5cdcd0b", 00:35:09.728 "assigned_rate_limits": { 00:35:09.728 "rw_ios_per_sec": 0, 00:35:09.728 "rw_mbytes_per_sec": 0, 00:35:09.728 "r_mbytes_per_sec": 0, 00:35:09.728 "w_mbytes_per_sec": 0 00:35:09.728 }, 00:35:09.728 "claimed": true, 00:35:09.728 "claim_type": "exclusive_write", 00:35:09.728 "zoned": false, 00:35:09.728 "supported_io_types": { 00:35:09.728 "read": true, 00:35:09.728 "write": true, 00:35:09.728 "unmap": true, 00:35:09.728 "flush": true, 00:35:09.728 "reset": true, 00:35:09.728 "nvme_admin": false, 00:35:09.728 "nvme_io": false, 00:35:09.728 "nvme_io_md": false, 00:35:09.728 "write_zeroes": true, 00:35:09.728 "zcopy": true, 00:35:09.728 "get_zone_info": false, 00:35:09.728 "zone_management": false, 00:35:09.728 "zone_append": false, 00:35:09.728 "compare": false, 00:35:09.728 "compare_and_write": false, 00:35:09.728 "abort": true, 00:35:09.728 "seek_hole": false, 00:35:09.728 "seek_data": false, 00:35:09.728 "copy": true, 00:35:09.728 "nvme_iov_md": false 00:35:09.728 }, 00:35:09.728 "memory_domains": [ 00:35:09.728 { 00:35:09.728 "dma_device_id": "system", 00:35:09.728 "dma_device_type": 1 00:35:09.728 }, 00:35:09.728 { 00:35:09.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:09.728 "dma_device_type": 2 00:35:09.728 } 00:35:09.728 ], 00:35:09.728 
"driver_specific": {} 00:35:09.728 }' 00:35:09.728 21:48:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:09.728 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:09.728 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:09.728 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:09.987 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:09.987 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:09.987 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:09.987 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:09.987 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:09.987 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:09.987 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:10.259 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:10.259 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:10.259 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:35:10.259 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:10.259 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:10.259 "name": "BaseBdev4", 00:35:10.259 "aliases": [ 00:35:10.259 "1df209b6-6fd4-4e98-ae3d-dbf9b2314446" 00:35:10.259 ], 00:35:10.259 "product_name": "Malloc disk", 00:35:10.259 "block_size": 512, 00:35:10.259 "num_blocks": 65536, 00:35:10.259 "uuid": "1df209b6-6fd4-4e98-ae3d-dbf9b2314446", 00:35:10.259 "assigned_rate_limits": { 00:35:10.259 "rw_ios_per_sec": 0, 00:35:10.259 "rw_mbytes_per_sec": 0, 00:35:10.259 "r_mbytes_per_sec": 0, 00:35:10.259 "w_mbytes_per_sec": 0 00:35:10.259 }, 00:35:10.259 "claimed": true, 00:35:10.259 "claim_type": "exclusive_write", 00:35:10.259 "zoned": false, 00:35:10.259 "supported_io_types": { 00:35:10.259 "read": true, 00:35:10.259 "write": true, 00:35:10.259 "unmap": true, 00:35:10.259 "flush": true, 00:35:10.259 "reset": true, 00:35:10.259 "nvme_admin": false, 00:35:10.259 "nvme_io": false, 00:35:10.259 "nvme_io_md": false, 00:35:10.259 "write_zeroes": true, 00:35:10.259 "zcopy": true, 00:35:10.259 "get_zone_info": false, 00:35:10.259 "zone_management": false, 00:35:10.259 "zone_append": false, 00:35:10.259 "compare": false, 00:35:10.259 "compare_and_write": false, 00:35:10.259 "abort": true, 00:35:10.259 "seek_hole": false, 00:35:10.259 "seek_data": false, 00:35:10.259 "copy": true, 00:35:10.259 "nvme_iov_md": false 00:35:10.259 }, 00:35:10.259 "memory_domains": [ 00:35:10.259 { 00:35:10.259 "dma_device_id": "system", 00:35:10.259 "dma_device_type": 1 00:35:10.259 }, 00:35:10.259 { 00:35:10.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.259 "dma_device_type": 2 00:35:10.259 } 00:35:10.259 ], 00:35:10.259 "driver_specific": {} 00:35:10.259 }' 00:35:10.259 21:48:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:10.517 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:10.517 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:10.517 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:10.517 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:10.517 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:10.517 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:10.517 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:10.777 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:10.777 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:10.777 21:48:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:10.777 21:48:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:10.777 21:48:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:11.035 [2024-07-15 21:48:44.207584] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:11.035 [2024-07-15 21:48:44.207695] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:11.035 [2024-07-15 21:48:44.207819] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:11.035 [2024-07-15 21:48:44.208152] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:11.035 [2024-07-15 21:48:44.208193] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 156958 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 156958 ']' 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 156958 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156958 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156958' 00:35:11.035 killing process with pid 156958 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 156958 00:35:11.035 21:48:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 156958 00:35:11.035 [2024-07-15 21:48:44.249192] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:35:11.295 [2024-07-15 21:48:44.655881] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:12.671 21:48:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:35:12.671 ************************************ 00:35:12.671 END TEST raid5f_state_function_test_sb 00:35:12.671 ************************************ 00:35:12.671 00:35:12.671 real 0m33.619s 00:35:12.671 user 1m2.237s 00:35:12.671 sys 0m4.187s 00:35:12.671 21:48:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:12.671 21:48:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.671 21:48:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:12.671 21:48:45 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:35:12.671 21:48:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:35:12.671 21:48:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:12.671 21:48:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:12.671 ************************************ 00:35:12.671 START TEST raid5f_superblock_test 00:35:12.671 ************************************ 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 4 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:35:12.671 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=158086 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 158086 /var/tmp/spdk-raid.sock 
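The raid5f_superblock_test starting here drives a fixed RPC sequence against the bdev_svc app listening on /var/tmp/spdk-raid.sock: create four malloc bdevs, wrap each in a passthru bdev (pt1..pt4), assemble them into a raid5f volume with an on-disk superblock, then query the result. The following is a condensed, illustrative sketch of that sequence; the rpc.py path, socket, sizes and flags are taken from the xtrace that follows, while the loop is a simplification for readability and is not the test's own code.

#!/usr/bin/env bash
set -euo pipefail

rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)

# Four 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each, matching
# the num_blocks seen in the dumps), each wrapped in a passthru bdev that the
# RAID volume will later claim as a base device.
for i in 1 2 3 4; do
    "${rpc[@]}" bdev_malloc_create 32 512 -b "malloc$i"
    "${rpc[@]}" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# raid5f volume over pt1..pt4 with a 64 KiB strip size (-z 64) and the
# superblock enabled (-s), as in the test's bdev_raid_create call.
"${rpc[@]}" bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# Inspect the assembled volume the same way the test does.
"${rpc[@]}" bdev_raid_get_bdevs all
"${rpc[@]}" bdev_get_bdevs -b raid_bdev1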
00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 158086 ']' 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:12.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:12.672 21:48:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:12.672 [2024-07-15 21:48:46.027312] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:35:12.672 [2024-07-15 21:48:46.027529] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158086 ] 00:35:12.930 [2024-07-15 21:48:46.186816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.188 [2024-07-15 21:48:46.391560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.446 [2024-07-15 21:48:46.577598] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:13.705 21:48:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:35:13.705 malloc1 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:13.964 [2024-07-15 21:48:47.268282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:13.964 [2024-07-15 21:48:47.268443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:13.964 [2024-07-15 21:48:47.268489] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000006f80 00:35:13.964 [2024-07-15 21:48:47.268527] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:13.964 [2024-07-15 21:48:47.270490] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:13.964 [2024-07-15 21:48:47.270592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:13.964 pt1 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:13.964 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:35:14.223 malloc2 00:35:14.223 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:14.482 [2024-07-15 21:48:47.704760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:14.482 [2024-07-15 21:48:47.704956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:14.482 [2024-07-15 21:48:47.705007] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:35:14.482 [2024-07-15 21:48:47.705042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:14.482 [2024-07-15 21:48:47.707015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:14.482 [2024-07-15 21:48:47.707094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:14.482 pt2 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:14.482 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:35:14.740 malloc3 00:35:14.740 21:48:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:14.999 [2024-07-15 21:48:48.141483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:14.999 [2024-07-15 21:48:48.141650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:14.999 [2024-07-15 21:48:48.141695] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:35:14.999 [2024-07-15 21:48:48.141733] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:14.999 [2024-07-15 21:48:48.143696] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:14.999 [2024-07-15 21:48:48.143773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:14.999 pt3 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:14.999 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:35:14.999 malloc4 00:35:15.257 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:15.257 [2024-07-15 21:48:48.541351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:15.257 [2024-07-15 21:48:48.541515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:15.257 [2024-07-15 21:48:48.541560] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:15.257 [2024-07-15 21:48:48.541598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:15.257 [2024-07-15 21:48:48.543674] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:15.257 [2024-07-15 21:48:48.543764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:15.257 pt4 00:35:15.257 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:15.257 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:15.257 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f 
-b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:35:15.517 [2024-07-15 21:48:48.725083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:15.517 [2024-07-15 21:48:48.726847] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:15.517 [2024-07-15 21:48:48.726968] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:15.517 [2024-07-15 21:48:48.727039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:15.517 [2024-07-15 21:48:48.727316] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:35:15.517 [2024-07-15 21:48:48.727362] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:15.517 [2024-07-15 21:48:48.727522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:35:15.517 [2024-07-15 21:48:48.734806] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:35:15.517 [2024-07-15 21:48:48.734859] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:35:15.517 [2024-07-15 21:48:48.735034] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.517 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.776 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:15.776 "name": "raid_bdev1", 00:35:15.776 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:15.776 "strip_size_kb": 64, 00:35:15.776 "state": "online", 00:35:15.776 "raid_level": "raid5f", 00:35:15.776 "superblock": true, 00:35:15.776 "num_base_bdevs": 4, 00:35:15.776 "num_base_bdevs_discovered": 4, 00:35:15.776 "num_base_bdevs_operational": 4, 00:35:15.776 "base_bdevs_list": [ 00:35:15.776 { 00:35:15.776 "name": "pt1", 00:35:15.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:15.776 "is_configured": true, 00:35:15.776 "data_offset": 2048, 00:35:15.776 "data_size": 63488 00:35:15.776 }, 00:35:15.776 { 00:35:15.776 "name": "pt2", 00:35:15.776 "uuid": "00000000-0000-0000-0000-000000000002", 
00:35:15.776 "is_configured": true, 00:35:15.776 "data_offset": 2048, 00:35:15.776 "data_size": 63488 00:35:15.776 }, 00:35:15.776 { 00:35:15.776 "name": "pt3", 00:35:15.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:15.776 "is_configured": true, 00:35:15.776 "data_offset": 2048, 00:35:15.776 "data_size": 63488 00:35:15.776 }, 00:35:15.776 { 00:35:15.776 "name": "pt4", 00:35:15.776 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:15.776 "is_configured": true, 00:35:15.776 "data_offset": 2048, 00:35:15.776 "data_size": 63488 00:35:15.776 } 00:35:15.776 ] 00:35:15.776 }' 00:35:15.776 21:48:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:15.776 21:48:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.347 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:16.348 [2024-07-15 21:48:49.689866] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:16.348 "name": "raid_bdev1", 00:35:16.348 "aliases": [ 00:35:16.348 "18323226-4e01-4217-9582-7958d9fe2a48" 00:35:16.348 ], 00:35:16.348 "product_name": "Raid Volume", 00:35:16.348 "block_size": 512, 00:35:16.348 "num_blocks": 190464, 00:35:16.348 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:16.348 "assigned_rate_limits": { 00:35:16.348 "rw_ios_per_sec": 0, 00:35:16.348 "rw_mbytes_per_sec": 0, 00:35:16.348 "r_mbytes_per_sec": 0, 00:35:16.348 "w_mbytes_per_sec": 0 00:35:16.348 }, 00:35:16.348 "claimed": false, 00:35:16.348 "zoned": false, 00:35:16.348 "supported_io_types": { 00:35:16.348 "read": true, 00:35:16.348 "write": true, 00:35:16.348 "unmap": false, 00:35:16.348 "flush": false, 00:35:16.348 "reset": true, 00:35:16.348 "nvme_admin": false, 00:35:16.348 "nvme_io": false, 00:35:16.348 "nvme_io_md": false, 00:35:16.348 "write_zeroes": true, 00:35:16.348 "zcopy": false, 00:35:16.348 "get_zone_info": false, 00:35:16.348 "zone_management": false, 00:35:16.348 "zone_append": false, 00:35:16.348 "compare": false, 00:35:16.348 "compare_and_write": false, 00:35:16.348 "abort": false, 00:35:16.348 "seek_hole": false, 00:35:16.348 "seek_data": false, 00:35:16.348 "copy": false, 00:35:16.348 "nvme_iov_md": false 00:35:16.348 }, 00:35:16.348 "driver_specific": { 00:35:16.348 "raid": { 00:35:16.348 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:16.348 "strip_size_kb": 64, 00:35:16.348 "state": "online", 00:35:16.348 "raid_level": "raid5f", 00:35:16.348 "superblock": true, 00:35:16.348 "num_base_bdevs": 4, 00:35:16.348 "num_base_bdevs_discovered": 4, 00:35:16.348 
"num_base_bdevs_operational": 4, 00:35:16.348 "base_bdevs_list": [ 00:35:16.348 { 00:35:16.348 "name": "pt1", 00:35:16.348 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:16.348 "is_configured": true, 00:35:16.348 "data_offset": 2048, 00:35:16.348 "data_size": 63488 00:35:16.348 }, 00:35:16.348 { 00:35:16.348 "name": "pt2", 00:35:16.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:16.348 "is_configured": true, 00:35:16.348 "data_offset": 2048, 00:35:16.348 "data_size": 63488 00:35:16.348 }, 00:35:16.348 { 00:35:16.348 "name": "pt3", 00:35:16.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:16.348 "is_configured": true, 00:35:16.348 "data_offset": 2048, 00:35:16.348 "data_size": 63488 00:35:16.348 }, 00:35:16.348 { 00:35:16.348 "name": "pt4", 00:35:16.348 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:16.348 "is_configured": true, 00:35:16.348 "data_offset": 2048, 00:35:16.348 "data_size": 63488 00:35:16.348 } 00:35:16.348 ] 00:35:16.348 } 00:35:16.348 } 00:35:16.348 }' 00:35:16.348 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:16.611 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:16.611 pt2 00:35:16.611 pt3 00:35:16.611 pt4' 00:35:16.611 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:16.611 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:16.611 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:16.611 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:16.611 "name": "pt1", 00:35:16.611 "aliases": [ 00:35:16.611 "00000000-0000-0000-0000-000000000001" 00:35:16.611 ], 00:35:16.611 "product_name": "passthru", 00:35:16.611 "block_size": 512, 00:35:16.611 "num_blocks": 65536, 00:35:16.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:16.611 "assigned_rate_limits": { 00:35:16.611 "rw_ios_per_sec": 0, 00:35:16.611 "rw_mbytes_per_sec": 0, 00:35:16.611 "r_mbytes_per_sec": 0, 00:35:16.611 "w_mbytes_per_sec": 0 00:35:16.611 }, 00:35:16.611 "claimed": true, 00:35:16.611 "claim_type": "exclusive_write", 00:35:16.611 "zoned": false, 00:35:16.611 "supported_io_types": { 00:35:16.611 "read": true, 00:35:16.611 "write": true, 00:35:16.611 "unmap": true, 00:35:16.611 "flush": true, 00:35:16.611 "reset": true, 00:35:16.611 "nvme_admin": false, 00:35:16.611 "nvme_io": false, 00:35:16.611 "nvme_io_md": false, 00:35:16.611 "write_zeroes": true, 00:35:16.611 "zcopy": true, 00:35:16.611 "get_zone_info": false, 00:35:16.611 "zone_management": false, 00:35:16.611 "zone_append": false, 00:35:16.611 "compare": false, 00:35:16.611 "compare_and_write": false, 00:35:16.611 "abort": true, 00:35:16.611 "seek_hole": false, 00:35:16.611 "seek_data": false, 00:35:16.611 "copy": true, 00:35:16.611 "nvme_iov_md": false 00:35:16.611 }, 00:35:16.611 "memory_domains": [ 00:35:16.611 { 00:35:16.611 "dma_device_id": "system", 00:35:16.611 "dma_device_type": 1 00:35:16.611 }, 00:35:16.611 { 00:35:16.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:16.611 "dma_device_type": 2 00:35:16.611 } 00:35:16.611 ], 00:35:16.611 "driver_specific": { 00:35:16.611 "passthru": { 00:35:16.611 "name": "pt1", 00:35:16.611 "base_bdev_name": "malloc1" 00:35:16.611 } 00:35:16.611 } 
00:35:16.611 }' 00:35:16.611 21:48:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:16.870 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:16.870 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:16.870 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:16.870 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:16.870 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:16.870 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:17.129 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:17.129 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:17.129 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:17.129 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:17.129 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:17.129 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:17.129 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:17.129 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:17.389 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:17.389 "name": "pt2", 00:35:17.389 "aliases": [ 00:35:17.389 "00000000-0000-0000-0000-000000000002" 00:35:17.389 ], 00:35:17.389 "product_name": "passthru", 00:35:17.389 "block_size": 512, 00:35:17.389 "num_blocks": 65536, 00:35:17.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:17.389 "assigned_rate_limits": { 00:35:17.389 "rw_ios_per_sec": 0, 00:35:17.389 "rw_mbytes_per_sec": 0, 00:35:17.389 "r_mbytes_per_sec": 0, 00:35:17.389 "w_mbytes_per_sec": 0 00:35:17.389 }, 00:35:17.389 "claimed": true, 00:35:17.389 "claim_type": "exclusive_write", 00:35:17.389 "zoned": false, 00:35:17.389 "supported_io_types": { 00:35:17.389 "read": true, 00:35:17.389 "write": true, 00:35:17.389 "unmap": true, 00:35:17.389 "flush": true, 00:35:17.389 "reset": true, 00:35:17.389 "nvme_admin": false, 00:35:17.389 "nvme_io": false, 00:35:17.389 "nvme_io_md": false, 00:35:17.389 "write_zeroes": true, 00:35:17.389 "zcopy": true, 00:35:17.389 "get_zone_info": false, 00:35:17.389 "zone_management": false, 00:35:17.389 "zone_append": false, 00:35:17.389 "compare": false, 00:35:17.389 "compare_and_write": false, 00:35:17.389 "abort": true, 00:35:17.389 "seek_hole": false, 00:35:17.389 "seek_data": false, 00:35:17.389 "copy": true, 00:35:17.389 "nvme_iov_md": false 00:35:17.389 }, 00:35:17.389 "memory_domains": [ 00:35:17.389 { 00:35:17.389 "dma_device_id": "system", 00:35:17.389 "dma_device_type": 1 00:35:17.389 }, 00:35:17.389 { 00:35:17.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.389 "dma_device_type": 2 00:35:17.389 } 00:35:17.389 ], 00:35:17.389 "driver_specific": { 00:35:17.389 "passthru": { 00:35:17.389 "name": "pt2", 00:35:17.389 "base_bdev_name": "malloc2" 00:35:17.389 } 00:35:17.389 } 00:35:17.389 }' 00:35:17.389 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
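The paired jq .block_size / .md_size / .md_interleave / .dif_type calls in this stretch are verify_raid_bdev_properties comparing each base bdev (pt1..pt4) against the raid bdev itself, which is why the checks reduce to [[ 512 == 512 ]] and [[ null == null ]]. A minimal standalone restatement of that loop, inferred from the xtrace (the test's own implementation lives in bdev/bdev_raid.sh):

# Compare the fields verify_raid_bdev_properties checks, per base bdev,
# against the same fields of the raid bdev; in this log block_size is 512 and
# md_size/md_interleave/dif_type are all null for both sides.
rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)
raid_info=$("${rpc[@]}" bdev_get_bdevs -b raid_bdev1 | jq '.[]')
for name in pt1 pt2 pt3 pt4; do
    base_info=$("${rpc[@]}" bdev_get_bdevs -b "$name" | jq '.[]')
    for field in .block_size .md_size .md_interleave .dif_type; do
        [[ $(jq "$field" <<< "$base_info") == "$(jq "$field" <<< "$raid_info")" ]] \
            || echo "property mismatch on $name: $field"
    done
done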
00:35:17.389 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:17.389 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:17.389 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:17.648 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:17.648 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:17.648 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:17.648 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:17.648 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:17.648 21:48:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:17.648 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:17.907 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:17.907 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:17.907 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:17.907 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:35:18.166 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:18.166 "name": "pt3", 00:35:18.166 "aliases": [ 00:35:18.166 "00000000-0000-0000-0000-000000000003" 00:35:18.166 ], 00:35:18.166 "product_name": "passthru", 00:35:18.166 "block_size": 512, 00:35:18.166 "num_blocks": 65536, 00:35:18.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:18.166 "assigned_rate_limits": { 00:35:18.166 "rw_ios_per_sec": 0, 00:35:18.166 "rw_mbytes_per_sec": 0, 00:35:18.166 "r_mbytes_per_sec": 0, 00:35:18.166 "w_mbytes_per_sec": 0 00:35:18.166 }, 00:35:18.166 "claimed": true, 00:35:18.166 "claim_type": "exclusive_write", 00:35:18.166 "zoned": false, 00:35:18.166 "supported_io_types": { 00:35:18.166 "read": true, 00:35:18.166 "write": true, 00:35:18.166 "unmap": true, 00:35:18.166 "flush": true, 00:35:18.166 "reset": true, 00:35:18.166 "nvme_admin": false, 00:35:18.166 "nvme_io": false, 00:35:18.166 "nvme_io_md": false, 00:35:18.166 "write_zeroes": true, 00:35:18.166 "zcopy": true, 00:35:18.166 "get_zone_info": false, 00:35:18.166 "zone_management": false, 00:35:18.166 "zone_append": false, 00:35:18.166 "compare": false, 00:35:18.166 "compare_and_write": false, 00:35:18.166 "abort": true, 00:35:18.166 "seek_hole": false, 00:35:18.166 "seek_data": false, 00:35:18.166 "copy": true, 00:35:18.166 "nvme_iov_md": false 00:35:18.166 }, 00:35:18.166 "memory_domains": [ 00:35:18.166 { 00:35:18.166 "dma_device_id": "system", 00:35:18.166 "dma_device_type": 1 00:35:18.166 }, 00:35:18.166 { 00:35:18.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:18.166 "dma_device_type": 2 00:35:18.166 } 00:35:18.166 ], 00:35:18.166 "driver_specific": { 00:35:18.166 "passthru": { 00:35:18.166 "name": "pt3", 00:35:18.166 "base_bdev_name": "malloc3" 00:35:18.166 } 00:35:18.166 } 00:35:18.166 }' 00:35:18.166 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:18.166 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:18.166 21:48:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:18.166 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:18.166 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:18.166 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:18.166 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:18.425 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:18.425 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:18.425 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:18.425 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:18.425 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:18.425 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:18.425 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:35:18.425 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:18.685 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:18.685 "name": "pt4", 00:35:18.685 "aliases": [ 00:35:18.685 "00000000-0000-0000-0000-000000000004" 00:35:18.685 ], 00:35:18.685 "product_name": "passthru", 00:35:18.685 "block_size": 512, 00:35:18.685 "num_blocks": 65536, 00:35:18.685 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:18.685 "assigned_rate_limits": { 00:35:18.685 "rw_ios_per_sec": 0, 00:35:18.685 "rw_mbytes_per_sec": 0, 00:35:18.685 "r_mbytes_per_sec": 0, 00:35:18.685 "w_mbytes_per_sec": 0 00:35:18.685 }, 00:35:18.685 "claimed": true, 00:35:18.685 "claim_type": "exclusive_write", 00:35:18.685 "zoned": false, 00:35:18.685 "supported_io_types": { 00:35:18.685 "read": true, 00:35:18.685 "write": true, 00:35:18.685 "unmap": true, 00:35:18.685 "flush": true, 00:35:18.685 "reset": true, 00:35:18.685 "nvme_admin": false, 00:35:18.685 "nvme_io": false, 00:35:18.685 "nvme_io_md": false, 00:35:18.685 "write_zeroes": true, 00:35:18.685 "zcopy": true, 00:35:18.685 "get_zone_info": false, 00:35:18.685 "zone_management": false, 00:35:18.685 "zone_append": false, 00:35:18.685 "compare": false, 00:35:18.685 "compare_and_write": false, 00:35:18.685 "abort": true, 00:35:18.685 "seek_hole": false, 00:35:18.685 "seek_data": false, 00:35:18.685 "copy": true, 00:35:18.685 "nvme_iov_md": false 00:35:18.685 }, 00:35:18.685 "memory_domains": [ 00:35:18.685 { 00:35:18.685 "dma_device_id": "system", 00:35:18.685 "dma_device_type": 1 00:35:18.685 }, 00:35:18.685 { 00:35:18.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:18.685 "dma_device_type": 2 00:35:18.685 } 00:35:18.685 ], 00:35:18.685 "driver_specific": { 00:35:18.685 "passthru": { 00:35:18.685 "name": "pt4", 00:35:18.685 "base_bdev_name": "malloc4" 00:35:18.685 } 00:35:18.685 } 00:35:18.685 }' 00:35:18.685 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:18.685 21:48:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:18.685 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:18.685 21:48:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:18.685 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:18.944 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:18.944 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:18.944 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:18.944 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:18.944 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:18.944 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:18.944 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:18.944 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:35:19.203 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:19.203 [2024-07-15 21:48:52.521123] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:19.203 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=18323226-4e01-4217-9582-7958d9fe2a48 00:35:19.203 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 18323226-4e01-4217-9582-7958d9fe2a48 ']' 00:35:19.203 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:19.462 [2024-07-15 21:48:52.716585] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:19.462 [2024-07-15 21:48:52.716669] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:19.462 [2024-07-15 21:48:52.716786] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:19.462 [2024-07-15 21:48:52.716906] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:19.462 [2024-07-15 21:48:52.716936] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:35:19.462 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.462 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:35:19.721 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:35:19.721 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:35:19.721 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.721 21:48:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:19.979 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.979 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:19.979 21:48:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.979 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:20.237 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:20.237 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:35:20.494 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:20.494 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:20.495 21:48:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:35:20.753 [2024-07-15 21:48:54.070243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:20.753 [2024-07-15 21:48:54.072200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:20.753 [2024-07-15 21:48:54.072308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:20.753 [2024-07-15 21:48:54.072360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:35:20.753 [2024-07-15 21:48:54.072436] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc1 00:35:20.753 [2024-07-15 21:48:54.072570] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:20.753 [2024-07-15 21:48:54.072636] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:35:20.753 [2024-07-15 21:48:54.072693] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:35:20.753 [2024-07-15 21:48:54.072739] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:20.753 [2024-07-15 21:48:54.072772] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:35:20.753 request: 00:35:20.753 { 00:35:20.753 "name": "raid_bdev1", 00:35:20.753 "raid_level": "raid5f", 00:35:20.753 "base_bdevs": [ 00:35:20.753 "malloc1", 00:35:20.753 "malloc2", 00:35:20.753 "malloc3", 00:35:20.753 "malloc4" 00:35:20.753 ], 00:35:20.753 "strip_size_kb": 64, 00:35:20.753 "superblock": false, 00:35:20.753 "method": "bdev_raid_create", 00:35:20.753 "req_id": 1 00:35:20.753 } 00:35:20.753 Got JSON-RPC error response 00:35:20.753 response: 00:35:20.753 { 00:35:20.753 "code": -17, 00:35:20.753 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:20.753 } 00:35:20.753 21:48:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:35:20.753 21:48:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:20.753 21:48:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:20.753 21:48:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:20.753 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.753 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:35:21.011 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:35:21.011 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:35:21.011 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:21.269 [2024-07-15 21:48:54.493484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:21.269 [2024-07-15 21:48:54.493628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:21.269 [2024-07-15 21:48:54.493672] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:35:21.269 [2024-07-15 21:48:54.493730] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:21.269 [2024-07-15 21:48:54.495804] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:21.269 [2024-07-15 21:48:54.495885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:21.269 [2024-07-15 21:48:54.496028] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:21.269 [2024-07-15 21:48:54.496113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:21.269 pt1 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:21.269 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.528 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:21.528 "name": "raid_bdev1", 00:35:21.528 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:21.528 "strip_size_kb": 64, 00:35:21.528 "state": "configuring", 00:35:21.528 "raid_level": "raid5f", 00:35:21.528 "superblock": true, 00:35:21.528 "num_base_bdevs": 4, 00:35:21.528 "num_base_bdevs_discovered": 1, 00:35:21.528 "num_base_bdevs_operational": 4, 00:35:21.528 "base_bdevs_list": [ 00:35:21.528 { 00:35:21.528 "name": "pt1", 00:35:21.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:21.528 "is_configured": true, 00:35:21.528 "data_offset": 2048, 00:35:21.528 "data_size": 63488 00:35:21.528 }, 00:35:21.528 { 00:35:21.528 "name": null, 00:35:21.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:21.528 "is_configured": false, 00:35:21.528 "data_offset": 2048, 00:35:21.528 "data_size": 63488 00:35:21.528 }, 00:35:21.528 { 00:35:21.528 "name": null, 00:35:21.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:21.528 "is_configured": false, 00:35:21.528 "data_offset": 2048, 00:35:21.528 "data_size": 63488 00:35:21.528 }, 00:35:21.528 { 00:35:21.528 "name": null, 00:35:21.528 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:21.528 "is_configured": false, 00:35:21.528 "data_offset": 2048, 00:35:21.528 "data_size": 63488 00:35:21.528 } 00:35:21.528 ] 00:35:21.528 }' 00:35:21.528 21:48:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:21.528 21:48:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.108 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:35:22.108 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:22.366 [2024-07-15 21:48:55.563726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:22.366 [2024-07-15 21:48:55.563898] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:35:22.366 [2024-07-15 21:48:55.563971] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:22.366 [2024-07-15 21:48:55.564034] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.366 [2024-07-15 21:48:55.564538] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.366 [2024-07-15 21:48:55.564606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:22.366 [2024-07-15 21:48:55.564747] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:22.366 [2024-07-15 21:48:55.564801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:22.366 pt2 00:35:22.366 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:22.625 [2024-07-15 21:48:55.779353] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.625 21:48:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.884 21:48:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:22.884 "name": "raid_bdev1", 00:35:22.884 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:22.884 "strip_size_kb": 64, 00:35:22.884 "state": "configuring", 00:35:22.884 "raid_level": "raid5f", 00:35:22.884 "superblock": true, 00:35:22.884 "num_base_bdevs": 4, 00:35:22.884 "num_base_bdevs_discovered": 1, 00:35:22.884 "num_base_bdevs_operational": 4, 00:35:22.884 "base_bdevs_list": [ 00:35:22.884 { 00:35:22.884 "name": "pt1", 00:35:22.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:22.884 "is_configured": true, 00:35:22.884 "data_offset": 2048, 00:35:22.884 "data_size": 63488 00:35:22.884 }, 00:35:22.884 { 00:35:22.884 "name": null, 00:35:22.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:22.884 "is_configured": false, 00:35:22.884 "data_offset": 2048, 00:35:22.884 "data_size": 63488 00:35:22.884 }, 00:35:22.884 { 00:35:22.884 "name": null, 00:35:22.884 "uuid": "00000000-0000-0000-0000-000000000003", 
00:35:22.884 "is_configured": false, 00:35:22.884 "data_offset": 2048, 00:35:22.884 "data_size": 63488 00:35:22.884 }, 00:35:22.884 { 00:35:22.884 "name": null, 00:35:22.884 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:22.884 "is_configured": false, 00:35:22.884 "data_offset": 2048, 00:35:22.884 "data_size": 63488 00:35:22.884 } 00:35:22.884 ] 00:35:22.884 }' 00:35:22.884 21:48:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:22.884 21:48:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.450 21:48:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:35:23.450 21:48:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:23.450 21:48:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:23.450 [2024-07-15 21:48:56.822685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:23.450 [2024-07-15 21:48:56.822830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.450 [2024-07-15 21:48:56.822891] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:23.450 [2024-07-15 21:48:56.822952] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.450 [2024-07-15 21:48:56.823401] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.450 [2024-07-15 21:48:56.823474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:23.451 [2024-07-15 21:48:56.823615] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:23.451 [2024-07-15 21:48:56.823669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:23.451 pt2 00:35:23.709 21:48:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:23.709 21:48:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:23.709 21:48:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:23.709 [2024-07-15 21:48:56.998360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:23.709 [2024-07-15 21:48:56.998491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.709 [2024-07-15 21:48:56.998527] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:35:23.709 [2024-07-15 21:48:56.998598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.709 [2024-07-15 21:48:56.999063] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.709 [2024-07-15 21:48:56.999127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:23.709 [2024-07-15 21:48:56.999247] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:23.709 [2024-07-15 21:48:56.999290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:23.709 pt3 00:35:23.709 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:23.709 21:48:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:23.709 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:23.968 [2024-07-15 21:48:57.190007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:23.968 [2024-07-15 21:48:57.190143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.968 [2024-07-15 21:48:57.190196] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:35:23.968 [2024-07-15 21:48:57.190260] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.968 [2024-07-15 21:48:57.190694] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.968 [2024-07-15 21:48:57.190761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:23.968 [2024-07-15 21:48:57.190881] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:23.968 [2024-07-15 21:48:57.190933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:23.968 [2024-07-15 21:48:57.191090] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:35:23.968 [2024-07-15 21:48:57.191122] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:23.968 [2024-07-15 21:48:57.191229] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:23.968 [2024-07-15 21:48:57.198463] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:35:23.968 [2024-07-15 21:48:57.198531] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:35:23.968 [2024-07-15 21:48:57.198759] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:23.968 pt4 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:23.968 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.227 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:24.227 "name": "raid_bdev1", 00:35:24.227 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:24.227 "strip_size_kb": 64, 00:35:24.227 "state": "online", 00:35:24.227 "raid_level": "raid5f", 00:35:24.227 "superblock": true, 00:35:24.227 "num_base_bdevs": 4, 00:35:24.227 "num_base_bdevs_discovered": 4, 00:35:24.227 "num_base_bdevs_operational": 4, 00:35:24.227 "base_bdevs_list": [ 00:35:24.227 { 00:35:24.227 "name": "pt1", 00:35:24.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:24.227 "is_configured": true, 00:35:24.227 "data_offset": 2048, 00:35:24.227 "data_size": 63488 00:35:24.227 }, 00:35:24.227 { 00:35:24.227 "name": "pt2", 00:35:24.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:24.227 "is_configured": true, 00:35:24.227 "data_offset": 2048, 00:35:24.227 "data_size": 63488 00:35:24.227 }, 00:35:24.227 { 00:35:24.227 "name": "pt3", 00:35:24.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:24.227 "is_configured": true, 00:35:24.227 "data_offset": 2048, 00:35:24.227 "data_size": 63488 00:35:24.227 }, 00:35:24.227 { 00:35:24.227 "name": "pt4", 00:35:24.227 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:24.227 "is_configured": true, 00:35:24.227 "data_offset": 2048, 00:35:24.227 "data_size": 63488 00:35:24.227 } 00:35:24.227 ] 00:35:24.227 }' 00:35:24.227 21:48:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:24.227 21:48:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.795 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:35:24.795 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:24.795 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:24.795 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:24.795 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:24.795 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:35:24.795 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:24.795 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:25.054 [2024-07-15 21:48:58.274233] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:25.054 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:25.054 "name": "raid_bdev1", 00:35:25.054 "aliases": [ 00:35:25.054 "18323226-4e01-4217-9582-7958d9fe2a48" 00:35:25.054 ], 00:35:25.054 "product_name": "Raid Volume", 00:35:25.054 "block_size": 512, 00:35:25.054 "num_blocks": 190464, 00:35:25.054 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:25.054 "assigned_rate_limits": { 00:35:25.054 "rw_ios_per_sec": 0, 00:35:25.054 "rw_mbytes_per_sec": 0, 00:35:25.054 "r_mbytes_per_sec": 0, 00:35:25.054 "w_mbytes_per_sec": 0 00:35:25.054 }, 00:35:25.054 "claimed": false, 00:35:25.054 "zoned": false, 00:35:25.054 "supported_io_types": { 00:35:25.054 
"read": true, 00:35:25.054 "write": true, 00:35:25.054 "unmap": false, 00:35:25.054 "flush": false, 00:35:25.054 "reset": true, 00:35:25.054 "nvme_admin": false, 00:35:25.054 "nvme_io": false, 00:35:25.054 "nvme_io_md": false, 00:35:25.054 "write_zeroes": true, 00:35:25.054 "zcopy": false, 00:35:25.054 "get_zone_info": false, 00:35:25.054 "zone_management": false, 00:35:25.054 "zone_append": false, 00:35:25.054 "compare": false, 00:35:25.054 "compare_and_write": false, 00:35:25.054 "abort": false, 00:35:25.054 "seek_hole": false, 00:35:25.054 "seek_data": false, 00:35:25.054 "copy": false, 00:35:25.054 "nvme_iov_md": false 00:35:25.054 }, 00:35:25.054 "driver_specific": { 00:35:25.054 "raid": { 00:35:25.054 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:25.054 "strip_size_kb": 64, 00:35:25.054 "state": "online", 00:35:25.054 "raid_level": "raid5f", 00:35:25.054 "superblock": true, 00:35:25.054 "num_base_bdevs": 4, 00:35:25.054 "num_base_bdevs_discovered": 4, 00:35:25.054 "num_base_bdevs_operational": 4, 00:35:25.054 "base_bdevs_list": [ 00:35:25.054 { 00:35:25.054 "name": "pt1", 00:35:25.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:25.054 "is_configured": true, 00:35:25.054 "data_offset": 2048, 00:35:25.054 "data_size": 63488 00:35:25.054 }, 00:35:25.054 { 00:35:25.054 "name": "pt2", 00:35:25.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:25.054 "is_configured": true, 00:35:25.054 "data_offset": 2048, 00:35:25.054 "data_size": 63488 00:35:25.054 }, 00:35:25.054 { 00:35:25.054 "name": "pt3", 00:35:25.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:25.054 "is_configured": true, 00:35:25.054 "data_offset": 2048, 00:35:25.054 "data_size": 63488 00:35:25.054 }, 00:35:25.054 { 00:35:25.054 "name": "pt4", 00:35:25.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:25.054 "is_configured": true, 00:35:25.054 "data_offset": 2048, 00:35:25.054 "data_size": 63488 00:35:25.054 } 00:35:25.054 ] 00:35:25.054 } 00:35:25.054 } 00:35:25.054 }' 00:35:25.054 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:25.054 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:25.054 pt2 00:35:25.054 pt3 00:35:25.054 pt4' 00:35:25.054 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:25.054 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:25.054 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:25.313 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:25.313 "name": "pt1", 00:35:25.313 "aliases": [ 00:35:25.313 "00000000-0000-0000-0000-000000000001" 00:35:25.313 ], 00:35:25.313 "product_name": "passthru", 00:35:25.313 "block_size": 512, 00:35:25.313 "num_blocks": 65536, 00:35:25.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:25.313 "assigned_rate_limits": { 00:35:25.313 "rw_ios_per_sec": 0, 00:35:25.313 "rw_mbytes_per_sec": 0, 00:35:25.313 "r_mbytes_per_sec": 0, 00:35:25.313 "w_mbytes_per_sec": 0 00:35:25.313 }, 00:35:25.313 "claimed": true, 00:35:25.313 "claim_type": "exclusive_write", 00:35:25.313 "zoned": false, 00:35:25.313 "supported_io_types": { 00:35:25.313 "read": true, 00:35:25.313 "write": true, 00:35:25.313 "unmap": true, 00:35:25.313 
"flush": true, 00:35:25.313 "reset": true, 00:35:25.313 "nvme_admin": false, 00:35:25.313 "nvme_io": false, 00:35:25.313 "nvme_io_md": false, 00:35:25.313 "write_zeroes": true, 00:35:25.313 "zcopy": true, 00:35:25.313 "get_zone_info": false, 00:35:25.313 "zone_management": false, 00:35:25.313 "zone_append": false, 00:35:25.313 "compare": false, 00:35:25.313 "compare_and_write": false, 00:35:25.313 "abort": true, 00:35:25.313 "seek_hole": false, 00:35:25.313 "seek_data": false, 00:35:25.313 "copy": true, 00:35:25.313 "nvme_iov_md": false 00:35:25.313 }, 00:35:25.313 "memory_domains": [ 00:35:25.313 { 00:35:25.313 "dma_device_id": "system", 00:35:25.313 "dma_device_type": 1 00:35:25.313 }, 00:35:25.313 { 00:35:25.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:25.313 "dma_device_type": 2 00:35:25.313 } 00:35:25.313 ], 00:35:25.313 "driver_specific": { 00:35:25.313 "passthru": { 00:35:25.313 "name": "pt1", 00:35:25.313 "base_bdev_name": "malloc1" 00:35:25.313 } 00:35:25.313 } 00:35:25.313 }' 00:35:25.313 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:25.313 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:25.313 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:25.313 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:25.573 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:25.573 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:25.573 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:25.573 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:25.573 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:25.573 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:25.573 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:25.831 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:25.831 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:25.831 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:25.831 21:48:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:26.091 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:26.091 "name": "pt2", 00:35:26.091 "aliases": [ 00:35:26.091 "00000000-0000-0000-0000-000000000002" 00:35:26.091 ], 00:35:26.091 "product_name": "passthru", 00:35:26.091 "block_size": 512, 00:35:26.091 "num_blocks": 65536, 00:35:26.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:26.091 "assigned_rate_limits": { 00:35:26.091 "rw_ios_per_sec": 0, 00:35:26.091 "rw_mbytes_per_sec": 0, 00:35:26.091 "r_mbytes_per_sec": 0, 00:35:26.091 "w_mbytes_per_sec": 0 00:35:26.091 }, 00:35:26.091 "claimed": true, 00:35:26.091 "claim_type": "exclusive_write", 00:35:26.091 "zoned": false, 00:35:26.091 "supported_io_types": { 00:35:26.091 "read": true, 00:35:26.091 "write": true, 00:35:26.091 "unmap": true, 00:35:26.091 "flush": true, 00:35:26.091 "reset": true, 00:35:26.091 "nvme_admin": false, 00:35:26.091 "nvme_io": false, 00:35:26.091 
"nvme_io_md": false, 00:35:26.091 "write_zeroes": true, 00:35:26.091 "zcopy": true, 00:35:26.091 "get_zone_info": false, 00:35:26.091 "zone_management": false, 00:35:26.091 "zone_append": false, 00:35:26.091 "compare": false, 00:35:26.091 "compare_and_write": false, 00:35:26.091 "abort": true, 00:35:26.091 "seek_hole": false, 00:35:26.091 "seek_data": false, 00:35:26.091 "copy": true, 00:35:26.091 "nvme_iov_md": false 00:35:26.091 }, 00:35:26.091 "memory_domains": [ 00:35:26.091 { 00:35:26.091 "dma_device_id": "system", 00:35:26.091 "dma_device_type": 1 00:35:26.091 }, 00:35:26.091 { 00:35:26.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:26.091 "dma_device_type": 2 00:35:26.091 } 00:35:26.091 ], 00:35:26.091 "driver_specific": { 00:35:26.091 "passthru": { 00:35:26.091 "name": "pt2", 00:35:26.091 "base_bdev_name": "malloc2" 00:35:26.091 } 00:35:26.091 } 00:35:26.091 }' 00:35:26.091 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:26.091 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:26.091 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:26.091 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:26.091 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:26.091 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:26.091 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:26.350 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:26.350 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:26.350 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:26.350 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:26.350 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:26.350 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:26.350 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:26.350 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:35:26.609 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:26.609 "name": "pt3", 00:35:26.609 "aliases": [ 00:35:26.609 "00000000-0000-0000-0000-000000000003" 00:35:26.609 ], 00:35:26.609 "product_name": "passthru", 00:35:26.609 "block_size": 512, 00:35:26.609 "num_blocks": 65536, 00:35:26.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:26.609 "assigned_rate_limits": { 00:35:26.609 "rw_ios_per_sec": 0, 00:35:26.609 "rw_mbytes_per_sec": 0, 00:35:26.609 "r_mbytes_per_sec": 0, 00:35:26.609 "w_mbytes_per_sec": 0 00:35:26.609 }, 00:35:26.609 "claimed": true, 00:35:26.609 "claim_type": "exclusive_write", 00:35:26.609 "zoned": false, 00:35:26.609 "supported_io_types": { 00:35:26.609 "read": true, 00:35:26.609 "write": true, 00:35:26.609 "unmap": true, 00:35:26.609 "flush": true, 00:35:26.609 "reset": true, 00:35:26.609 "nvme_admin": false, 00:35:26.609 "nvme_io": false, 00:35:26.609 "nvme_io_md": false, 00:35:26.610 "write_zeroes": true, 00:35:26.610 "zcopy": true, 00:35:26.610 "get_zone_info": false, 
00:35:26.610 "zone_management": false, 00:35:26.610 "zone_append": false, 00:35:26.610 "compare": false, 00:35:26.610 "compare_and_write": false, 00:35:26.610 "abort": true, 00:35:26.610 "seek_hole": false, 00:35:26.610 "seek_data": false, 00:35:26.610 "copy": true, 00:35:26.610 "nvme_iov_md": false 00:35:26.610 }, 00:35:26.610 "memory_domains": [ 00:35:26.610 { 00:35:26.610 "dma_device_id": "system", 00:35:26.610 "dma_device_type": 1 00:35:26.610 }, 00:35:26.610 { 00:35:26.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:26.610 "dma_device_type": 2 00:35:26.610 } 00:35:26.610 ], 00:35:26.610 "driver_specific": { 00:35:26.610 "passthru": { 00:35:26.610 "name": "pt3", 00:35:26.610 "base_bdev_name": "malloc3" 00:35:26.610 } 00:35:26.610 } 00:35:26.610 }' 00:35:26.610 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:26.610 21:48:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:26.868 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:26.868 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:26.868 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:26.868 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:26.868 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:26.868 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:27.126 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:27.126 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:27.126 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:27.126 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:27.126 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:27.126 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:35:27.126 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:27.399 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:27.399 "name": "pt4", 00:35:27.399 "aliases": [ 00:35:27.399 "00000000-0000-0000-0000-000000000004" 00:35:27.399 ], 00:35:27.399 "product_name": "passthru", 00:35:27.399 "block_size": 512, 00:35:27.399 "num_blocks": 65536, 00:35:27.399 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:27.399 "assigned_rate_limits": { 00:35:27.399 "rw_ios_per_sec": 0, 00:35:27.399 "rw_mbytes_per_sec": 0, 00:35:27.399 "r_mbytes_per_sec": 0, 00:35:27.399 "w_mbytes_per_sec": 0 00:35:27.399 }, 00:35:27.399 "claimed": true, 00:35:27.399 "claim_type": "exclusive_write", 00:35:27.399 "zoned": false, 00:35:27.399 "supported_io_types": { 00:35:27.399 "read": true, 00:35:27.399 "write": true, 00:35:27.399 "unmap": true, 00:35:27.399 "flush": true, 00:35:27.399 "reset": true, 00:35:27.399 "nvme_admin": false, 00:35:27.399 "nvme_io": false, 00:35:27.399 "nvme_io_md": false, 00:35:27.399 "write_zeroes": true, 00:35:27.399 "zcopy": true, 00:35:27.399 "get_zone_info": false, 00:35:27.399 "zone_management": false, 00:35:27.399 "zone_append": false, 00:35:27.399 "compare": false, 00:35:27.399 
"compare_and_write": false, 00:35:27.399 "abort": true, 00:35:27.399 "seek_hole": false, 00:35:27.399 "seek_data": false, 00:35:27.399 "copy": true, 00:35:27.399 "nvme_iov_md": false 00:35:27.399 }, 00:35:27.399 "memory_domains": [ 00:35:27.399 { 00:35:27.399 "dma_device_id": "system", 00:35:27.399 "dma_device_type": 1 00:35:27.399 }, 00:35:27.399 { 00:35:27.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:27.399 "dma_device_type": 2 00:35:27.399 } 00:35:27.399 ], 00:35:27.399 "driver_specific": { 00:35:27.399 "passthru": { 00:35:27.399 "name": "pt4", 00:35:27.399 "base_bdev_name": "malloc4" 00:35:27.399 } 00:35:27.399 } 00:35:27.399 }' 00:35:27.399 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:27.399 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:27.661 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:27.661 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:27.661 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:27.661 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:27.661 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:27.661 21:49:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:27.661 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:27.661 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:27.921 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:27.921 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:27.921 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:27.921 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:35:27.921 [2024-07-15 21:49:01.285115] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 18323226-4e01-4217-9582-7958d9fe2a48 '!=' 18323226-4e01-4217-9582-7958d9fe2a48 ']' 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:28.180 [2024-07-15 21:49:01.456641] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.180 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.440 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:28.440 "name": "raid_bdev1", 00:35:28.440 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:28.440 "strip_size_kb": 64, 00:35:28.440 "state": "online", 00:35:28.440 "raid_level": "raid5f", 00:35:28.440 "superblock": true, 00:35:28.440 "num_base_bdevs": 4, 00:35:28.440 "num_base_bdevs_discovered": 3, 00:35:28.440 "num_base_bdevs_operational": 3, 00:35:28.440 "base_bdevs_list": [ 00:35:28.440 { 00:35:28.440 "name": null, 00:35:28.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:28.440 "is_configured": false, 00:35:28.440 "data_offset": 2048, 00:35:28.440 "data_size": 63488 00:35:28.440 }, 00:35:28.440 { 00:35:28.440 "name": "pt2", 00:35:28.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:28.440 "is_configured": true, 00:35:28.440 "data_offset": 2048, 00:35:28.440 "data_size": 63488 00:35:28.440 }, 00:35:28.440 { 00:35:28.440 "name": "pt3", 00:35:28.440 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:28.440 "is_configured": true, 00:35:28.440 "data_offset": 2048, 00:35:28.440 "data_size": 63488 00:35:28.440 }, 00:35:28.440 { 00:35:28.440 "name": "pt4", 00:35:28.440 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:28.440 "is_configured": true, 00:35:28.440 "data_offset": 2048, 00:35:28.440 "data_size": 63488 00:35:28.440 } 00:35:28.440 ] 00:35:28.440 }' 00:35:28.440 21:49:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:28.440 21:49:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.008 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:29.267 [2024-07-15 21:49:02.419067] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:29.267 [2024-07-15 21:49:02.419171] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:29.267 [2024-07-15 21:49:02.419264] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:29.267 [2024-07-15 21:49:02.419346] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:29.267 [2024-07-15 21:49:02.419372] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:35:29.267 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
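[annotation] The teardown that runs between the two cases above comes down to three RPCs on the dedicated /var/tmp/spdk-raid.sock socket: delete the raid bdev, confirm bdev_raid_get_bdevs now returns nothing, then drop the passthru wrappers. A minimal bash sketch of that sequence, assuming a running SPDK app on that socket and using only the rpc.py subcommands already visible in this trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Tear the volume down; the base bdevs keep their on-disk superblocks.
    $rpc bdev_raid_delete raid_bdev1

    # Expect an empty list once raid_bdev1 is gone.
    remaining=$($rpc bdev_raid_get_bdevs all | jq -r '.[]')
    [ -z "$remaining" ] || echo "raid bdevs still registered: $remaining"

    # Remove the passthru wrappers; the underlying malloc bdevs stay registered.
    for pt in pt2 pt3 pt4; do
        $rpc bdev_passthru_delete "$pt"
    done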
00:35:29.267 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:35:29.267 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:35:29.267 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:35:29.267 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:35:29.267 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:29.267 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:29.526 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:29.526 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:29.526 21:49:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:29.786 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:29.786 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:29.786 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:35:30.044 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:30.044 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:30.044 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:35:30.044 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:30.044 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:30.044 [2024-07-15 21:49:03.412579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:30.044 [2024-07-15 21:49:03.412770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:30.044 [2024-07-15 21:49:03.412816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:35:30.044 [2024-07-15 21:49:03.412873] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:30.044 [2024-07-15 21:49:03.415013] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:30.044 [2024-07-15 21:49:03.415095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:30.044 [2024-07-15 21:49:03.415231] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:30.044 [2024-07-15 21:49:03.415304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:30.044 pt2 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid5f 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:30.321 "name": "raid_bdev1", 00:35:30.321 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:30.321 "strip_size_kb": 64, 00:35:30.321 "state": "configuring", 00:35:30.321 "raid_level": "raid5f", 00:35:30.321 "superblock": true, 00:35:30.321 "num_base_bdevs": 4, 00:35:30.321 "num_base_bdevs_discovered": 1, 00:35:30.321 "num_base_bdevs_operational": 3, 00:35:30.321 "base_bdevs_list": [ 00:35:30.321 { 00:35:30.321 "name": null, 00:35:30.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.321 "is_configured": false, 00:35:30.321 "data_offset": 2048, 00:35:30.321 "data_size": 63488 00:35:30.321 }, 00:35:30.321 { 00:35:30.321 "name": "pt2", 00:35:30.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:30.321 "is_configured": true, 00:35:30.321 "data_offset": 2048, 00:35:30.321 "data_size": 63488 00:35:30.321 }, 00:35:30.321 { 00:35:30.321 "name": null, 00:35:30.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:30.321 "is_configured": false, 00:35:30.321 "data_offset": 2048, 00:35:30.321 "data_size": 63488 00:35:30.321 }, 00:35:30.321 { 00:35:30.321 "name": null, 00:35:30.321 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:30.321 "is_configured": false, 00:35:30.321 "data_offset": 2048, 00:35:30.321 "data_size": 63488 00:35:30.321 } 00:35:30.321 ] 00:35:30.321 }' 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:30.321 21:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:31.270 [2024-07-15 21:49:04.502707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:31.270 [2024-07-15 21:49:04.502858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:31.270 [2024-07-15 21:49:04.502910] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:35:31.270 [2024-07-15 21:49:04.502985] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:31.270 [2024-07-15 
21:49:04.503495] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:31.270 [2024-07-15 21:49:04.503569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:31.270 [2024-07-15 21:49:04.503708] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:31.270 [2024-07-15 21:49:04.503762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:31.270 pt3 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.270 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:31.529 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:31.529 "name": "raid_bdev1", 00:35:31.529 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:31.529 "strip_size_kb": 64, 00:35:31.529 "state": "configuring", 00:35:31.529 "raid_level": "raid5f", 00:35:31.529 "superblock": true, 00:35:31.529 "num_base_bdevs": 4, 00:35:31.529 "num_base_bdevs_discovered": 2, 00:35:31.529 "num_base_bdevs_operational": 3, 00:35:31.529 "base_bdevs_list": [ 00:35:31.529 { 00:35:31.529 "name": null, 00:35:31.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:31.529 "is_configured": false, 00:35:31.529 "data_offset": 2048, 00:35:31.529 "data_size": 63488 00:35:31.529 }, 00:35:31.529 { 00:35:31.529 "name": "pt2", 00:35:31.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:31.529 "is_configured": true, 00:35:31.529 "data_offset": 2048, 00:35:31.529 "data_size": 63488 00:35:31.529 }, 00:35:31.529 { 00:35:31.529 "name": "pt3", 00:35:31.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:31.529 "is_configured": true, 00:35:31.529 "data_offset": 2048, 00:35:31.529 "data_size": 63488 00:35:31.529 }, 00:35:31.529 { 00:35:31.529 "name": null, 00:35:31.529 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:31.529 "is_configured": false, 00:35:31.529 "data_offset": 2048, 00:35:31.529 "data_size": 63488 00:35:31.529 } 00:35:31.529 ] 00:35:31.529 }' 00:35:31.529 21:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:31.529 21:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.096 
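[annotation] Each base device in this test is a malloc bdev wrapped in a passthru bdev with a fixed, predictable UUID, which is what lets the raid5f superblock recognize the same member across deletes and re-creates. Condensed, the wrapping step the trace keeps repeating looks like the sketch below (same rpc.py call and UUID convention shown above):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Wrap each mallocN in a passthru bdev ptN with UUID ...000N.
    # The passthru module claims the malloc bdev; the raid bdev then claims ptN
    # once it finds a matching superblock on it.
    for i in 1 2 3 4; do
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done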
21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:35:32.096 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:32.096 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:35:32.096 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:32.355 [2024-07-15 21:49:05.560909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:32.355 [2024-07-15 21:49:05.561061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:32.355 [2024-07-15 21:49:05.561111] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:35:32.355 [2024-07-15 21:49:05.561145] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:32.355 [2024-07-15 21:49:05.561613] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:32.355 [2024-07-15 21:49:05.561675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:32.355 [2024-07-15 21:49:05.561796] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:32.355 [2024-07-15 21:49:05.561844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:32.355 [2024-07-15 21:49:05.561982] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:35:32.355 [2024-07-15 21:49:05.562011] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:32.355 [2024-07-15 21:49:05.562115] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:35:32.355 [2024-07-15 21:49:05.569324] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:35:32.355 [2024-07-15 21:49:05.569377] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:35:32.355 [2024-07-15 21:49:05.569668] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:32.355 pt4 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.355 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.613 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:32.613 "name": "raid_bdev1", 00:35:32.613 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:32.613 "strip_size_kb": 64, 00:35:32.613 "state": "online", 00:35:32.613 "raid_level": "raid5f", 00:35:32.613 "superblock": true, 00:35:32.613 "num_base_bdevs": 4, 00:35:32.613 "num_base_bdevs_discovered": 3, 00:35:32.613 "num_base_bdevs_operational": 3, 00:35:32.613 "base_bdevs_list": [ 00:35:32.613 { 00:35:32.613 "name": null, 00:35:32.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.613 "is_configured": false, 00:35:32.613 "data_offset": 2048, 00:35:32.613 "data_size": 63488 00:35:32.613 }, 00:35:32.613 { 00:35:32.613 "name": "pt2", 00:35:32.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:32.613 "is_configured": true, 00:35:32.613 "data_offset": 2048, 00:35:32.613 "data_size": 63488 00:35:32.613 }, 00:35:32.613 { 00:35:32.613 "name": "pt3", 00:35:32.613 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:32.613 "is_configured": true, 00:35:32.613 "data_offset": 2048, 00:35:32.613 "data_size": 63488 00:35:32.613 }, 00:35:32.613 { 00:35:32.613 "name": "pt4", 00:35:32.613 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:32.613 "is_configured": true, 00:35:32.613 "data_offset": 2048, 00:35:32.613 "data_size": 63488 00:35:32.613 } 00:35:32.613 ] 00:35:32.613 }' 00:35:32.613 21:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:32.613 21:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.182 21:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:33.443 [2024-07-15 21:49:06.661887] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:33.443 [2024-07-15 21:49:06.662024] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:33.443 [2024-07-15 21:49:06.662142] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:33.443 [2024-07-15 21:49:06.662240] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:33.443 [2024-07-15 21:49:06.662276] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:35:33.443 21:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.443 21:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:35:33.705 21:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:35:33.705 21:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:35:33.705 21:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:35:33.705 21:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:35:33.705 21:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:33.964 [2024-07-15 21:49:07.308777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:33.964 [2024-07-15 21:49:07.308929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:33.964 [2024-07-15 21:49:07.308983] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:35:33.964 [2024-07-15 21:49:07.309075] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:33.964 [2024-07-15 21:49:07.311159] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:33.964 [2024-07-15 21:49:07.311254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:33.964 [2024-07-15 21:49:07.311409] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:33.964 [2024-07-15 21:49:07.311507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:33.964 [2024-07-15 21:49:07.311708] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:33.964 [2024-07-15 21:49:07.311755] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:33.964 [2024-07-15 21:49:07.311800] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state configuring 00:35:33.964 [2024-07-15 21:49:07.311904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:33.964 [2024-07-15 21:49:07.312064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:33.964 pt1 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.964 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:34.223 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:34.223 "name": "raid_bdev1", 00:35:34.223 
"uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:34.223 "strip_size_kb": 64, 00:35:34.223 "state": "configuring", 00:35:34.223 "raid_level": "raid5f", 00:35:34.223 "superblock": true, 00:35:34.223 "num_base_bdevs": 4, 00:35:34.223 "num_base_bdevs_discovered": 2, 00:35:34.223 "num_base_bdevs_operational": 3, 00:35:34.223 "base_bdevs_list": [ 00:35:34.223 { 00:35:34.223 "name": null, 00:35:34.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.223 "is_configured": false, 00:35:34.223 "data_offset": 2048, 00:35:34.223 "data_size": 63488 00:35:34.223 }, 00:35:34.223 { 00:35:34.223 "name": "pt2", 00:35:34.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:34.223 "is_configured": true, 00:35:34.223 "data_offset": 2048, 00:35:34.223 "data_size": 63488 00:35:34.223 }, 00:35:34.223 { 00:35:34.223 "name": "pt3", 00:35:34.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:34.223 "is_configured": true, 00:35:34.223 "data_offset": 2048, 00:35:34.223 "data_size": 63488 00:35:34.223 }, 00:35:34.223 { 00:35:34.223 "name": null, 00:35:34.223 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:34.223 "is_configured": false, 00:35:34.223 "data_offset": 2048, 00:35:34.223 "data_size": 63488 00:35:34.223 } 00:35:34.223 ] 00:35:34.223 }' 00:35:34.223 21:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:34.223 21:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.813 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:35:34.813 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:35.072 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:35:35.072 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:35.331 [2024-07-15 21:49:08.578575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:35.331 [2024-07-15 21:49:08.578737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:35.331 [2024-07-15 21:49:08.578780] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:35:35.331 [2024-07-15 21:49:08.578864] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:35.331 [2024-07-15 21:49:08.579324] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:35.331 [2024-07-15 21:49:08.579391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:35.331 [2024-07-15 21:49:08.579546] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:35.331 [2024-07-15 21:49:08.579592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:35.331 [2024-07-15 21:49:08.579771] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:35:35.331 [2024-07-15 21:49:08.579802] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:35.331 [2024-07-15 21:49:08.579937] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:35:35.331 [2024-07-15 21:49:08.587312] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:35:35.331 [2024-07-15 21:49:08.587368] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:35:35.331 [2024-07-15 21:49:08.587705] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:35.331 pt4 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.331 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.332 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.332 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.332 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:35.592 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:35.592 "name": "raid_bdev1", 00:35:35.592 "uuid": "18323226-4e01-4217-9582-7958d9fe2a48", 00:35:35.592 "strip_size_kb": 64, 00:35:35.592 "state": "online", 00:35:35.592 "raid_level": "raid5f", 00:35:35.592 "superblock": true, 00:35:35.592 "num_base_bdevs": 4, 00:35:35.592 "num_base_bdevs_discovered": 3, 00:35:35.592 "num_base_bdevs_operational": 3, 00:35:35.592 "base_bdevs_list": [ 00:35:35.592 { 00:35:35.592 "name": null, 00:35:35.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.592 "is_configured": false, 00:35:35.592 "data_offset": 2048, 00:35:35.592 "data_size": 63488 00:35:35.592 }, 00:35:35.592 { 00:35:35.592 "name": "pt2", 00:35:35.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:35.592 "is_configured": true, 00:35:35.592 "data_offset": 2048, 00:35:35.592 "data_size": 63488 00:35:35.592 }, 00:35:35.592 { 00:35:35.592 "name": "pt3", 00:35:35.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:35.592 "is_configured": true, 00:35:35.592 "data_offset": 2048, 00:35:35.592 "data_size": 63488 00:35:35.592 }, 00:35:35.592 { 00:35:35.592 "name": "pt4", 00:35:35.592 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:35.592 "is_configured": true, 00:35:35.592 "data_offset": 2048, 00:35:35.592 "data_size": 63488 00:35:35.592 } 00:35:35.592 ] 00:35:35.592 }' 00:35:35.592 21:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:35.592 21:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.162 21:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 
00:35:36.162 21:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:36.421 21:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:36.421 21:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:36.421 21:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:36.708 [2024-07-15 21:49:09.882453] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 18323226-4e01-4217-9582-7958d9fe2a48 '!=' 18323226-4e01-4217-9582-7958d9fe2a48 ']' 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 158086 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 158086 ']' 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 158086 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158086 00:35:36.708 killing process with pid 158086 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 158086' 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 158086 00:35:36.708 21:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 158086 00:35:36.708 [2024-07-15 21:49:09.920307] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:36.708 [2024-07-15 21:49:09.920398] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:36.708 [2024-07-15 21:49:09.920524] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:36.708 [2024-07-15 21:49:09.920571] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:35:36.969 [2024-07-15 21:49:10.343489] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:38.350 ************************************ 00:35:38.350 END TEST raid5f_superblock_test 00:35:38.350 ************************************ 00:35:38.350 21:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:35:38.350 00:35:38.350 real 0m25.664s 00:35:38.350 user 0m47.434s 00:35:38.350 sys 0m3.220s 00:35:38.350 21:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:38.351 21:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.351 21:49:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:38.351 21:49:11 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:35:38.351 21:49:11 bdev_raid -- bdev/bdev_raid.sh@890 -- 
# run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:35:38.351 21:49:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:35:38.351 21:49:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:38.351 21:49:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:38.351 ************************************ 00:35:38.351 START TEST raid5f_rebuild_test 00:35:38.351 ************************************ 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 false false true 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:35:38.351 21:49:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=158949 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 158949 /var/tmp/spdk-raid.sock 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 158949 ']' 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:38.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:38.351 21:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.609 [2024-07-15 21:49:11.776260] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:35:38.610 [2024-07-15 21:49:11.776479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158949 ] 00:35:38.610 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:38.610 Zero copy mechanism will not be used. 
00:35:38.610 [2024-07-15 21:49:11.933519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.867 [2024-07-15 21:49:12.235095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.433 [2024-07-15 21:49:12.518393] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:39.433 21:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:39.433 21:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:35:39.433 21:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:39.433 21:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:39.692 BaseBdev1_malloc 00:35:39.692 21:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:39.951 [2024-07-15 21:49:13.136752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:39.951 [2024-07-15 21:49:13.137005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:39.951 [2024-07-15 21:49:13.137069] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:35:39.951 [2024-07-15 21:49:13.137114] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:39.951 [2024-07-15 21:49:13.139768] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:39.951 [2024-07-15 21:49:13.139878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:39.951 BaseBdev1 00:35:39.951 21:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:39.951 21:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:40.209 BaseBdev2_malloc 00:35:40.209 21:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:40.467 [2024-07-15 21:49:13.694944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:40.467 [2024-07-15 21:49:13.695207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:40.467 [2024-07-15 21:49:13.695267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:35:40.467 [2024-07-15 21:49:13.695313] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:40.467 [2024-07-15 21:49:13.697878] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:40.467 [2024-07-15 21:49:13.697973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:40.467 BaseBdev2 00:35:40.467 21:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:40.467 21:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:40.726 BaseBdev3_malloc 00:35:40.726 21:49:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:40.726 [2024-07-15 21:49:14.084839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:40.726 [2024-07-15 21:49:14.085059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:40.726 [2024-07-15 21:49:14.085118] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:35:40.726 [2024-07-15 21:49:14.085169] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:40.726 [2024-07-15 21:49:14.087688] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:40.726 [2024-07-15 21:49:14.087788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:40.726 BaseBdev3 00:35:40.726 21:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:40.726 21:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:40.983 BaseBdev4_malloc 00:35:40.983 21:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:41.245 [2024-07-15 21:49:14.489925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:41.245 [2024-07-15 21:49:14.490135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:41.245 [2024-07-15 21:49:14.490208] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:41.245 [2024-07-15 21:49:14.490256] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:41.245 [2024-07-15 21:49:14.492861] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:41.245 [2024-07-15 21:49:14.492957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:41.245 BaseBdev4 00:35:41.245 21:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:41.502 spare_malloc 00:35:41.502 21:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:41.770 spare_delay 00:35:41.770 21:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:41.770 [2024-07-15 21:49:15.118328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:41.770 [2024-07-15 21:49:15.118554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:41.770 [2024-07-15 21:49:15.118606] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:35:41.770 [2024-07-15 21:49:15.118659] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:41.770 [2024-07-15 21:49:15.121366] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:41.770 [2024-07-15 21:49:15.121471] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:41.770 spare 00:35:41.770 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:35:42.028 [2024-07-15 21:49:15.290178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:42.028 [2024-07-15 21:49:15.292495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:42.028 [2024-07-15 21:49:15.292632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:42.028 [2024-07-15 21:49:15.292718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:42.028 [2024-07-15 21:49:15.292863] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:35:42.028 [2024-07-15 21:49:15.292898] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:35:42.028 [2024-07-15 21:49:15.293094] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:42.028 [2024-07-15 21:49:15.301302] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:35:42.028 [2024-07-15 21:49:15.301378] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:35:42.028 [2024-07-15 21:49:15.301670] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.028 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:42.286 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:42.286 "name": "raid_bdev1", 00:35:42.286 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:42.286 "strip_size_kb": 64, 00:35:42.286 "state": "online", 00:35:42.286 "raid_level": "raid5f", 00:35:42.286 "superblock": false, 00:35:42.286 "num_base_bdevs": 4, 00:35:42.286 "num_base_bdevs_discovered": 4, 00:35:42.286 "num_base_bdevs_operational": 4, 00:35:42.286 "base_bdevs_list": [ 00:35:42.286 { 00:35:42.286 "name": 
"BaseBdev1", 00:35:42.286 "uuid": "7c4f6644-fef6-54db-b020-24848a835584", 00:35:42.286 "is_configured": true, 00:35:42.286 "data_offset": 0, 00:35:42.286 "data_size": 65536 00:35:42.286 }, 00:35:42.286 { 00:35:42.286 "name": "BaseBdev2", 00:35:42.286 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:42.286 "is_configured": true, 00:35:42.286 "data_offset": 0, 00:35:42.286 "data_size": 65536 00:35:42.286 }, 00:35:42.286 { 00:35:42.286 "name": "BaseBdev3", 00:35:42.286 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:42.286 "is_configured": true, 00:35:42.286 "data_offset": 0, 00:35:42.286 "data_size": 65536 00:35:42.286 }, 00:35:42.286 { 00:35:42.286 "name": "BaseBdev4", 00:35:42.286 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:42.286 "is_configured": true, 00:35:42.286 "data_offset": 0, 00:35:42.286 "data_size": 65536 00:35:42.286 } 00:35:42.286 ] 00:35:42.286 }' 00:35:42.286 21:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:42.286 21:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.852 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:42.852 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:43.110 [2024-07-15 21:49:16.350731] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:43.110 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:35:43.110 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:43.110 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:43.368 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:35:43.368 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:43.368 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:43.368 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:43.368 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:43.368 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:43.368 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:43.368 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:43.369 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:43.369 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:43.369 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:43.369 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:43.369 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:43.369 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:43.627 [2024-07-15 21:49:16.754394] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 
00:35:43.627 /dev/nbd0 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:43.627 1+0 records in 00:35:43.627 1+0 records out 00:35:43.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495108 s, 8.3 MB/s 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:35:43.627 21:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:35:44.196 512+0 records in 00:35:44.196 512+0 records out 00:35:44.196 100663296 bytes (101 MB, 96 MiB) copied, 0.587857 s, 171 MB/s 00:35:44.196 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:44.196 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:44.196 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:44.196 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:44.196 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:44.196 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:44.196 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:44.456 [2024-07-15 21:49:17.618713] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:44.456 [2024-07-15 21:49:17.793405] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.456 21:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.716 21:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:44.716 "name": "raid_bdev1", 00:35:44.716 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:44.716 "strip_size_kb": 64, 00:35:44.716 "state": "online", 00:35:44.716 "raid_level": "raid5f", 00:35:44.716 "superblock": false, 00:35:44.716 "num_base_bdevs": 4, 00:35:44.716 "num_base_bdevs_discovered": 3, 00:35:44.716 "num_base_bdevs_operational": 3, 00:35:44.716 "base_bdevs_list": [ 00:35:44.716 { 00:35:44.716 "name": null, 00:35:44.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.716 "is_configured": false, 00:35:44.716 "data_offset": 0, 00:35:44.716 "data_size": 65536 00:35:44.716 }, 00:35:44.716 { 00:35:44.716 "name": "BaseBdev2", 00:35:44.716 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:44.716 "is_configured": true, 00:35:44.716 "data_offset": 0, 00:35:44.716 "data_size": 65536 00:35:44.716 
}, 00:35:44.716 { 00:35:44.716 "name": "BaseBdev3", 00:35:44.716 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:44.716 "is_configured": true, 00:35:44.716 "data_offset": 0, 00:35:44.716 "data_size": 65536 00:35:44.716 }, 00:35:44.716 { 00:35:44.716 "name": "BaseBdev4", 00:35:44.716 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:44.716 "is_configured": true, 00:35:44.716 "data_offset": 0, 00:35:44.716 "data_size": 65536 00:35:44.716 } 00:35:44.716 ] 00:35:44.716 }' 00:35:44.716 21:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:44.716 21:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:45.285 21:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:45.567 [2024-07-15 21:49:18.795563] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:45.567 [2024-07-15 21:49:18.810754] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d7d0 00:35:45.567 [2024-07-15 21:49:18.820013] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:45.567 21:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:46.505 21:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:46.505 21:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:46.505 21:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:46.505 21:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:46.505 21:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:46.505 21:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.505 21:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:46.764 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:46.764 "name": "raid_bdev1", 00:35:46.764 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:46.764 "strip_size_kb": 64, 00:35:46.764 "state": "online", 00:35:46.764 "raid_level": "raid5f", 00:35:46.764 "superblock": false, 00:35:46.764 "num_base_bdevs": 4, 00:35:46.764 "num_base_bdevs_discovered": 4, 00:35:46.764 "num_base_bdevs_operational": 4, 00:35:46.764 "process": { 00:35:46.764 "type": "rebuild", 00:35:46.764 "target": "spare", 00:35:46.764 "progress": { 00:35:46.764 "blocks": 21120, 00:35:46.764 "percent": 10 00:35:46.764 } 00:35:46.764 }, 00:35:46.764 "base_bdevs_list": [ 00:35:46.764 { 00:35:46.764 "name": "spare", 00:35:46.764 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:46.764 "is_configured": true, 00:35:46.764 "data_offset": 0, 00:35:46.764 "data_size": 65536 00:35:46.764 }, 00:35:46.764 { 00:35:46.764 "name": "BaseBdev2", 00:35:46.764 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:46.764 "is_configured": true, 00:35:46.764 "data_offset": 0, 00:35:46.764 "data_size": 65536 00:35:46.764 }, 00:35:46.764 { 00:35:46.764 "name": "BaseBdev3", 00:35:46.764 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:46.764 "is_configured": true, 00:35:46.764 "data_offset": 0, 00:35:46.764 "data_size": 
65536 00:35:46.764 }, 00:35:46.764 { 00:35:46.764 "name": "BaseBdev4", 00:35:46.764 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:46.764 "is_configured": true, 00:35:46.764 "data_offset": 0, 00:35:46.764 "data_size": 65536 00:35:46.764 } 00:35:46.764 ] 00:35:46.764 }' 00:35:46.764 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:46.764 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:46.764 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:46.764 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:46.764 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:47.022 [2024-07-15 21:49:20.291578] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:47.022 [2024-07-15 21:49:20.331635] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:47.022 [2024-07-15 21:49:20.331773] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:47.023 [2024-07-15 21:49:20.331809] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:47.023 [2024-07-15 21:49:20.331845] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.023 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.282 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:47.282 "name": "raid_bdev1", 00:35:47.282 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:47.282 "strip_size_kb": 64, 00:35:47.282 "state": "online", 00:35:47.282 "raid_level": "raid5f", 00:35:47.282 "superblock": false, 00:35:47.282 "num_base_bdevs": 4, 00:35:47.282 "num_base_bdevs_discovered": 3, 00:35:47.282 "num_base_bdevs_operational": 3, 00:35:47.282 "base_bdevs_list": [ 00:35:47.282 { 00:35:47.282 "name": null, 00:35:47.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.282 
"is_configured": false, 00:35:47.282 "data_offset": 0, 00:35:47.282 "data_size": 65536 00:35:47.282 }, 00:35:47.282 { 00:35:47.282 "name": "BaseBdev2", 00:35:47.282 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:47.282 "is_configured": true, 00:35:47.282 "data_offset": 0, 00:35:47.282 "data_size": 65536 00:35:47.282 }, 00:35:47.282 { 00:35:47.282 "name": "BaseBdev3", 00:35:47.282 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:47.282 "is_configured": true, 00:35:47.282 "data_offset": 0, 00:35:47.282 "data_size": 65536 00:35:47.282 }, 00:35:47.282 { 00:35:47.282 "name": "BaseBdev4", 00:35:47.282 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:47.282 "is_configured": true, 00:35:47.282 "data_offset": 0, 00:35:47.282 "data_size": 65536 00:35:47.282 } 00:35:47.282 ] 00:35:47.282 }' 00:35:47.282 21:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:47.282 21:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:48.221 "name": "raid_bdev1", 00:35:48.221 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:48.221 "strip_size_kb": 64, 00:35:48.221 "state": "online", 00:35:48.221 "raid_level": "raid5f", 00:35:48.221 "superblock": false, 00:35:48.221 "num_base_bdevs": 4, 00:35:48.221 "num_base_bdevs_discovered": 3, 00:35:48.221 "num_base_bdevs_operational": 3, 00:35:48.221 "base_bdevs_list": [ 00:35:48.221 { 00:35:48.221 "name": null, 00:35:48.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.221 "is_configured": false, 00:35:48.221 "data_offset": 0, 00:35:48.221 "data_size": 65536 00:35:48.221 }, 00:35:48.221 { 00:35:48.221 "name": "BaseBdev2", 00:35:48.221 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:48.221 "is_configured": true, 00:35:48.221 "data_offset": 0, 00:35:48.221 "data_size": 65536 00:35:48.221 }, 00:35:48.221 { 00:35:48.221 "name": "BaseBdev3", 00:35:48.221 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:48.221 "is_configured": true, 00:35:48.221 "data_offset": 0, 00:35:48.221 "data_size": 65536 00:35:48.221 }, 00:35:48.221 { 00:35:48.221 "name": "BaseBdev4", 00:35:48.221 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:48.221 "is_configured": true, 00:35:48.221 "data_offset": 0, 00:35:48.221 "data_size": 65536 00:35:48.221 } 00:35:48.221 ] 00:35:48.221 }' 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:48.221 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:48.481 [2024-07-15 21:49:21.791988] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:48.481 [2024-07-15 21:49:21.807332] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d970 00:35:48.481 [2024-07-15 21:49:21.817367] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:48.481 21:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:49.860 21:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:49.860 21:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:49.860 21:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:49.860 21:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:49.860 21:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:49.860 21:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.860 21:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:49.860 "name": "raid_bdev1", 00:35:49.860 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:49.860 "strip_size_kb": 64, 00:35:49.860 "state": "online", 00:35:49.860 "raid_level": "raid5f", 00:35:49.860 "superblock": false, 00:35:49.860 "num_base_bdevs": 4, 00:35:49.860 "num_base_bdevs_discovered": 4, 00:35:49.860 "num_base_bdevs_operational": 4, 00:35:49.860 "process": { 00:35:49.860 "type": "rebuild", 00:35:49.860 "target": "spare", 00:35:49.860 "progress": { 00:35:49.860 "blocks": 23040, 00:35:49.860 "percent": 11 00:35:49.860 } 00:35:49.860 }, 00:35:49.860 "base_bdevs_list": [ 00:35:49.860 { 00:35:49.860 "name": "spare", 00:35:49.860 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:49.860 "is_configured": true, 00:35:49.860 "data_offset": 0, 00:35:49.860 "data_size": 65536 00:35:49.860 }, 00:35:49.860 { 00:35:49.860 "name": "BaseBdev2", 00:35:49.860 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:49.860 "is_configured": true, 00:35:49.860 "data_offset": 0, 00:35:49.860 "data_size": 65536 00:35:49.860 }, 00:35:49.860 { 00:35:49.860 "name": "BaseBdev3", 00:35:49.860 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:49.860 "is_configured": true, 00:35:49.860 "data_offset": 0, 00:35:49.860 "data_size": 65536 00:35:49.860 }, 00:35:49.860 { 00:35:49.860 "name": "BaseBdev4", 00:35:49.860 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:49.860 "is_configured": true, 00:35:49.860 "data_offset": 0, 00:35:49.860 "data_size": 65536 00:35:49.860 } 00:35:49.860 ] 00:35:49.860 }' 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1216 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.860 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.119 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:50.119 "name": "raid_bdev1", 00:35:50.119 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:50.119 "strip_size_kb": 64, 00:35:50.119 "state": "online", 00:35:50.119 "raid_level": "raid5f", 00:35:50.119 "superblock": false, 00:35:50.119 "num_base_bdevs": 4, 00:35:50.119 "num_base_bdevs_discovered": 4, 00:35:50.119 "num_base_bdevs_operational": 4, 00:35:50.119 "process": { 00:35:50.119 "type": "rebuild", 00:35:50.119 "target": "spare", 00:35:50.119 "progress": { 00:35:50.119 "blocks": 30720, 00:35:50.119 "percent": 15 00:35:50.119 } 00:35:50.119 }, 00:35:50.119 "base_bdevs_list": [ 00:35:50.119 { 00:35:50.119 "name": "spare", 00:35:50.119 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:50.119 "is_configured": true, 00:35:50.119 "data_offset": 0, 00:35:50.119 "data_size": 65536 00:35:50.119 }, 00:35:50.119 { 00:35:50.119 "name": "BaseBdev2", 00:35:50.119 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:50.119 "is_configured": true, 00:35:50.119 "data_offset": 0, 00:35:50.119 "data_size": 65536 00:35:50.119 }, 00:35:50.119 { 00:35:50.119 "name": "BaseBdev3", 00:35:50.119 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:50.119 "is_configured": true, 00:35:50.119 "data_offset": 0, 00:35:50.119 "data_size": 65536 00:35:50.119 }, 00:35:50.119 { 00:35:50.119 "name": "BaseBdev4", 00:35:50.119 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:50.119 "is_configured": true, 00:35:50.119 "data_offset": 0, 00:35:50.119 "data_size": 65536 00:35:50.119 } 00:35:50.119 ] 00:35:50.119 }' 00:35:50.119 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:50.377 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:50.377 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.target // "none"' 00:35:50.377 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:50.377 21:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:51.313 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:51.313 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:51.313 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:51.313 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:51.313 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:51.313 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:51.313 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:51.313 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.572 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:51.572 "name": "raid_bdev1", 00:35:51.572 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:51.572 "strip_size_kb": 64, 00:35:51.572 "state": "online", 00:35:51.572 "raid_level": "raid5f", 00:35:51.572 "superblock": false, 00:35:51.572 "num_base_bdevs": 4, 00:35:51.572 "num_base_bdevs_discovered": 4, 00:35:51.572 "num_base_bdevs_operational": 4, 00:35:51.572 "process": { 00:35:51.572 "type": "rebuild", 00:35:51.572 "target": "spare", 00:35:51.572 "progress": { 00:35:51.572 "blocks": 55680, 00:35:51.572 "percent": 28 00:35:51.572 } 00:35:51.572 }, 00:35:51.572 "base_bdevs_list": [ 00:35:51.572 { 00:35:51.572 "name": "spare", 00:35:51.572 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:51.572 "is_configured": true, 00:35:51.572 "data_offset": 0, 00:35:51.572 "data_size": 65536 00:35:51.572 }, 00:35:51.572 { 00:35:51.572 "name": "BaseBdev2", 00:35:51.572 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:51.572 "is_configured": true, 00:35:51.572 "data_offset": 0, 00:35:51.572 "data_size": 65536 00:35:51.572 }, 00:35:51.572 { 00:35:51.572 "name": "BaseBdev3", 00:35:51.572 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:51.572 "is_configured": true, 00:35:51.572 "data_offset": 0, 00:35:51.572 "data_size": 65536 00:35:51.572 }, 00:35:51.572 { 00:35:51.572 "name": "BaseBdev4", 00:35:51.572 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:51.572 "is_configured": true, 00:35:51.572 "data_offset": 0, 00:35:51.572 "data_size": 65536 00:35:51.572 } 00:35:51.572 ] 00:35:51.572 }' 00:35:51.572 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:51.572 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:51.572 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:51.572 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:51.572 21:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:52.511 21:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:52.511 21:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:52.511 21:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:52.511 21:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:52.511 21:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:52.511 21:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:52.511 21:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.511 21:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:52.792 21:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:52.792 "name": "raid_bdev1", 00:35:52.792 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:52.792 "strip_size_kb": 64, 00:35:52.792 "state": "online", 00:35:52.792 "raid_level": "raid5f", 00:35:52.792 "superblock": false, 00:35:52.792 "num_base_bdevs": 4, 00:35:52.792 "num_base_bdevs_discovered": 4, 00:35:52.792 "num_base_bdevs_operational": 4, 00:35:52.792 "process": { 00:35:52.792 "type": "rebuild", 00:35:52.792 "target": "spare", 00:35:52.792 "progress": { 00:35:52.792 "blocks": 80640, 00:35:52.792 "percent": 41 00:35:52.792 } 00:35:52.792 }, 00:35:52.792 "base_bdevs_list": [ 00:35:52.792 { 00:35:52.793 "name": "spare", 00:35:52.793 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:52.793 "is_configured": true, 00:35:52.793 "data_offset": 0, 00:35:52.793 "data_size": 65536 00:35:52.793 }, 00:35:52.793 { 00:35:52.793 "name": "BaseBdev2", 00:35:52.793 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:52.793 "is_configured": true, 00:35:52.793 "data_offset": 0, 00:35:52.793 "data_size": 65536 00:35:52.793 }, 00:35:52.793 { 00:35:52.793 "name": "BaseBdev3", 00:35:52.793 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:52.793 "is_configured": true, 00:35:52.793 "data_offset": 0, 00:35:52.793 "data_size": 65536 00:35:52.793 }, 00:35:52.793 { 00:35:52.793 "name": "BaseBdev4", 00:35:52.793 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:52.793 "is_configured": true, 00:35:52.793 "data_offset": 0, 00:35:52.793 "data_size": 65536 00:35:52.793 } 00:35:52.793 ] 00:35:52.793 }' 00:35:52.793 21:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:53.055 21:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:53.055 21:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:53.055 21:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:53.055 21:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:53.989 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:53.989 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:53.989 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:53.989 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:53.989 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:53.989 21:49:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:53.989 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:53.989 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.247 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:54.247 "name": "raid_bdev1", 00:35:54.247 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:54.247 "strip_size_kb": 64, 00:35:54.247 "state": "online", 00:35:54.247 "raid_level": "raid5f", 00:35:54.247 "superblock": false, 00:35:54.247 "num_base_bdevs": 4, 00:35:54.247 "num_base_bdevs_discovered": 4, 00:35:54.247 "num_base_bdevs_operational": 4, 00:35:54.247 "process": { 00:35:54.247 "type": "rebuild", 00:35:54.247 "target": "spare", 00:35:54.247 "progress": { 00:35:54.247 "blocks": 107520, 00:35:54.247 "percent": 54 00:35:54.247 } 00:35:54.247 }, 00:35:54.247 "base_bdevs_list": [ 00:35:54.247 { 00:35:54.247 "name": "spare", 00:35:54.247 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:54.247 "is_configured": true, 00:35:54.247 "data_offset": 0, 00:35:54.247 "data_size": 65536 00:35:54.247 }, 00:35:54.247 { 00:35:54.247 "name": "BaseBdev2", 00:35:54.247 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:54.247 "is_configured": true, 00:35:54.247 "data_offset": 0, 00:35:54.247 "data_size": 65536 00:35:54.247 }, 00:35:54.247 { 00:35:54.247 "name": "BaseBdev3", 00:35:54.247 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:54.247 "is_configured": true, 00:35:54.247 "data_offset": 0, 00:35:54.247 "data_size": 65536 00:35:54.247 }, 00:35:54.247 { 00:35:54.247 "name": "BaseBdev4", 00:35:54.247 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:54.247 "is_configured": true, 00:35:54.247 "data_offset": 0, 00:35:54.247 "data_size": 65536 00:35:54.247 } 00:35:54.247 ] 00:35:54.247 }' 00:35:54.247 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:54.247 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:54.247 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:54.247 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:54.247 21:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:55.626 21:49:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:55.626 "name": "raid_bdev1", 00:35:55.626 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:55.626 "strip_size_kb": 64, 00:35:55.626 "state": "online", 00:35:55.626 "raid_level": "raid5f", 00:35:55.626 "superblock": false, 00:35:55.626 "num_base_bdevs": 4, 00:35:55.626 "num_base_bdevs_discovered": 4, 00:35:55.626 "num_base_bdevs_operational": 4, 00:35:55.626 "process": { 00:35:55.626 "type": "rebuild", 00:35:55.626 "target": "spare", 00:35:55.626 "progress": { 00:35:55.626 "blocks": 132480, 00:35:55.626 "percent": 67 00:35:55.626 } 00:35:55.626 }, 00:35:55.626 "base_bdevs_list": [ 00:35:55.626 { 00:35:55.626 "name": "spare", 00:35:55.626 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:55.626 "is_configured": true, 00:35:55.626 "data_offset": 0, 00:35:55.626 "data_size": 65536 00:35:55.626 }, 00:35:55.626 { 00:35:55.626 "name": "BaseBdev2", 00:35:55.626 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:55.626 "is_configured": true, 00:35:55.626 "data_offset": 0, 00:35:55.626 "data_size": 65536 00:35:55.626 }, 00:35:55.626 { 00:35:55.626 "name": "BaseBdev3", 00:35:55.626 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:55.626 "is_configured": true, 00:35:55.626 "data_offset": 0, 00:35:55.626 "data_size": 65536 00:35:55.626 }, 00:35:55.626 { 00:35:55.626 "name": "BaseBdev4", 00:35:55.626 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:55.626 "is_configured": true, 00:35:55.626 "data_offset": 0, 00:35:55.626 "data_size": 65536 00:35:55.626 } 00:35:55.626 ] 00:35:55.626 }' 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:55.626 21:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:57.005 21:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:57.005 21:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:57.005 21:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:57.005 21:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:57.005 21:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:57.005 21:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:57.005 21:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:57.005 21:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.005 21:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:57.005 "name": "raid_bdev1", 00:35:57.005 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:57.005 "strip_size_kb": 64, 00:35:57.005 "state": "online", 00:35:57.005 "raid_level": "raid5f", 00:35:57.005 "superblock": false, 00:35:57.005 "num_base_bdevs": 4, 00:35:57.005 "num_base_bdevs_discovered": 4, 00:35:57.005 
"num_base_bdevs_operational": 4, 00:35:57.005 "process": { 00:35:57.005 "type": "rebuild", 00:35:57.005 "target": "spare", 00:35:57.005 "progress": { 00:35:57.005 "blocks": 157440, 00:35:57.005 "percent": 80 00:35:57.005 } 00:35:57.005 }, 00:35:57.005 "base_bdevs_list": [ 00:35:57.005 { 00:35:57.005 "name": "spare", 00:35:57.005 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:57.005 "is_configured": true, 00:35:57.005 "data_offset": 0, 00:35:57.005 "data_size": 65536 00:35:57.005 }, 00:35:57.005 { 00:35:57.005 "name": "BaseBdev2", 00:35:57.005 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:57.005 "is_configured": true, 00:35:57.005 "data_offset": 0, 00:35:57.005 "data_size": 65536 00:35:57.005 }, 00:35:57.005 { 00:35:57.006 "name": "BaseBdev3", 00:35:57.006 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:57.006 "is_configured": true, 00:35:57.006 "data_offset": 0, 00:35:57.006 "data_size": 65536 00:35:57.006 }, 00:35:57.006 { 00:35:57.006 "name": "BaseBdev4", 00:35:57.006 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:57.006 "is_configured": true, 00:35:57.006 "data_offset": 0, 00:35:57.006 "data_size": 65536 00:35:57.006 } 00:35:57.006 ] 00:35:57.006 }' 00:35:57.006 21:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:57.006 21:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:57.006 21:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:57.006 21:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:57.006 21:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:57.941 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:57.941 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:57.941 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:57.941 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:57.941 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:57.941 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:57.941 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.941 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.200 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:58.200 "name": "raid_bdev1", 00:35:58.200 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:58.200 "strip_size_kb": 64, 00:35:58.200 "state": "online", 00:35:58.200 "raid_level": "raid5f", 00:35:58.200 "superblock": false, 00:35:58.200 "num_base_bdevs": 4, 00:35:58.200 "num_base_bdevs_discovered": 4, 00:35:58.200 "num_base_bdevs_operational": 4, 00:35:58.200 "process": { 00:35:58.200 "type": "rebuild", 00:35:58.200 "target": "spare", 00:35:58.200 "progress": { 00:35:58.200 "blocks": 182400, 00:35:58.200 "percent": 92 00:35:58.200 } 00:35:58.200 }, 00:35:58.200 "base_bdevs_list": [ 00:35:58.200 { 00:35:58.200 "name": "spare", 00:35:58.200 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:58.200 "is_configured": true, 
00:35:58.200 "data_offset": 0, 00:35:58.200 "data_size": 65536 00:35:58.200 }, 00:35:58.200 { 00:35:58.200 "name": "BaseBdev2", 00:35:58.200 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:58.200 "is_configured": true, 00:35:58.200 "data_offset": 0, 00:35:58.200 "data_size": 65536 00:35:58.200 }, 00:35:58.200 { 00:35:58.200 "name": "BaseBdev3", 00:35:58.200 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:58.200 "is_configured": true, 00:35:58.200 "data_offset": 0, 00:35:58.200 "data_size": 65536 00:35:58.200 }, 00:35:58.200 { 00:35:58.200 "name": "BaseBdev4", 00:35:58.200 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:58.200 "is_configured": true, 00:35:58.200 "data_offset": 0, 00:35:58.200 "data_size": 65536 00:35:58.200 } 00:35:58.200 ] 00:35:58.200 }' 00:35:58.200 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:58.200 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:58.200 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:58.457 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:58.457 21:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:59.025 [2024-07-15 21:49:32.207716] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:59.025 [2024-07-15 21:49:32.207907] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:59.025 [2024-07-15 21:49:32.208008] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:59.284 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:59.284 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:59.284 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:59.284 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:59.284 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:59.284 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:59.284 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:59.284 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:59.547 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:59.547 "name": "raid_bdev1", 00:35:59.547 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:59.547 "strip_size_kb": 64, 00:35:59.547 "state": "online", 00:35:59.547 "raid_level": "raid5f", 00:35:59.547 "superblock": false, 00:35:59.547 "num_base_bdevs": 4, 00:35:59.547 "num_base_bdevs_discovered": 4, 00:35:59.547 "num_base_bdevs_operational": 4, 00:35:59.547 "base_bdevs_list": [ 00:35:59.547 { 00:35:59.547 "name": "spare", 00:35:59.547 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:59.547 "is_configured": true, 00:35:59.547 "data_offset": 0, 00:35:59.547 "data_size": 65536 00:35:59.547 }, 00:35:59.547 { 00:35:59.547 "name": "BaseBdev2", 00:35:59.547 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:59.547 "is_configured": true, 00:35:59.547 
"data_offset": 0, 00:35:59.547 "data_size": 65536 00:35:59.547 }, 00:35:59.547 { 00:35:59.547 "name": "BaseBdev3", 00:35:59.547 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:59.547 "is_configured": true, 00:35:59.547 "data_offset": 0, 00:35:59.547 "data_size": 65536 00:35:59.547 }, 00:35:59.547 { 00:35:59.547 "name": "BaseBdev4", 00:35:59.547 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:59.547 "is_configured": true, 00:35:59.547 "data_offset": 0, 00:35:59.547 "data_size": 65536 00:35:59.547 } 00:35:59.547 ] 00:35:59.547 }' 00:35:59.547 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:59.815 21:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:59.815 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:59.815 "name": "raid_bdev1", 00:35:59.815 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:35:59.815 "strip_size_kb": 64, 00:35:59.815 "state": "online", 00:35:59.815 "raid_level": "raid5f", 00:35:59.815 "superblock": false, 00:35:59.815 "num_base_bdevs": 4, 00:35:59.815 "num_base_bdevs_discovered": 4, 00:35:59.815 "num_base_bdevs_operational": 4, 00:35:59.815 "base_bdevs_list": [ 00:35:59.815 { 00:35:59.815 "name": "spare", 00:35:59.815 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:35:59.815 "is_configured": true, 00:35:59.815 "data_offset": 0, 00:35:59.815 "data_size": 65536 00:35:59.815 }, 00:35:59.815 { 00:35:59.815 "name": "BaseBdev2", 00:35:59.815 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:35:59.815 "is_configured": true, 00:35:59.815 "data_offset": 0, 00:35:59.815 "data_size": 65536 00:35:59.815 }, 00:35:59.815 { 00:35:59.815 "name": "BaseBdev3", 00:35:59.815 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:35:59.815 "is_configured": true, 00:35:59.815 "data_offset": 0, 00:35:59.815 "data_size": 65536 00:35:59.815 }, 00:35:59.815 { 00:35:59.815 "name": "BaseBdev4", 00:35:59.815 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:35:59.815 "is_configured": true, 00:35:59.815 "data_offset": 0, 00:35:59.815 "data_size": 65536 00:35:59.815 } 00:35:59.815 ] 00:35:59.815 }' 00:35:59.815 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:00.071 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ 
none == \n\o\n\e ]] 00:36:00.071 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:00.071 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:00.071 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:00.071 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:00.071 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:00.071 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:00.071 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:00.072 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:00.072 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:00.072 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:00.072 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:00.072 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:00.072 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.072 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.332 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:00.332 "name": "raid_bdev1", 00:36:00.332 "uuid": "d10c84b2-e9a1-430b-8008-903593d3a1ff", 00:36:00.332 "strip_size_kb": 64, 00:36:00.332 "state": "online", 00:36:00.332 "raid_level": "raid5f", 00:36:00.332 "superblock": false, 00:36:00.332 "num_base_bdevs": 4, 00:36:00.332 "num_base_bdevs_discovered": 4, 00:36:00.332 "num_base_bdevs_operational": 4, 00:36:00.332 "base_bdevs_list": [ 00:36:00.332 { 00:36:00.332 "name": "spare", 00:36:00.332 "uuid": "a6e016fe-7096-50d6-b1a7-6d05c6c89029", 00:36:00.332 "is_configured": true, 00:36:00.332 "data_offset": 0, 00:36:00.332 "data_size": 65536 00:36:00.332 }, 00:36:00.332 { 00:36:00.332 "name": "BaseBdev2", 00:36:00.332 "uuid": "fb53c94e-86b2-59fb-bf59-890b49c88fe5", 00:36:00.332 "is_configured": true, 00:36:00.332 "data_offset": 0, 00:36:00.332 "data_size": 65536 00:36:00.332 }, 00:36:00.332 { 00:36:00.332 "name": "BaseBdev3", 00:36:00.332 "uuid": "a3618845-9f5b-5063-9f3a-be943db5ffb6", 00:36:00.332 "is_configured": true, 00:36:00.332 "data_offset": 0, 00:36:00.332 "data_size": 65536 00:36:00.332 }, 00:36:00.332 { 00:36:00.332 "name": "BaseBdev4", 00:36:00.332 "uuid": "253b6c09-38c6-5b54-8cde-8e5a71ad9c05", 00:36:00.332 "is_configured": true, 00:36:00.332 "data_offset": 0, 00:36:00.332 "data_size": 65536 00:36:00.332 } 00:36:00.332 ] 00:36:00.332 }' 00:36:00.332 21:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:00.332 21:49:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.899 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:01.159 [2024-07-15 21:49:34.350794] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:36:01.159 [2024-07-15 21:49:34.350893] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:01.159 [2024-07-15 21:49:34.351007] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:01.159 [2024-07-15 21:49:34.351138] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:01.159 [2024-07-15 21:49:34.351159] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:36:01.159 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:01.159 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:01.417 /dev/nbd0 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:01.417 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:01.677 1+0 records in 00:36:01.677 
1+0 records out 00:36:01.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410044 s, 10.0 MB/s 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:01.677 21:49:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:01.677 /dev/nbd1 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:01.677 1+0 records in 00:36:01.677 1+0 records out 00:36:01.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031933 s, 12.8 MB/s 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:01.677 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:36:01.936 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:01.936 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:01.936 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:36:01.936 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:01.936 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:36:01.936 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.936 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:02.195 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:02.196 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:02.454 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:02.454 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:02.454 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:02.454 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:02.454 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:02.454 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:02.454 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 158949 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@948 -- # '[' -z 158949 ']' 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 158949 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158949 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 158949' 00:36:02.713 killing process with pid 158949 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 158949 00:36:02.713 21:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 158949 00:36:02.713 Received shutdown signal, test time was about 60.000000 seconds 00:36:02.713 00:36:02.713 Latency(us) 00:36:02.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.713 =================================================================================================================== 00:36:02.713 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:02.713 [2024-07-15 21:49:35.917444] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:03.292 [2024-07-15 21:49:36.453331] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:04.667 ************************************ 00:36:04.667 END TEST raid5f_rebuild_test 00:36:04.667 ************************************ 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:36:04.667 00:36:04.667 real 0m26.144s 00:36:04.667 user 0m37.539s 00:36:04.667 sys 0m3.134s 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.667 21:49:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:04.667 21:49:37 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:36:04.667 21:49:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:36:04.667 21:49:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:04.667 21:49:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:04.667 ************************************ 00:36:04.667 START TEST raid5f_rebuild_test_sb 00:36:04.667 ************************************ 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 true false true 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:36:04.667 21:49:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=159636 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 159636 /var/tmp/spdk-raid.sock 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 159636 ']' 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:04.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:04.667 21:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.667 [2024-07-15 21:49:37.976199] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:36:04.667 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:04.667 Zero copy mechanism will not be used. 00:36:04.667 [2024-07-15 21:49:37.976523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159636 ] 00:36:04.924 [2024-07-15 21:49:38.154093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.182 [2024-07-15 21:49:38.373968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.438 [2024-07-15 21:49:38.584279] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:05.696 21:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:05.696 21:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:36:05.696 21:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:05.696 21:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:05.696 BaseBdev1_malloc 00:36:05.955 21:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:05.955 [2024-07-15 21:49:39.279630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:05.955 [2024-07-15 21:49:39.279826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:05.955 [2024-07-15 21:49:39.279892] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:36:05.955 [2024-07-15 21:49:39.279934] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:05.955 [2024-07-15 21:49:39.282210] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:05.955 [2024-07-15 21:49:39.282314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:05.955 BaseBdev1 00:36:05.955 21:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:05.955 21:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:06.213 BaseBdev2_malloc 00:36:06.213 21:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:06.470 [2024-07-15 21:49:39.771137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:06.470 [2024-07-15 21:49:39.771344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:06.470 [2024-07-15 21:49:39.771401] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:36:06.470 [2024-07-15 21:49:39.771445] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:06.470 [2024-07-15 21:49:39.773666] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:06.470 [2024-07-15 21:49:39.773752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:06.471 BaseBdev2 00:36:06.471 21:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:06.471 21:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:06.729 BaseBdev3_malloc 00:36:06.729 21:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:36:06.987 [2024-07-15 21:49:40.216401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:36:06.987 [2024-07-15 21:49:40.216577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:06.987 [2024-07-15 21:49:40.216627] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:36:06.987 [2024-07-15 21:49:40.216668] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:06.987 [2024-07-15 21:49:40.218870] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:06.987 [2024-07-15 21:49:40.218976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:06.987 BaseBdev3 00:36:06.987 21:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:06.987 21:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:36:07.245 BaseBdev4_malloc 00:36:07.245 21:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:36:07.503 [2024-07-15 21:49:40.676301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:36:07.503 [2024-07-15 21:49:40.676451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.503 [2024-07-15 21:49:40.676502] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:07.503 [2024-07-15 21:49:40.676545] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.503 [2024-07-15 21:49:40.678679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.503 [2024-07-15 21:49:40.678764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:07.503 BaseBdev4 00:36:07.503 21:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:36:07.762 spare_malloc 00:36:07.762 21:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:08.022 spare_delay 00:36:08.022 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:08.022 [2024-07-15 21:49:41.356759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:08.022 [2024-07-15 21:49:41.356944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:08.022 [2024-07-15 21:49:41.356993] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:36:08.022 [2024-07-15 21:49:41.357043] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:08.022 [2024-07-15 21:49:41.359190] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:08.022 [2024-07-15 21:49:41.359297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:08.022 spare 00:36:08.022 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:36:08.281 [2024-07-15 21:49:41.572481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:08.281 [2024-07-15 21:49:41.574429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:08.281 [2024-07-15 21:49:41.574548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:08.281 [2024-07-15 21:49:41.574617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:08.281 [2024-07-15 21:49:41.574899] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:36:08.281 [2024-07-15 21:49:41.574948] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:08.281 [2024-07-15 21:49:41.575121] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:08.281 [2024-07-15 21:49:41.583826] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:36:08.281 [2024-07-15 21:49:41.583888] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:36:08.281 [2024-07-15 21:49:41.584145] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.281 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.540 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:08.540 "name": "raid_bdev1", 00:36:08.540 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:08.540 "strip_size_kb": 64, 00:36:08.540 "state": "online", 00:36:08.540 "raid_level": "raid5f", 00:36:08.540 "superblock": true, 00:36:08.540 "num_base_bdevs": 4, 00:36:08.540 "num_base_bdevs_discovered": 4, 00:36:08.540 "num_base_bdevs_operational": 4, 00:36:08.540 "base_bdevs_list": [ 00:36:08.540 { 00:36:08.540 "name": "BaseBdev1", 00:36:08.540 "uuid": "3cfa9e47-8335-5d41-9011-5b0a06ead505", 00:36:08.540 "is_configured": true, 00:36:08.540 "data_offset": 2048, 00:36:08.540 "data_size": 63488 00:36:08.540 }, 00:36:08.540 { 00:36:08.540 "name": "BaseBdev2", 00:36:08.540 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:08.540 "is_configured": true, 00:36:08.540 "data_offset": 2048, 00:36:08.540 "data_size": 63488 00:36:08.540 }, 00:36:08.540 { 00:36:08.540 "name": "BaseBdev3", 00:36:08.540 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:08.540 "is_configured": true, 00:36:08.540 "data_offset": 2048, 00:36:08.540 "data_size": 63488 00:36:08.540 }, 00:36:08.540 { 00:36:08.540 "name": "BaseBdev4", 00:36:08.540 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:08.540 "is_configured": true, 00:36:08.540 "data_offset": 2048, 00:36:08.540 "data_size": 63488 00:36:08.540 } 00:36:08.540 ] 00:36:08.540 }' 00:36:08.540 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:08.540 21:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.475 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:09.475 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:36:09.475 [2024-07-15 21:49:42.691807] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:09.475 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:36:09.475 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:09.475 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:36:09.733 
21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:09.733 21:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:09.993 [2024-07-15 21:49:43.122923] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:09.993 /dev/nbd0 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:09.993 1+0 records in 00:36:09.993 1+0 records out 00:36:09.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485353 s, 8.4 MB/s 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:36:09.993 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:36:10.559 496+0 records in 00:36:10.559 496+0 records out 00:36:10.559 97517568 bytes (98 MB, 93 MiB) copied, 0.560012 s, 174 MB/s 00:36:10.559 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:10.559 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:10.559 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:36:10.559 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:10.559 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:10.559 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:10.559 21:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:36:10.819 [2024-07-15 21:49:44.025423] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:10.819 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:11.078 [2024-07-15 21:49:44.342188] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 
-- # local raid_level=raid5f 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.078 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:11.337 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:11.337 "name": "raid_bdev1", 00:36:11.337 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:11.337 "strip_size_kb": 64, 00:36:11.337 "state": "online", 00:36:11.337 "raid_level": "raid5f", 00:36:11.337 "superblock": true, 00:36:11.337 "num_base_bdevs": 4, 00:36:11.337 "num_base_bdevs_discovered": 3, 00:36:11.337 "num_base_bdevs_operational": 3, 00:36:11.337 "base_bdevs_list": [ 00:36:11.337 { 00:36:11.337 "name": null, 00:36:11.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.337 "is_configured": false, 00:36:11.337 "data_offset": 2048, 00:36:11.337 "data_size": 63488 00:36:11.337 }, 00:36:11.337 { 00:36:11.337 "name": "BaseBdev2", 00:36:11.337 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:11.337 "is_configured": true, 00:36:11.337 "data_offset": 2048, 00:36:11.337 "data_size": 63488 00:36:11.337 }, 00:36:11.337 { 00:36:11.337 "name": "BaseBdev3", 00:36:11.337 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:11.337 "is_configured": true, 00:36:11.337 "data_offset": 2048, 00:36:11.337 "data_size": 63488 00:36:11.337 }, 00:36:11.337 { 00:36:11.337 "name": "BaseBdev4", 00:36:11.337 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:11.337 "is_configured": true, 00:36:11.337 "data_offset": 2048, 00:36:11.337 "data_size": 63488 00:36:11.337 } 00:36:11.337 ] 00:36:11.337 }' 00:36:11.337 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:11.337 21:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.905 21:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:12.164 [2024-07-15 21:49:45.400324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:12.164 [2024-07-15 21:49:45.415844] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cad0 00:36:12.164 [2024-07-15 21:49:45.423633] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:12.164 21:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:13.103 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:13.103 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
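The full-stripe write issued by dd a few entries above (bs=196608, count=496) is sized from the raid5f geometry that bdev_raid_get_bdevs reports; a minimal sketch of the arithmetic, not part of the test script itself:

  strip_size_kb=64; num_base_bdevs=4; blocklen=512           # values from the raid_bdev1 JSON above
  data_stripe_kb=$(( strip_size_kb * (num_base_bdevs - 1) )) # 192 KiB of data per stripe (one chunk is parity)
  echo $(( data_stripe_kb * 1024 ))                          # 196608 — the dd block size (496 of these = 97517568 bytes written)
  echo $(( data_stripe_kb * 1024 / blocklen ))               # 384    — the write_unit_size set at bdev_raid.sh@629
  echo $(( 496 * 384 ))                                      # 190464 — exactly the raid_bdev_size read via jq '.[].num_blocks'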
00:36:13.103 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:13.103 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:13.103 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:13.103 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.103 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.361 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:13.361 "name": "raid_bdev1", 00:36:13.361 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:13.361 "strip_size_kb": 64, 00:36:13.361 "state": "online", 00:36:13.361 "raid_level": "raid5f", 00:36:13.361 "superblock": true, 00:36:13.361 "num_base_bdevs": 4, 00:36:13.361 "num_base_bdevs_discovered": 4, 00:36:13.361 "num_base_bdevs_operational": 4, 00:36:13.361 "process": { 00:36:13.361 "type": "rebuild", 00:36:13.361 "target": "spare", 00:36:13.361 "progress": { 00:36:13.361 "blocks": 21120, 00:36:13.361 "percent": 11 00:36:13.361 } 00:36:13.361 }, 00:36:13.361 "base_bdevs_list": [ 00:36:13.361 { 00:36:13.361 "name": "spare", 00:36:13.361 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:13.361 "is_configured": true, 00:36:13.361 "data_offset": 2048, 00:36:13.361 "data_size": 63488 00:36:13.361 }, 00:36:13.361 { 00:36:13.361 "name": "BaseBdev2", 00:36:13.361 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:13.361 "is_configured": true, 00:36:13.361 "data_offset": 2048, 00:36:13.361 "data_size": 63488 00:36:13.361 }, 00:36:13.361 { 00:36:13.361 "name": "BaseBdev3", 00:36:13.361 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:13.361 "is_configured": true, 00:36:13.361 "data_offset": 2048, 00:36:13.361 "data_size": 63488 00:36:13.361 }, 00:36:13.361 { 00:36:13.361 "name": "BaseBdev4", 00:36:13.361 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:13.361 "is_configured": true, 00:36:13.361 "data_offset": 2048, 00:36:13.361 "data_size": 63488 00:36:13.361 } 00:36:13.361 ] 00:36:13.361 }' 00:36:13.361 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:13.361 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:13.361 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:13.361 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:13.361 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:13.620 [2024-07-15 21:49:46.894468] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:13.620 [2024-07-15 21:49:46.933395] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:13.620 [2024-07-15 21:49:46.933540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:13.620 [2024-07-15 21:49:46.933574] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:13.620 [2024-07-15 21:49:46.933598] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 
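The rebuild status above is read through a jq default so that a missing process object degrades to the literal "none"; a one-liner equivalent of the two-step check in the trace, assuming the same RPC socket is still up:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'
  # prints "rebuild" while the spare is being rebuilt, "none" once the process has finished or been torn down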
00:36:13.620 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:13.620 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:13.620 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:13.620 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:13.620 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:13.620 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:13.620 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:13.621 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:13.621 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:13.621 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:13.621 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.621 21:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.880 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:13.880 "name": "raid_bdev1", 00:36:13.880 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:13.880 "strip_size_kb": 64, 00:36:13.880 "state": "online", 00:36:13.880 "raid_level": "raid5f", 00:36:13.880 "superblock": true, 00:36:13.880 "num_base_bdevs": 4, 00:36:13.880 "num_base_bdevs_discovered": 3, 00:36:13.880 "num_base_bdevs_operational": 3, 00:36:13.880 "base_bdevs_list": [ 00:36:13.880 { 00:36:13.880 "name": null, 00:36:13.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.880 "is_configured": false, 00:36:13.880 "data_offset": 2048, 00:36:13.880 "data_size": 63488 00:36:13.880 }, 00:36:13.880 { 00:36:13.880 "name": "BaseBdev2", 00:36:13.880 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:13.880 "is_configured": true, 00:36:13.880 "data_offset": 2048, 00:36:13.880 "data_size": 63488 00:36:13.880 }, 00:36:13.880 { 00:36:13.880 "name": "BaseBdev3", 00:36:13.880 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:13.880 "is_configured": true, 00:36:13.880 "data_offset": 2048, 00:36:13.880 "data_size": 63488 00:36:13.880 }, 00:36:13.880 { 00:36:13.880 "name": "BaseBdev4", 00:36:13.880 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:13.880 "is_configured": true, 00:36:13.880 "data_offset": 2048, 00:36:13.880 "data_size": 63488 00:36:13.880 } 00:36:13.880 ] 00:36:13.880 }' 00:36:13.880 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:13.880 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.449 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:14.449 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:14.449 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:14.449 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:14.449 21:49:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:14.449 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.449 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.708 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:14.708 "name": "raid_bdev1", 00:36:14.708 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:14.708 "strip_size_kb": 64, 00:36:14.708 "state": "online", 00:36:14.708 "raid_level": "raid5f", 00:36:14.708 "superblock": true, 00:36:14.708 "num_base_bdevs": 4, 00:36:14.708 "num_base_bdevs_discovered": 3, 00:36:14.708 "num_base_bdevs_operational": 3, 00:36:14.708 "base_bdevs_list": [ 00:36:14.708 { 00:36:14.708 "name": null, 00:36:14.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.708 "is_configured": false, 00:36:14.708 "data_offset": 2048, 00:36:14.708 "data_size": 63488 00:36:14.708 }, 00:36:14.708 { 00:36:14.708 "name": "BaseBdev2", 00:36:14.708 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:14.708 "is_configured": true, 00:36:14.708 "data_offset": 2048, 00:36:14.708 "data_size": 63488 00:36:14.708 }, 00:36:14.708 { 00:36:14.708 "name": "BaseBdev3", 00:36:14.708 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:14.708 "is_configured": true, 00:36:14.708 "data_offset": 2048, 00:36:14.708 "data_size": 63488 00:36:14.708 }, 00:36:14.708 { 00:36:14.708 "name": "BaseBdev4", 00:36:14.708 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:14.708 "is_configured": true, 00:36:14.708 "data_offset": 2048, 00:36:14.708 "data_size": 63488 00:36:14.708 } 00:36:14.708 ] 00:36:14.708 }' 00:36:14.708 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:14.709 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:14.709 21:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:14.709 21:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:14.709 21:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:14.969 [2024-07-15 21:49:48.249442] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:14.969 [2024-07-15 21:49:48.263695] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cc70 00:36:14.969 [2024-07-15 21:49:48.272493] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:14.969 21:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:15.905 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:15.905 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:15.905 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:15.905 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:15.905 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:16.162 21:49:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.162 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.162 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:16.162 "name": "raid_bdev1", 00:36:16.162 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:16.162 "strip_size_kb": 64, 00:36:16.162 "state": "online", 00:36:16.162 "raid_level": "raid5f", 00:36:16.162 "superblock": true, 00:36:16.162 "num_base_bdevs": 4, 00:36:16.162 "num_base_bdevs_discovered": 4, 00:36:16.162 "num_base_bdevs_operational": 4, 00:36:16.162 "process": { 00:36:16.162 "type": "rebuild", 00:36:16.162 "target": "spare", 00:36:16.162 "progress": { 00:36:16.162 "blocks": 21120, 00:36:16.162 "percent": 11 00:36:16.162 } 00:36:16.162 }, 00:36:16.162 "base_bdevs_list": [ 00:36:16.162 { 00:36:16.162 "name": "spare", 00:36:16.162 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:16.162 "is_configured": true, 00:36:16.162 "data_offset": 2048, 00:36:16.162 "data_size": 63488 00:36:16.162 }, 00:36:16.162 { 00:36:16.162 "name": "BaseBdev2", 00:36:16.162 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:16.162 "is_configured": true, 00:36:16.162 "data_offset": 2048, 00:36:16.162 "data_size": 63488 00:36:16.162 }, 00:36:16.162 { 00:36:16.162 "name": "BaseBdev3", 00:36:16.162 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:16.162 "is_configured": true, 00:36:16.162 "data_offset": 2048, 00:36:16.162 "data_size": 63488 00:36:16.162 }, 00:36:16.162 { 00:36:16.162 "name": "BaseBdev4", 00:36:16.162 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:16.162 "is_configured": true, 00:36:16.162 "data_offset": 2048, 00:36:16.162 "data_size": 63488 00:36:16.162 } 00:36:16.162 ] 00:36:16.162 }' 00:36:16.162 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:16.162 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:36:16.421 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1242 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:16.421 21:49:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:16.421 "name": "raid_bdev1", 00:36:16.421 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:16.421 "strip_size_kb": 64, 00:36:16.421 "state": "online", 00:36:16.421 "raid_level": "raid5f", 00:36:16.421 "superblock": true, 00:36:16.421 "num_base_bdevs": 4, 00:36:16.421 "num_base_bdevs_discovered": 4, 00:36:16.421 "num_base_bdevs_operational": 4, 00:36:16.421 "process": { 00:36:16.421 "type": "rebuild", 00:36:16.421 "target": "spare", 00:36:16.421 "progress": { 00:36:16.421 "blocks": 28800, 00:36:16.421 "percent": 15 00:36:16.421 } 00:36:16.421 }, 00:36:16.421 "base_bdevs_list": [ 00:36:16.421 { 00:36:16.421 "name": "spare", 00:36:16.421 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:16.421 "is_configured": true, 00:36:16.421 "data_offset": 2048, 00:36:16.421 "data_size": 63488 00:36:16.421 }, 00:36:16.421 { 00:36:16.421 "name": "BaseBdev2", 00:36:16.421 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:16.421 "is_configured": true, 00:36:16.421 "data_offset": 2048, 00:36:16.421 "data_size": 63488 00:36:16.421 }, 00:36:16.421 { 00:36:16.421 "name": "BaseBdev3", 00:36:16.421 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:16.421 "is_configured": true, 00:36:16.421 "data_offset": 2048, 00:36:16.421 "data_size": 63488 00:36:16.421 }, 00:36:16.421 { 00:36:16.421 "name": "BaseBdev4", 00:36:16.421 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:16.421 "is_configured": true, 00:36:16.421 "data_offset": 2048, 00:36:16.421 "data_size": 63488 00:36:16.421 } 00:36:16.421 ] 00:36:16.421 }' 00:36:16.421 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:16.680 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:16.680 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:16.680 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:16.680 21:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:17.615 21:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:17.615 21:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:17.615 21:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:17.615 21:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:17.615 21:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:17.615 21:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:17.615 21:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
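The "[: =: unary operator expected" complaint from bdev_raid.sh line 665 above comes from the xtraced test '[' = false ']': the left-hand operand expanded to an empty string, so [ only sees "= false". A hedged sketch of the usual fix, with flag standing in for whatever variable the script expands there (hypothetical name, not taken from the source):

  [ "$flag" = false ]    # quoting keeps an empty expansion as an explicit "" operand
  [[ $flag == false ]]   # or use [[ ]], which does not word-split the unquoted operand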
00:36:17.615 21:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:17.873 21:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:17.873 "name": "raid_bdev1", 00:36:17.873 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:17.873 "strip_size_kb": 64, 00:36:17.873 "state": "online", 00:36:17.873 "raid_level": "raid5f", 00:36:17.873 "superblock": true, 00:36:17.873 "num_base_bdevs": 4, 00:36:17.873 "num_base_bdevs_discovered": 4, 00:36:17.873 "num_base_bdevs_operational": 4, 00:36:17.873 "process": { 00:36:17.874 "type": "rebuild", 00:36:17.874 "target": "spare", 00:36:17.874 "progress": { 00:36:17.874 "blocks": 53760, 00:36:17.874 "percent": 28 00:36:17.874 } 00:36:17.874 }, 00:36:17.874 "base_bdevs_list": [ 00:36:17.874 { 00:36:17.874 "name": "spare", 00:36:17.874 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:17.874 "is_configured": true, 00:36:17.874 "data_offset": 2048, 00:36:17.874 "data_size": 63488 00:36:17.874 }, 00:36:17.874 { 00:36:17.874 "name": "BaseBdev2", 00:36:17.874 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:17.874 "is_configured": true, 00:36:17.874 "data_offset": 2048, 00:36:17.874 "data_size": 63488 00:36:17.874 }, 00:36:17.874 { 00:36:17.874 "name": "BaseBdev3", 00:36:17.874 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:17.874 "is_configured": true, 00:36:17.874 "data_offset": 2048, 00:36:17.874 "data_size": 63488 00:36:17.874 }, 00:36:17.874 { 00:36:17.874 "name": "BaseBdev4", 00:36:17.874 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:17.874 "is_configured": true, 00:36:17.874 "data_offset": 2048, 00:36:17.874 "data_size": 63488 00:36:17.874 } 00:36:17.874 ] 00:36:17.874 }' 00:36:17.874 21:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:17.874 21:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:17.874 21:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:17.874 21:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:17.874 21:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:19.273 "name": "raid_bdev1", 00:36:19.273 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:19.273 "strip_size_kb": 64, 
00:36:19.273 "state": "online", 00:36:19.273 "raid_level": "raid5f", 00:36:19.273 "superblock": true, 00:36:19.273 "num_base_bdevs": 4, 00:36:19.273 "num_base_bdevs_discovered": 4, 00:36:19.273 "num_base_bdevs_operational": 4, 00:36:19.273 "process": { 00:36:19.273 "type": "rebuild", 00:36:19.273 "target": "spare", 00:36:19.273 "progress": { 00:36:19.273 "blocks": 78720, 00:36:19.273 "percent": 41 00:36:19.273 } 00:36:19.273 }, 00:36:19.273 "base_bdevs_list": [ 00:36:19.273 { 00:36:19.273 "name": "spare", 00:36:19.273 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:19.273 "is_configured": true, 00:36:19.273 "data_offset": 2048, 00:36:19.273 "data_size": 63488 00:36:19.273 }, 00:36:19.273 { 00:36:19.273 "name": "BaseBdev2", 00:36:19.273 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:19.273 "is_configured": true, 00:36:19.273 "data_offset": 2048, 00:36:19.273 "data_size": 63488 00:36:19.273 }, 00:36:19.273 { 00:36:19.273 "name": "BaseBdev3", 00:36:19.273 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:19.273 "is_configured": true, 00:36:19.273 "data_offset": 2048, 00:36:19.273 "data_size": 63488 00:36:19.273 }, 00:36:19.273 { 00:36:19.273 "name": "BaseBdev4", 00:36:19.273 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:19.273 "is_configured": true, 00:36:19.273 "data_offset": 2048, 00:36:19.273 "data_size": 63488 00:36:19.273 } 00:36:19.273 ] 00:36:19.273 }' 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:19.273 21:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:20.647 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:20.647 "name": "raid_bdev1", 00:36:20.647 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:20.647 "strip_size_kb": 64, 00:36:20.647 "state": "online", 00:36:20.647 "raid_level": "raid5f", 00:36:20.647 "superblock": true, 00:36:20.647 "num_base_bdevs": 4, 00:36:20.647 "num_base_bdevs_discovered": 4, 00:36:20.647 "num_base_bdevs_operational": 4, 00:36:20.647 "process": { 00:36:20.647 "type": "rebuild", 00:36:20.647 "target": "spare", 00:36:20.647 "progress": { 00:36:20.647 "blocks": 
103680, 00:36:20.647 "percent": 54 00:36:20.648 } 00:36:20.648 }, 00:36:20.648 "base_bdevs_list": [ 00:36:20.648 { 00:36:20.648 "name": "spare", 00:36:20.648 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:20.648 "is_configured": true, 00:36:20.648 "data_offset": 2048, 00:36:20.648 "data_size": 63488 00:36:20.648 }, 00:36:20.648 { 00:36:20.648 "name": "BaseBdev2", 00:36:20.648 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:20.648 "is_configured": true, 00:36:20.648 "data_offset": 2048, 00:36:20.648 "data_size": 63488 00:36:20.648 }, 00:36:20.648 { 00:36:20.648 "name": "BaseBdev3", 00:36:20.648 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:20.648 "is_configured": true, 00:36:20.648 "data_offset": 2048, 00:36:20.648 "data_size": 63488 00:36:20.648 }, 00:36:20.648 { 00:36:20.648 "name": "BaseBdev4", 00:36:20.648 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:20.648 "is_configured": true, 00:36:20.648 "data_offset": 2048, 00:36:20.648 "data_size": 63488 00:36:20.648 } 00:36:20.648 ] 00:36:20.648 }' 00:36:20.648 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:20.648 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:20.648 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:20.648 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:20.648 21:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:21.581 21:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:21.581 21:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:21.581 21:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:21.581 21:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:21.581 21:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:21.581 21:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:21.581 21:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.581 21:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.839 21:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:21.839 "name": "raid_bdev1", 00:36:21.839 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:21.839 "strip_size_kb": 64, 00:36:21.839 "state": "online", 00:36:21.839 "raid_level": "raid5f", 00:36:21.839 "superblock": true, 00:36:21.839 "num_base_bdevs": 4, 00:36:21.839 "num_base_bdevs_discovered": 4, 00:36:21.839 "num_base_bdevs_operational": 4, 00:36:21.839 "process": { 00:36:21.839 "type": "rebuild", 00:36:21.839 "target": "spare", 00:36:21.839 "progress": { 00:36:21.839 "blocks": 130560, 00:36:21.839 "percent": 68 00:36:21.839 } 00:36:21.839 }, 00:36:21.839 "base_bdevs_list": [ 00:36:21.839 { 00:36:21.839 "name": "spare", 00:36:21.839 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:21.839 "is_configured": true, 00:36:21.839 "data_offset": 2048, 00:36:21.839 "data_size": 63488 00:36:21.839 }, 00:36:21.839 { 00:36:21.839 "name": 
"BaseBdev2", 00:36:21.839 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:21.839 "is_configured": true, 00:36:21.839 "data_offset": 2048, 00:36:21.839 "data_size": 63488 00:36:21.839 }, 00:36:21.839 { 00:36:21.839 "name": "BaseBdev3", 00:36:21.839 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:21.839 "is_configured": true, 00:36:21.839 "data_offset": 2048, 00:36:21.839 "data_size": 63488 00:36:21.839 }, 00:36:21.839 { 00:36:21.839 "name": "BaseBdev4", 00:36:21.839 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:21.839 "is_configured": true, 00:36:21.839 "data_offset": 2048, 00:36:21.839 "data_size": 63488 00:36:21.839 } 00:36:21.839 ] 00:36:21.839 }' 00:36:21.839 21:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:22.098 21:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:22.098 21:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:22.098 21:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:22.098 21:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:23.033 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:23.033 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:23.033 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:23.033 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:23.033 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:23.033 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:23.033 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.033 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.291 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:23.291 "name": "raid_bdev1", 00:36:23.291 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:23.291 "strip_size_kb": 64, 00:36:23.291 "state": "online", 00:36:23.291 "raid_level": "raid5f", 00:36:23.291 "superblock": true, 00:36:23.291 "num_base_bdevs": 4, 00:36:23.291 "num_base_bdevs_discovered": 4, 00:36:23.291 "num_base_bdevs_operational": 4, 00:36:23.291 "process": { 00:36:23.291 "type": "rebuild", 00:36:23.291 "target": "spare", 00:36:23.291 "progress": { 00:36:23.291 "blocks": 155520, 00:36:23.291 "percent": 81 00:36:23.291 } 00:36:23.291 }, 00:36:23.291 "base_bdevs_list": [ 00:36:23.291 { 00:36:23.291 "name": "spare", 00:36:23.291 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:23.291 "is_configured": true, 00:36:23.291 "data_offset": 2048, 00:36:23.291 "data_size": 63488 00:36:23.291 }, 00:36:23.291 { 00:36:23.291 "name": "BaseBdev2", 00:36:23.291 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:23.291 "is_configured": true, 00:36:23.291 "data_offset": 2048, 00:36:23.291 "data_size": 63488 00:36:23.291 }, 00:36:23.291 { 00:36:23.291 "name": "BaseBdev3", 00:36:23.291 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:23.291 "is_configured": true, 00:36:23.291 "data_offset": 
2048, 00:36:23.291 "data_size": 63488 00:36:23.291 }, 00:36:23.291 { 00:36:23.291 "name": "BaseBdev4", 00:36:23.291 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:23.291 "is_configured": true, 00:36:23.291 "data_offset": 2048, 00:36:23.291 "data_size": 63488 00:36:23.291 } 00:36:23.291 ] 00:36:23.291 }' 00:36:23.291 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:23.291 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:23.291 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:23.291 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:23.291 21:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:24.667 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:24.667 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:24.667 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:24.667 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:24.667 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:24.667 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:24.667 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.668 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:24.668 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:24.668 "name": "raid_bdev1", 00:36:24.668 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:24.668 "strip_size_kb": 64, 00:36:24.668 "state": "online", 00:36:24.668 "raid_level": "raid5f", 00:36:24.668 "superblock": true, 00:36:24.668 "num_base_bdevs": 4, 00:36:24.668 "num_base_bdevs_discovered": 4, 00:36:24.668 "num_base_bdevs_operational": 4, 00:36:24.668 "process": { 00:36:24.668 "type": "rebuild", 00:36:24.668 "target": "spare", 00:36:24.668 "progress": { 00:36:24.668 "blocks": 180480, 00:36:24.668 "percent": 94 00:36:24.668 } 00:36:24.668 }, 00:36:24.668 "base_bdevs_list": [ 00:36:24.668 { 00:36:24.668 "name": "spare", 00:36:24.668 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:24.668 "is_configured": true, 00:36:24.668 "data_offset": 2048, 00:36:24.668 "data_size": 63488 00:36:24.668 }, 00:36:24.668 { 00:36:24.668 "name": "BaseBdev2", 00:36:24.668 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:24.668 "is_configured": true, 00:36:24.668 "data_offset": 2048, 00:36:24.668 "data_size": 63488 00:36:24.668 }, 00:36:24.668 { 00:36:24.668 "name": "BaseBdev3", 00:36:24.668 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:24.668 "is_configured": true, 00:36:24.668 "data_offset": 2048, 00:36:24.668 "data_size": 63488 00:36:24.668 }, 00:36:24.668 { 00:36:24.668 "name": "BaseBdev4", 00:36:24.668 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:24.668 "is_configured": true, 00:36:24.668 "data_offset": 2048, 00:36:24.668 "data_size": 63488 00:36:24.668 } 00:36:24.668 ] 00:36:24.668 }' 00:36:24.668 21:49:57 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:24.668 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:24.668 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:24.668 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:24.668 21:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:25.236 [2024-07-15 21:49:58.338321] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:25.236 [2024-07-15 21:49:58.338434] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:25.236 [2024-07-15 21:49:58.338619] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:25.806 21:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:25.806 21:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:25.806 21:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:25.806 21:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:25.806 21:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:25.806 21:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:25.806 21:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.806 21:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:25.806 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:25.806 "name": "raid_bdev1", 00:36:25.806 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:25.806 "strip_size_kb": 64, 00:36:25.806 "state": "online", 00:36:25.806 "raid_level": "raid5f", 00:36:25.806 "superblock": true, 00:36:25.806 "num_base_bdevs": 4, 00:36:25.806 "num_base_bdevs_discovered": 4, 00:36:25.806 "num_base_bdevs_operational": 4, 00:36:25.806 "base_bdevs_list": [ 00:36:25.806 { 00:36:25.806 "name": "spare", 00:36:25.806 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:25.806 "is_configured": true, 00:36:25.806 "data_offset": 2048, 00:36:25.806 "data_size": 63488 00:36:25.806 }, 00:36:25.806 { 00:36:25.806 "name": "BaseBdev2", 00:36:25.806 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:25.806 "is_configured": true, 00:36:25.806 "data_offset": 2048, 00:36:25.806 "data_size": 63488 00:36:25.806 }, 00:36:25.806 { 00:36:25.806 "name": "BaseBdev3", 00:36:25.806 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:25.806 "is_configured": true, 00:36:25.806 "data_offset": 2048, 00:36:25.806 "data_size": 63488 00:36:25.806 }, 00:36:25.806 { 00:36:25.806 "name": "BaseBdev4", 00:36:25.806 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:25.806 "is_configured": true, 00:36:25.806 "data_offset": 2048, 00:36:25.806 "data_size": 63488 00:36:25.806 } 00:36:25.806 ] 00:36:25.806 }' 00:36:25.806 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:26.066 
21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.066 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:26.327 "name": "raid_bdev1", 00:36:26.327 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:26.327 "strip_size_kb": 64, 00:36:26.327 "state": "online", 00:36:26.327 "raid_level": "raid5f", 00:36:26.327 "superblock": true, 00:36:26.327 "num_base_bdevs": 4, 00:36:26.327 "num_base_bdevs_discovered": 4, 00:36:26.327 "num_base_bdevs_operational": 4, 00:36:26.327 "base_bdevs_list": [ 00:36:26.327 { 00:36:26.327 "name": "spare", 00:36:26.327 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:26.327 "is_configured": true, 00:36:26.327 "data_offset": 2048, 00:36:26.327 "data_size": 63488 00:36:26.327 }, 00:36:26.327 { 00:36:26.327 "name": "BaseBdev2", 00:36:26.327 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:26.327 "is_configured": true, 00:36:26.327 "data_offset": 2048, 00:36:26.327 "data_size": 63488 00:36:26.327 }, 00:36:26.327 { 00:36:26.327 "name": "BaseBdev3", 00:36:26.327 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:26.327 "is_configured": true, 00:36:26.327 "data_offset": 2048, 00:36:26.327 "data_size": 63488 00:36:26.327 }, 00:36:26.327 { 00:36:26.327 "name": "BaseBdev4", 00:36:26.327 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:26.327 "is_configured": true, 00:36:26.327 "data_offset": 2048, 00:36:26.327 "data_size": 63488 00:36:26.327 } 00:36:26.327 ] 00:36:26.327 }' 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 
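The wait that just ended is the bounded loop visible at bdev_raid.sh@705-710; a compressed reconstruction from the xtrace (not the script source), folding the verify helper into its jq probe:

  timeout=1242                                               # local timeout=1242 at @705
  while (( SECONDS < timeout )); do                          # @706
    type=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
             | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    [[ $type == rebuild ]] || break                          # the @708 break fires once the rebuild process is gone
    sleep 1                                                  # @710
  done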
00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.327 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.587 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:26.587 "name": "raid_bdev1", 00:36:26.587 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:26.587 "strip_size_kb": 64, 00:36:26.587 "state": "online", 00:36:26.587 "raid_level": "raid5f", 00:36:26.587 "superblock": true, 00:36:26.587 "num_base_bdevs": 4, 00:36:26.587 "num_base_bdevs_discovered": 4, 00:36:26.587 "num_base_bdevs_operational": 4, 00:36:26.587 "base_bdevs_list": [ 00:36:26.587 { 00:36:26.587 "name": "spare", 00:36:26.587 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:26.587 "is_configured": true, 00:36:26.587 "data_offset": 2048, 00:36:26.587 "data_size": 63488 00:36:26.587 }, 00:36:26.587 { 00:36:26.587 "name": "BaseBdev2", 00:36:26.587 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:26.587 "is_configured": true, 00:36:26.587 "data_offset": 2048, 00:36:26.587 "data_size": 63488 00:36:26.587 }, 00:36:26.587 { 00:36:26.587 "name": "BaseBdev3", 00:36:26.587 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:26.587 "is_configured": true, 00:36:26.587 "data_offset": 2048, 00:36:26.587 "data_size": 63488 00:36:26.587 }, 00:36:26.587 { 00:36:26.587 "name": "BaseBdev4", 00:36:26.587 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:26.587 "is_configured": true, 00:36:26.587 "data_offset": 2048, 00:36:26.587 "data_size": 63488 00:36:26.587 } 00:36:26.587 ] 00:36:26.587 }' 00:36:26.587 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:26.587 21:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.156 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:27.416 [2024-07-15 21:50:00.696132] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:27.416 [2024-07-15 21:50:00.696230] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:27.416 [2024-07-15 21:50:00.696326] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:27.416 [2024-07-15 21:50:00.696454] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:27.416 [2024-07-15 21:50:00.696478] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:36:27.416 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.416 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:27.675 21:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:27.934 /dev/nbd0 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:27.934 1+0 records in 00:36:27.934 1+0 records out 00:36:27.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397525 s, 10.3 MB/s 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:27.934 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:28.193 /dev/nbd1 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:28.193 1+0 records in 00:36:28.193 1+0 records out 00:36:28.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376785 s, 10.9 MB/s 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:28.193 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
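The -i 1048576 handed to cmp at bdev_raid.sh@737 lines up with the data_offset reported for every base bdev; a quick check of the arithmetic (sketch, not from the script):

  data_offset_blocks=2048; blocklen=512       # from the base_bdevs_list entries above
  echo $(( data_offset_blocks * blocklen ))   # 1048576 — cmp skips this much of each nbd device, i.e. the region
                                              # ahead of the data (the raid superblock, given "superblock": true)
  cmp -i 1048576 /dev/nbd0 /dev/nbd1          # compares only the data regions of BaseBdev1 and the rebuilt spare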
00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:28.461 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:36:28.731 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:36:28.731 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:28.731 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:28.731 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:28.731 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:28.731 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:28.731 21:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:36:28.990 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:29.250 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:29.509 [2024-07-15 21:50:02.655097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:29.509 [2024-07-15 
21:50:02.655221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:29.509 [2024-07-15 21:50:02.655280] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:36:29.509 [2024-07-15 21:50:02.655384] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:29.509 [2024-07-15 21:50:02.657605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:29.509 [2024-07-15 21:50:02.657704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:29.509 [2024-07-15 21:50:02.657852] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:29.509 [2024-07-15 21:50:02.657939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:29.509 [2024-07-15 21:50:02.658114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:29.509 [2024-07-15 21:50:02.658238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:29.509 [2024-07-15 21:50:02.658346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:29.509 spare 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:29.509 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.510 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:29.510 [2024-07-15 21:50:02.758280] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:36:29.510 [2024-07-15 21:50:02.758332] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:29.510 [2024-07-15 21:50:02.758482] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d6e0 00:36:29.510 [2024-07-15 21:50:02.765788] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:36:29.510 [2024-07-15 21:50:02.765846] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:36:29.510 [2024-07-15 21:50:02.766033] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:29.510 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:36:29.510 "name": "raid_bdev1", 00:36:29.510 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:29.510 "strip_size_kb": 64, 00:36:29.510 "state": "online", 00:36:29.510 "raid_level": "raid5f", 00:36:29.510 "superblock": true, 00:36:29.510 "num_base_bdevs": 4, 00:36:29.510 "num_base_bdevs_discovered": 4, 00:36:29.510 "num_base_bdevs_operational": 4, 00:36:29.510 "base_bdevs_list": [ 00:36:29.510 { 00:36:29.510 "name": "spare", 00:36:29.510 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:29.510 "is_configured": true, 00:36:29.510 "data_offset": 2048, 00:36:29.510 "data_size": 63488 00:36:29.510 }, 00:36:29.510 { 00:36:29.510 "name": "BaseBdev2", 00:36:29.510 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:29.510 "is_configured": true, 00:36:29.510 "data_offset": 2048, 00:36:29.510 "data_size": 63488 00:36:29.510 }, 00:36:29.510 { 00:36:29.510 "name": "BaseBdev3", 00:36:29.510 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:29.510 "is_configured": true, 00:36:29.510 "data_offset": 2048, 00:36:29.510 "data_size": 63488 00:36:29.510 }, 00:36:29.510 { 00:36:29.510 "name": "BaseBdev4", 00:36:29.510 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:29.510 "is_configured": true, 00:36:29.510 "data_offset": 2048, 00:36:29.510 "data_size": 63488 00:36:29.510 } 00:36:29.510 ] 00:36:29.510 }' 00:36:29.510 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:29.510 21:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.451 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:30.451 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:30.451 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:30.451 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:30.452 "name": "raid_bdev1", 00:36:30.452 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:30.452 "strip_size_kb": 64, 00:36:30.452 "state": "online", 00:36:30.452 "raid_level": "raid5f", 00:36:30.452 "superblock": true, 00:36:30.452 "num_base_bdevs": 4, 00:36:30.452 "num_base_bdevs_discovered": 4, 00:36:30.452 "num_base_bdevs_operational": 4, 00:36:30.452 "base_bdevs_list": [ 00:36:30.452 { 00:36:30.452 "name": "spare", 00:36:30.452 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:30.452 "is_configured": true, 00:36:30.452 "data_offset": 2048, 00:36:30.452 "data_size": 63488 00:36:30.452 }, 00:36:30.452 { 00:36:30.452 "name": "BaseBdev2", 00:36:30.452 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:30.452 "is_configured": true, 00:36:30.452 "data_offset": 2048, 00:36:30.452 "data_size": 63488 00:36:30.452 }, 00:36:30.452 { 00:36:30.452 "name": "BaseBdev3", 00:36:30.452 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:30.452 "is_configured": true, 00:36:30.452 "data_offset": 2048, 00:36:30.452 "data_size": 
63488 00:36:30.452 }, 00:36:30.452 { 00:36:30.452 "name": "BaseBdev4", 00:36:30.452 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:30.452 "is_configured": true, 00:36:30.452 "data_offset": 2048, 00:36:30.452 "data_size": 63488 00:36:30.452 } 00:36:30.452 ] 00:36:30.452 }' 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.452 21:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:30.711 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:36:30.711 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:30.971 [2024-07-15 21:50:04.215714] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.971 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.231 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:31.231 "name": "raid_bdev1", 00:36:31.231 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:31.231 "strip_size_kb": 64, 00:36:31.231 "state": "online", 00:36:31.231 "raid_level": "raid5f", 00:36:31.231 "superblock": true, 00:36:31.231 "num_base_bdevs": 4, 00:36:31.231 "num_base_bdevs_discovered": 3, 00:36:31.231 "num_base_bdevs_operational": 3, 00:36:31.231 "base_bdevs_list": [ 00:36:31.231 { 00:36:31.231 "name": null, 00:36:31.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:31.231 "is_configured": false, 00:36:31.231 "data_offset": 2048, 00:36:31.231 
"data_size": 63488 00:36:31.231 }, 00:36:31.231 { 00:36:31.231 "name": "BaseBdev2", 00:36:31.231 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:31.231 "is_configured": true, 00:36:31.231 "data_offset": 2048, 00:36:31.231 "data_size": 63488 00:36:31.231 }, 00:36:31.231 { 00:36:31.231 "name": "BaseBdev3", 00:36:31.231 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:31.231 "is_configured": true, 00:36:31.231 "data_offset": 2048, 00:36:31.231 "data_size": 63488 00:36:31.231 }, 00:36:31.231 { 00:36:31.231 "name": "BaseBdev4", 00:36:31.231 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:31.231 "is_configured": true, 00:36:31.231 "data_offset": 2048, 00:36:31.231 "data_size": 63488 00:36:31.231 } 00:36:31.231 ] 00:36:31.231 }' 00:36:31.231 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:31.231 21:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.800 21:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:32.060 [2024-07-15 21:50:05.265927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:32.060 [2024-07-15 21:50:05.266178] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:32.060 [2024-07-15 21:50:05.266221] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:32.060 [2024-07-15 21:50:05.266298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:32.060 [2024-07-15 21:50:05.281777] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d880 00:36:32.060 [2024-07-15 21:50:05.291329] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:32.060 21:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:36:33.000 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:33.000 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:33.000 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:33.000 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:33.000 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:33.000 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.000 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:33.258 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:33.258 "name": "raid_bdev1", 00:36:33.258 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:33.258 "strip_size_kb": 64, 00:36:33.258 "state": "online", 00:36:33.258 "raid_level": "raid5f", 00:36:33.258 "superblock": true, 00:36:33.258 "num_base_bdevs": 4, 00:36:33.258 "num_base_bdevs_discovered": 4, 00:36:33.258 "num_base_bdevs_operational": 4, 00:36:33.258 "process": { 00:36:33.258 "type": "rebuild", 00:36:33.258 "target": "spare", 00:36:33.258 "progress": { 00:36:33.258 "blocks": 21120, 
00:36:33.258 "percent": 11 00:36:33.258 } 00:36:33.258 }, 00:36:33.258 "base_bdevs_list": [ 00:36:33.258 { 00:36:33.258 "name": "spare", 00:36:33.258 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:33.258 "is_configured": true, 00:36:33.258 "data_offset": 2048, 00:36:33.258 "data_size": 63488 00:36:33.258 }, 00:36:33.258 { 00:36:33.258 "name": "BaseBdev2", 00:36:33.258 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:33.258 "is_configured": true, 00:36:33.258 "data_offset": 2048, 00:36:33.258 "data_size": 63488 00:36:33.258 }, 00:36:33.258 { 00:36:33.258 "name": "BaseBdev3", 00:36:33.258 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:33.258 "is_configured": true, 00:36:33.258 "data_offset": 2048, 00:36:33.258 "data_size": 63488 00:36:33.258 }, 00:36:33.258 { 00:36:33.258 "name": "BaseBdev4", 00:36:33.258 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:33.258 "is_configured": true, 00:36:33.258 "data_offset": 2048, 00:36:33.258 "data_size": 63488 00:36:33.258 } 00:36:33.258 ] 00:36:33.258 }' 00:36:33.258 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:33.258 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:33.258 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:33.258 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:33.258 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:33.516 [2024-07-15 21:50:06.866288] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:33.775 [2024-07-15 21:50:06.901944] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:33.775 [2024-07-15 21:50:06.902027] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:33.775 [2024-07-15 21:50:06.902047] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:33.775 [2024-07-15 21:50:06.902067] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:36:33.775 21:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.033 21:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:34.033 "name": "raid_bdev1", 00:36:34.033 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:34.033 "strip_size_kb": 64, 00:36:34.033 "state": "online", 00:36:34.033 "raid_level": "raid5f", 00:36:34.033 "superblock": true, 00:36:34.033 "num_base_bdevs": 4, 00:36:34.033 "num_base_bdevs_discovered": 3, 00:36:34.033 "num_base_bdevs_operational": 3, 00:36:34.033 "base_bdevs_list": [ 00:36:34.033 { 00:36:34.033 "name": null, 00:36:34.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.033 "is_configured": false, 00:36:34.033 "data_offset": 2048, 00:36:34.033 "data_size": 63488 00:36:34.033 }, 00:36:34.033 { 00:36:34.033 "name": "BaseBdev2", 00:36:34.033 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:34.033 "is_configured": true, 00:36:34.033 "data_offset": 2048, 00:36:34.033 "data_size": 63488 00:36:34.033 }, 00:36:34.033 { 00:36:34.033 "name": "BaseBdev3", 00:36:34.033 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:34.033 "is_configured": true, 00:36:34.033 "data_offset": 2048, 00:36:34.033 "data_size": 63488 00:36:34.033 }, 00:36:34.033 { 00:36:34.033 "name": "BaseBdev4", 00:36:34.033 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:34.033 "is_configured": true, 00:36:34.033 "data_offset": 2048, 00:36:34.033 "data_size": 63488 00:36:34.033 } 00:36:34.033 ] 00:36:34.033 }' 00:36:34.033 21:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:34.033 21:50:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.601 21:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:34.860 [2024-07-15 21:50:08.085582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:34.860 [2024-07-15 21:50:08.085661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:34.860 [2024-07-15 21:50:08.085705] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:36:34.860 [2024-07-15 21:50:08.085722] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:34.860 [2024-07-15 21:50:08.086231] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:34.860 [2024-07-15 21:50:08.086267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:34.860 [2024-07-15 21:50:08.086388] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:34.860 [2024-07-15 21:50:08.086406] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:34.860 [2024-07-15 21:50:08.086412] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:34.860 [2024-07-15 21:50:08.086440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:34.860 [2024-07-15 21:50:08.102110] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004dbc0 00:36:34.860 spare 00:36:34.860 [2024-07-15 21:50:08.111027] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:34.860 21:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:36:35.891 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:35.891 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:35.891 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:35.891 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:35.891 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:35.891 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:35.891 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:36.149 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:36.149 "name": "raid_bdev1", 00:36:36.149 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:36.149 "strip_size_kb": 64, 00:36:36.149 "state": "online", 00:36:36.149 "raid_level": "raid5f", 00:36:36.149 "superblock": true, 00:36:36.149 "num_base_bdevs": 4, 00:36:36.149 "num_base_bdevs_discovered": 4, 00:36:36.149 "num_base_bdevs_operational": 4, 00:36:36.149 "process": { 00:36:36.149 "type": "rebuild", 00:36:36.149 "target": "spare", 00:36:36.149 "progress": { 00:36:36.149 "blocks": 23040, 00:36:36.149 "percent": 12 00:36:36.149 } 00:36:36.149 }, 00:36:36.149 "base_bdevs_list": [ 00:36:36.149 { 00:36:36.149 "name": "spare", 00:36:36.149 "uuid": "a8a418c4-1e77-5196-801a-61550db2c50a", 00:36:36.149 "is_configured": true, 00:36:36.149 "data_offset": 2048, 00:36:36.149 "data_size": 63488 00:36:36.149 }, 00:36:36.149 { 00:36:36.149 "name": "BaseBdev2", 00:36:36.149 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:36.149 "is_configured": true, 00:36:36.149 "data_offset": 2048, 00:36:36.149 "data_size": 63488 00:36:36.149 }, 00:36:36.149 { 00:36:36.149 "name": "BaseBdev3", 00:36:36.149 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:36.149 "is_configured": true, 00:36:36.149 "data_offset": 2048, 00:36:36.149 "data_size": 63488 00:36:36.149 }, 00:36:36.149 { 00:36:36.149 "name": "BaseBdev4", 00:36:36.149 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:36.149 "is_configured": true, 00:36:36.149 "data_offset": 2048, 00:36:36.149 "data_size": 63488 00:36:36.149 } 00:36:36.149 ] 00:36:36.149 }' 00:36:36.149 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:36.149 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:36.149 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:36.149 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:36.149 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:36.408 [2024-07-15 21:50:09.618096] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:36.408 [2024-07-15 21:50:09.620480] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:36.408 [2024-07-15 21:50:09.620546] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:36.408 [2024-07-15 21:50:09.620559] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:36.408 [2024-07-15 21:50:09.620565] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.408 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:36.666 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:36.666 "name": "raid_bdev1", 00:36:36.666 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:36.666 "strip_size_kb": 64, 00:36:36.666 "state": "online", 00:36:36.666 "raid_level": "raid5f", 00:36:36.666 "superblock": true, 00:36:36.666 "num_base_bdevs": 4, 00:36:36.666 "num_base_bdevs_discovered": 3, 00:36:36.666 "num_base_bdevs_operational": 3, 00:36:36.666 "base_bdevs_list": [ 00:36:36.666 { 00:36:36.666 "name": null, 00:36:36.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:36.666 "is_configured": false, 00:36:36.666 "data_offset": 2048, 00:36:36.666 "data_size": 63488 00:36:36.666 }, 00:36:36.666 { 00:36:36.666 "name": "BaseBdev2", 00:36:36.666 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:36.666 "is_configured": true, 00:36:36.666 "data_offset": 2048, 00:36:36.666 "data_size": 63488 00:36:36.666 }, 00:36:36.666 { 00:36:36.666 "name": "BaseBdev3", 00:36:36.666 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:36.666 "is_configured": true, 00:36:36.666 "data_offset": 2048, 00:36:36.666 "data_size": 63488 00:36:36.666 }, 00:36:36.666 { 00:36:36.666 "name": "BaseBdev4", 00:36:36.666 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:36.666 "is_configured": true, 00:36:36.666 "data_offset": 2048, 00:36:36.666 
"data_size": 63488 00:36:36.666 } 00:36:36.666 ] 00:36:36.666 }' 00:36:36.666 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:36.666 21:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.233 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:37.233 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:37.233 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:37.233 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:37.233 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:37.233 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.233 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.490 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:37.490 "name": "raid_bdev1", 00:36:37.490 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:37.490 "strip_size_kb": 64, 00:36:37.490 "state": "online", 00:36:37.490 "raid_level": "raid5f", 00:36:37.490 "superblock": true, 00:36:37.490 "num_base_bdevs": 4, 00:36:37.490 "num_base_bdevs_discovered": 3, 00:36:37.490 "num_base_bdevs_operational": 3, 00:36:37.490 "base_bdevs_list": [ 00:36:37.490 { 00:36:37.490 "name": null, 00:36:37.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.490 "is_configured": false, 00:36:37.490 "data_offset": 2048, 00:36:37.490 "data_size": 63488 00:36:37.490 }, 00:36:37.490 { 00:36:37.490 "name": "BaseBdev2", 00:36:37.490 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:37.490 "is_configured": true, 00:36:37.490 "data_offset": 2048, 00:36:37.490 "data_size": 63488 00:36:37.490 }, 00:36:37.490 { 00:36:37.490 "name": "BaseBdev3", 00:36:37.490 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:37.490 "is_configured": true, 00:36:37.490 "data_offset": 2048, 00:36:37.490 "data_size": 63488 00:36:37.490 }, 00:36:37.490 { 00:36:37.490 "name": "BaseBdev4", 00:36:37.490 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:37.490 "is_configured": true, 00:36:37.490 "data_offset": 2048, 00:36:37.490 "data_size": 63488 00:36:37.490 } 00:36:37.490 ] 00:36:37.490 }' 00:36:37.490 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:37.490 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:37.490 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:37.490 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:37.490 21:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:37.748 21:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:38.037 [2024-07-15 21:50:11.217524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:36:38.037 [2024-07-15 21:50:11.217610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.037 [2024-07-15 21:50:11.217649] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:36:38.037 [2024-07-15 21:50:11.217670] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.037 [2024-07-15 21:50:11.218120] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.037 [2024-07-15 21:50:11.218157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:38.037 [2024-07-15 21:50:11.218277] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:38.037 [2024-07-15 21:50:11.218316] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:38.037 [2024-07-15 21:50:11.218323] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:38.037 BaseBdev1 00:36:38.037 21:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:38.971 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.230 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:39.230 "name": "raid_bdev1", 00:36:39.230 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:39.230 "strip_size_kb": 64, 00:36:39.230 "state": "online", 00:36:39.230 "raid_level": "raid5f", 00:36:39.230 "superblock": true, 00:36:39.230 "num_base_bdevs": 4, 00:36:39.230 "num_base_bdevs_discovered": 3, 00:36:39.230 "num_base_bdevs_operational": 3, 00:36:39.230 "base_bdevs_list": [ 00:36:39.230 { 00:36:39.230 "name": null, 00:36:39.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.230 "is_configured": false, 00:36:39.230 "data_offset": 2048, 00:36:39.230 "data_size": 63488 00:36:39.230 }, 00:36:39.230 { 00:36:39.230 "name": "BaseBdev2", 00:36:39.230 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:39.230 "is_configured": true, 00:36:39.230 "data_offset": 2048, 00:36:39.230 "data_size": 63488 
00:36:39.230 }, 00:36:39.230 { 00:36:39.230 "name": "BaseBdev3", 00:36:39.230 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:39.230 "is_configured": true, 00:36:39.230 "data_offset": 2048, 00:36:39.230 "data_size": 63488 00:36:39.230 }, 00:36:39.230 { 00:36:39.230 "name": "BaseBdev4", 00:36:39.230 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:39.230 "is_configured": true, 00:36:39.230 "data_offset": 2048, 00:36:39.230 "data_size": 63488 00:36:39.230 } 00:36:39.230 ] 00:36:39.230 }' 00:36:39.230 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:39.230 21:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:39.797 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:39.797 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:39.797 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:39.797 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:39.797 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:39.797 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.797 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.055 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:40.055 "name": "raid_bdev1", 00:36:40.055 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:40.055 "strip_size_kb": 64, 00:36:40.055 "state": "online", 00:36:40.055 "raid_level": "raid5f", 00:36:40.055 "superblock": true, 00:36:40.055 "num_base_bdevs": 4, 00:36:40.055 "num_base_bdevs_discovered": 3, 00:36:40.055 "num_base_bdevs_operational": 3, 00:36:40.055 "base_bdevs_list": [ 00:36:40.055 { 00:36:40.055 "name": null, 00:36:40.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:40.055 "is_configured": false, 00:36:40.055 "data_offset": 2048, 00:36:40.055 "data_size": 63488 00:36:40.055 }, 00:36:40.055 { 00:36:40.055 "name": "BaseBdev2", 00:36:40.055 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:40.055 "is_configured": true, 00:36:40.055 "data_offset": 2048, 00:36:40.055 "data_size": 63488 00:36:40.055 }, 00:36:40.055 { 00:36:40.055 "name": "BaseBdev3", 00:36:40.055 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:40.055 "is_configured": true, 00:36:40.055 "data_offset": 2048, 00:36:40.055 "data_size": 63488 00:36:40.055 }, 00:36:40.055 { 00:36:40.055 "name": "BaseBdev4", 00:36:40.055 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:40.055 "is_configured": true, 00:36:40.055 "data_offset": 2048, 00:36:40.055 "data_size": 63488 00:36:40.055 } 00:36:40.055 ] 00:36:40.055 }' 00:36:40.055 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:40.055 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:40.055 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:40.313 [2024-07-15 21:50:13.618935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:40.313 [2024-07-15 21:50:13.619093] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:40.313 [2024-07-15 21:50:13.619108] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:40.313 request: 00:36:40.313 { 00:36:40.313 "base_bdev": "BaseBdev1", 00:36:40.313 "raid_bdev": "raid_bdev1", 00:36:40.313 "method": "bdev_raid_add_base_bdev", 00:36:40.313 "req_id": 1 00:36:40.313 } 00:36:40.313 Got JSON-RPC error response 00:36:40.313 response: 00:36:40.313 { 00:36:40.313 "code": -22, 00:36:40.313 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:40.313 } 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:40.313 21:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:41.273 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:41.273 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:41.273 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:41.273 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:41.273 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:41.273 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:41.273 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:41.273 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:41.274 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:41.274 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:41.274 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:41.274 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:41.532 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:41.532 "name": "raid_bdev1", 00:36:41.532 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:41.532 "strip_size_kb": 64, 00:36:41.532 "state": "online", 00:36:41.532 "raid_level": "raid5f", 00:36:41.532 "superblock": true, 00:36:41.532 "num_base_bdevs": 4, 00:36:41.532 "num_base_bdevs_discovered": 3, 00:36:41.532 "num_base_bdevs_operational": 3, 00:36:41.532 "base_bdevs_list": [ 00:36:41.532 { 00:36:41.532 "name": null, 00:36:41.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:41.532 "is_configured": false, 00:36:41.532 "data_offset": 2048, 00:36:41.532 "data_size": 63488 00:36:41.532 }, 00:36:41.532 { 00:36:41.532 "name": "BaseBdev2", 00:36:41.532 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:41.532 "is_configured": true, 00:36:41.532 "data_offset": 2048, 00:36:41.532 "data_size": 63488 00:36:41.532 }, 00:36:41.532 { 00:36:41.532 "name": "BaseBdev3", 00:36:41.532 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:41.532 "is_configured": true, 00:36:41.532 "data_offset": 2048, 00:36:41.532 "data_size": 63488 00:36:41.532 }, 00:36:41.532 { 00:36:41.532 "name": "BaseBdev4", 00:36:41.532 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:41.532 "is_configured": true, 00:36:41.532 "data_offset": 2048, 00:36:41.532 "data_size": 63488 00:36:41.532 } 00:36:41.532 ] 00:36:41.532 }' 00:36:41.532 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:41.532 21:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:42.133 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:42.133 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:42.133 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:42.133 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:42.133 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:42.133 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:42.133 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:42.390 21:50:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:42.390 "name": "raid_bdev1", 00:36:42.390 "uuid": "17c7c8d7-7fb2-4bf6-860c-12a5470b2c38", 00:36:42.390 "strip_size_kb": 64, 00:36:42.390 "state": "online", 00:36:42.390 "raid_level": "raid5f", 00:36:42.390 "superblock": true, 00:36:42.390 "num_base_bdevs": 4, 00:36:42.390 "num_base_bdevs_discovered": 3, 00:36:42.390 "num_base_bdevs_operational": 3, 00:36:42.390 "base_bdevs_list": [ 00:36:42.390 { 00:36:42.390 "name": null, 00:36:42.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:42.390 "is_configured": false, 00:36:42.390 "data_offset": 2048, 00:36:42.390 "data_size": 63488 00:36:42.390 }, 00:36:42.390 { 00:36:42.390 "name": "BaseBdev2", 00:36:42.390 "uuid": "3dd94f7c-57b5-5c64-8ccb-d22063de3997", 00:36:42.390 "is_configured": true, 00:36:42.390 "data_offset": 2048, 00:36:42.390 "data_size": 63488 00:36:42.390 }, 00:36:42.390 { 00:36:42.390 "name": "BaseBdev3", 00:36:42.390 "uuid": "f3d520e2-8c82-51cb-b68d-7e1af89d3fed", 00:36:42.390 "is_configured": true, 00:36:42.390 "data_offset": 2048, 00:36:42.390 "data_size": 63488 00:36:42.390 }, 00:36:42.390 { 00:36:42.390 "name": "BaseBdev4", 00:36:42.391 "uuid": "d1343a76-e811-54b7-a6ba-3ec717620564", 00:36:42.391 "is_configured": true, 00:36:42.391 "data_offset": 2048, 00:36:42.391 "data_size": 63488 00:36:42.391 } 00:36:42.391 ] 00:36:42.391 }' 00:36:42.391 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:42.391 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:42.391 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 159636 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 159636 ']' 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 159636 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 159636 00:36:42.648 killing process with pid 159636 00:36:42.648 Received shutdown signal, test time was about 60.000000 seconds 00:36:42.648 00:36:42.648 Latency(us) 00:36:42.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.648 =================================================================================================================== 00:36:42.648 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 159636' 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 159636 00:36:42.648 21:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 159636 
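Teardown of the raid5f rebuild test: killprocess confirms that pid 159636 still belongs to an SPDK reactor, terminates it, and waits so the shutdown path that follows (raid_bdev_fini_start, destruct, cleanup) is fully logged before the timing summary. A simplified sketch of that helper's flow, using the pid reported in this run:

  pid=159636
  kill -0 "$pid"                                          # fail fast if the app already exited
  [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]]   # make sure this is the SPDK app, not something else
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" || true                                     # wait returns non-zero for a signalled child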
00:36:42.648 [2024-07-15 21:50:15.827090] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:42.648 [2024-07-15 21:50:15.827205] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:42.648 [2024-07-15 21:50:15.827285] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:42.648 [2024-07-15 21:50:15.827293] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:36:43.221 [2024-07-15 21:50:16.304036] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:44.187 ************************************ 00:36:44.187 END TEST raid5f_rebuild_test_sb 00:36:44.187 ************************************ 00:36:44.187 21:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:36:44.187 00:36:44.187 real 0m39.651s 00:36:44.187 user 1m0.531s 00:36:44.187 sys 0m4.135s 00:36:44.187 21:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:44.187 21:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:44.443 21:50:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:44.443 21:50:17 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:36:44.443 21:50:17 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:36:44.443 21:50:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:36:44.443 21:50:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:44.443 21:50:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:44.443 ************************************ 00:36:44.443 START TEST raid_state_function_test_sb_4k 00:36:44.443 ************************************ 00:36:44.443 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:36:44.443 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:36:44.443 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:36:44.443 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:36:44.443 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:44.443 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:36:44.443 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:44.443 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:44.444 21:50:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=160701 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 160701' 00:36:44.444 Process raid pid: 160701 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 160701 /var/tmp/spdk-raid.sock 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 160701 ']' 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:44.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:44.444 21:50:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:44.444 [2024-07-15 21:50:17.677890] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
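The next test, raid_state_function_test_sb_4k, launches its own bdev_svc app on a private RPC socket with debug logging for the raid module, then blocks in waitforlisten until RPCs succeed. A sketch of that launch-and-wait pattern with the arguments from the trace (the probe RPC and loop bound are illustrative; the real helper lives in common/autotest_common.sh):

  svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$svc" -r "$sock" -i 0 -L bdev_raid &
  raid_pid=$!
  for _ in $(seq 1 100); do                               # poll until the RPC server answers
      "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done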
00:36:44.444 [2024-07-15 21:50:17.678640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.702 [2024-07-15 21:50:17.846545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.702 [2024-07-15 21:50:18.049474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.961 [2024-07-15 21:50:18.243557] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:45.220 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:45.220 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:36:45.220 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:45.480 [2024-07-15 21:50:18.759758] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:45.480 [2024-07-15 21:50:18.759842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:45.480 [2024-07-15 21:50:18.759852] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:45.480 [2024-07-15 21:50:18.759873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:45.480 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:45.738 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:45.738 "name": "Existed_Raid", 00:36:45.738 "uuid": "ba274460-0568-4934-a043-36e8736bc503", 00:36:45.738 "strip_size_kb": 0, 00:36:45.738 "state": "configuring", 00:36:45.738 "raid_level": "raid1", 00:36:45.738 "superblock": true, 00:36:45.738 "num_base_bdevs": 2, 00:36:45.738 
"num_base_bdevs_discovered": 0, 00:36:45.739 "num_base_bdevs_operational": 2, 00:36:45.739 "base_bdevs_list": [ 00:36:45.739 { 00:36:45.739 "name": "BaseBdev1", 00:36:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:45.739 "is_configured": false, 00:36:45.739 "data_offset": 0, 00:36:45.739 "data_size": 0 00:36:45.739 }, 00:36:45.739 { 00:36:45.739 "name": "BaseBdev2", 00:36:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:45.739 "is_configured": false, 00:36:45.739 "data_offset": 0, 00:36:45.739 "data_size": 0 00:36:45.739 } 00:36:45.739 ] 00:36:45.739 }' 00:36:45.739 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:45.739 21:50:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:46.305 21:50:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:46.565 [2024-07-15 21:50:19.713995] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:46.565 [2024-07-15 21:50:19.714035] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:36:46.565 21:50:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:46.825 [2024-07-15 21:50:19.953643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:46.825 [2024-07-15 21:50:19.953714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:46.825 [2024-07-15 21:50:19.953723] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:46.825 [2024-07-15 21:50:19.953746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:46.825 21:50:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:36:46.825 [2024-07-15 21:50:20.180334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:46.825 BaseBdev1 00:36:46.825 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:46.825 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:36:46.825 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:46.825 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:36:46.825 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:46.825 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:46.825 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:47.084 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:47.343 [ 00:36:47.343 { 00:36:47.343 "name": "BaseBdev1", 
00:36:47.343 "aliases": [ 00:36:47.343 "84050a2c-742c-46b9-868c-d0031d9ce2ba" 00:36:47.343 ], 00:36:47.343 "product_name": "Malloc disk", 00:36:47.343 "block_size": 4096, 00:36:47.343 "num_blocks": 8192, 00:36:47.343 "uuid": "84050a2c-742c-46b9-868c-d0031d9ce2ba", 00:36:47.343 "assigned_rate_limits": { 00:36:47.343 "rw_ios_per_sec": 0, 00:36:47.343 "rw_mbytes_per_sec": 0, 00:36:47.343 "r_mbytes_per_sec": 0, 00:36:47.343 "w_mbytes_per_sec": 0 00:36:47.343 }, 00:36:47.343 "claimed": true, 00:36:47.343 "claim_type": "exclusive_write", 00:36:47.343 "zoned": false, 00:36:47.343 "supported_io_types": { 00:36:47.343 "read": true, 00:36:47.343 "write": true, 00:36:47.343 "unmap": true, 00:36:47.343 "flush": true, 00:36:47.343 "reset": true, 00:36:47.343 "nvme_admin": false, 00:36:47.343 "nvme_io": false, 00:36:47.343 "nvme_io_md": false, 00:36:47.343 "write_zeroes": true, 00:36:47.343 "zcopy": true, 00:36:47.343 "get_zone_info": false, 00:36:47.343 "zone_management": false, 00:36:47.343 "zone_append": false, 00:36:47.343 "compare": false, 00:36:47.343 "compare_and_write": false, 00:36:47.343 "abort": true, 00:36:47.343 "seek_hole": false, 00:36:47.343 "seek_data": false, 00:36:47.343 "copy": true, 00:36:47.343 "nvme_iov_md": false 00:36:47.343 }, 00:36:47.343 "memory_domains": [ 00:36:47.343 { 00:36:47.343 "dma_device_id": "system", 00:36:47.343 "dma_device_type": 1 00:36:47.343 }, 00:36:47.343 { 00:36:47.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:47.343 "dma_device_type": 2 00:36:47.343 } 00:36:47.343 ], 00:36:47.343 "driver_specific": {} 00:36:47.343 } 00:36:47.343 ] 00:36:47.343 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:36:47.343 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:47.343 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:47.343 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:47.343 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:47.343 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:47.343 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:47.343 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:47.344 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:47.344 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:47.344 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:47.344 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.344 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:47.603 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:47.603 "name": "Existed_Raid", 00:36:47.603 "uuid": "4f7d787f-4838-4008-91ce-8ad7eb63d297", 00:36:47.603 "strip_size_kb": 0, 00:36:47.603 "state": "configuring", 00:36:47.603 
"raid_level": "raid1", 00:36:47.603 "superblock": true, 00:36:47.603 "num_base_bdevs": 2, 00:36:47.603 "num_base_bdevs_discovered": 1, 00:36:47.603 "num_base_bdevs_operational": 2, 00:36:47.603 "base_bdevs_list": [ 00:36:47.603 { 00:36:47.603 "name": "BaseBdev1", 00:36:47.603 "uuid": "84050a2c-742c-46b9-868c-d0031d9ce2ba", 00:36:47.603 "is_configured": true, 00:36:47.603 "data_offset": 256, 00:36:47.603 "data_size": 7936 00:36:47.603 }, 00:36:47.603 { 00:36:47.603 "name": "BaseBdev2", 00:36:47.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.603 "is_configured": false, 00:36:47.603 "data_offset": 0, 00:36:47.603 "data_size": 0 00:36:47.603 } 00:36:47.603 ] 00:36:47.603 }' 00:36:47.603 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:47.603 21:50:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:48.172 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:48.429 [2024-07-15 21:50:21.557961] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:48.429 [2024-07-15 21:50:21.558040] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:48.429 [2024-07-15 21:50:21.793568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:48.429 [2024-07-15 21:50:21.795253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:48.429 [2024-07-15 21:50:21.795312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:48.429 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:48.688 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:48.688 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:48.688 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:48.688 "name": "Existed_Raid", 00:36:48.688 "uuid": "b0a517f0-9d1e-49ce-bd18-889ef31f07da", 00:36:48.688 "strip_size_kb": 0, 00:36:48.688 "state": "configuring", 00:36:48.688 "raid_level": "raid1", 00:36:48.688 "superblock": true, 00:36:48.688 "num_base_bdevs": 2, 00:36:48.688 "num_base_bdevs_discovered": 1, 00:36:48.688 "num_base_bdevs_operational": 2, 00:36:48.688 "base_bdevs_list": [ 00:36:48.688 { 00:36:48.688 "name": "BaseBdev1", 00:36:48.688 "uuid": "84050a2c-742c-46b9-868c-d0031d9ce2ba", 00:36:48.688 "is_configured": true, 00:36:48.688 "data_offset": 256, 00:36:48.688 "data_size": 7936 00:36:48.688 }, 00:36:48.688 { 00:36:48.688 "name": "BaseBdev2", 00:36:48.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:48.688 "is_configured": false, 00:36:48.688 "data_offset": 0, 00:36:48.688 "data_size": 0 00:36:48.688 } 00:36:48.688 ] 00:36:48.688 }' 00:36:48.688 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:48.688 21:50:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:49.623 21:50:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:36:49.623 [2024-07-15 21:50:22.884080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:49.623 [2024-07-15 21:50:22.884299] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:36:49.623 [2024-07-15 21:50:22.884310] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:49.623 [2024-07-15 21:50:22.884449] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:36:49.623 BaseBdev2 00:36:49.623 [2024-07-15 21:50:22.884733] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:36:49.623 [2024-07-15 21:50:22.884751] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:36:49.623 [2024-07-15 21:50:22.884885] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:49.623 21:50:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:49.623 21:50:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:36:49.623 21:50:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:49.623 21:50:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:36:49.623 21:50:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:49.623 21:50:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:49.623 21:50:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:49.881 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:49.881 [ 00:36:49.881 { 00:36:49.881 "name": "BaseBdev2", 00:36:49.881 "aliases": [ 00:36:49.882 "bdfca23c-9d27-4f27-9758-2245f0ee7bd9" 00:36:49.882 ], 00:36:49.882 "product_name": "Malloc disk", 00:36:49.882 "block_size": 4096, 00:36:49.882 "num_blocks": 8192, 00:36:49.882 "uuid": "bdfca23c-9d27-4f27-9758-2245f0ee7bd9", 00:36:49.882 "assigned_rate_limits": { 00:36:49.882 "rw_ios_per_sec": 0, 00:36:49.882 "rw_mbytes_per_sec": 0, 00:36:49.882 "r_mbytes_per_sec": 0, 00:36:49.882 "w_mbytes_per_sec": 0 00:36:49.882 }, 00:36:49.882 "claimed": true, 00:36:49.882 "claim_type": "exclusive_write", 00:36:49.882 "zoned": false, 00:36:49.882 "supported_io_types": { 00:36:49.882 "read": true, 00:36:49.882 "write": true, 00:36:49.882 "unmap": true, 00:36:49.882 "flush": true, 00:36:49.882 "reset": true, 00:36:49.882 "nvme_admin": false, 00:36:49.882 "nvme_io": false, 00:36:49.882 "nvme_io_md": false, 00:36:49.882 "write_zeroes": true, 00:36:49.882 "zcopy": true, 00:36:49.882 "get_zone_info": false, 00:36:49.882 "zone_management": false, 00:36:49.882 "zone_append": false, 00:36:49.882 "compare": false, 00:36:49.882 "compare_and_write": false, 00:36:49.882 "abort": true, 00:36:49.882 "seek_hole": false, 00:36:49.882 "seek_data": false, 00:36:49.882 "copy": true, 00:36:49.882 "nvme_iov_md": false 00:36:49.882 }, 00:36:49.882 "memory_domains": [ 00:36:49.882 { 00:36:49.882 "dma_device_id": "system", 00:36:49.882 "dma_device_type": 1 00:36:49.882 }, 00:36:49.882 { 00:36:49.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:49.882 "dma_device_type": 2 00:36:49.882 } 00:36:49.882 ], 00:36:49.882 "driver_specific": {} 00:36:49.882 } 00:36:49.882 ] 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:49.882 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:49.882 21:50:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:50.138 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:50.138 "name": "Existed_Raid", 00:36:50.138 "uuid": "b0a517f0-9d1e-49ce-bd18-889ef31f07da", 00:36:50.138 "strip_size_kb": 0, 00:36:50.138 "state": "online", 00:36:50.138 "raid_level": "raid1", 00:36:50.138 "superblock": true, 00:36:50.138 "num_base_bdevs": 2, 00:36:50.138 "num_base_bdevs_discovered": 2, 00:36:50.138 "num_base_bdevs_operational": 2, 00:36:50.138 "base_bdevs_list": [ 00:36:50.138 { 00:36:50.138 "name": "BaseBdev1", 00:36:50.138 "uuid": "84050a2c-742c-46b9-868c-d0031d9ce2ba", 00:36:50.138 "is_configured": true, 00:36:50.138 "data_offset": 256, 00:36:50.138 "data_size": 7936 00:36:50.138 }, 00:36:50.138 { 00:36:50.138 "name": "BaseBdev2", 00:36:50.138 "uuid": "bdfca23c-9d27-4f27-9758-2245f0ee7bd9", 00:36:50.138 "is_configured": true, 00:36:50.138 "data_offset": 256, 00:36:50.138 "data_size": 7936 00:36:50.138 } 00:36:50.138 ] 00:36:50.138 }' 00:36:50.138 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:50.138 21:50:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:50.702 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:50.702 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:50.702 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:50.702 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:50.702 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:50.702 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:36:50.961 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:50.961 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:50.961 [2024-07-15 21:50:24.269954] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:50.961 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:50.961 "name": "Existed_Raid", 00:36:50.961 "aliases": [ 00:36:50.961 "b0a517f0-9d1e-49ce-bd18-889ef31f07da" 00:36:50.961 ], 00:36:50.961 "product_name": "Raid Volume", 00:36:50.961 "block_size": 4096, 00:36:50.961 "num_blocks": 7936, 00:36:50.961 "uuid": "b0a517f0-9d1e-49ce-bd18-889ef31f07da", 00:36:50.961 "assigned_rate_limits": { 00:36:50.961 "rw_ios_per_sec": 0, 00:36:50.961 "rw_mbytes_per_sec": 0, 00:36:50.961 "r_mbytes_per_sec": 0, 00:36:50.961 "w_mbytes_per_sec": 0 00:36:50.961 }, 00:36:50.961 "claimed": false, 00:36:50.961 "zoned": false, 00:36:50.961 "supported_io_types": { 00:36:50.961 "read": true, 00:36:50.961 "write": true, 00:36:50.961 "unmap": false, 00:36:50.961 "flush": false, 00:36:50.961 "reset": true, 00:36:50.961 "nvme_admin": false, 00:36:50.961 "nvme_io": false, 00:36:50.961 "nvme_io_md": false, 00:36:50.961 "write_zeroes": true, 00:36:50.961 "zcopy": false, 00:36:50.961 "get_zone_info": false, 00:36:50.961 "zone_management": false, 00:36:50.961 
"zone_append": false, 00:36:50.961 "compare": false, 00:36:50.961 "compare_and_write": false, 00:36:50.961 "abort": false, 00:36:50.961 "seek_hole": false, 00:36:50.961 "seek_data": false, 00:36:50.961 "copy": false, 00:36:50.961 "nvme_iov_md": false 00:36:50.961 }, 00:36:50.961 "memory_domains": [ 00:36:50.961 { 00:36:50.961 "dma_device_id": "system", 00:36:50.961 "dma_device_type": 1 00:36:50.961 }, 00:36:50.961 { 00:36:50.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:50.961 "dma_device_type": 2 00:36:50.961 }, 00:36:50.961 { 00:36:50.961 "dma_device_id": "system", 00:36:50.961 "dma_device_type": 1 00:36:50.961 }, 00:36:50.961 { 00:36:50.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:50.961 "dma_device_type": 2 00:36:50.961 } 00:36:50.961 ], 00:36:50.961 "driver_specific": { 00:36:50.961 "raid": { 00:36:50.961 "uuid": "b0a517f0-9d1e-49ce-bd18-889ef31f07da", 00:36:50.961 "strip_size_kb": 0, 00:36:50.961 "state": "online", 00:36:50.961 "raid_level": "raid1", 00:36:50.961 "superblock": true, 00:36:50.961 "num_base_bdevs": 2, 00:36:50.961 "num_base_bdevs_discovered": 2, 00:36:50.961 "num_base_bdevs_operational": 2, 00:36:50.961 "base_bdevs_list": [ 00:36:50.961 { 00:36:50.961 "name": "BaseBdev1", 00:36:50.961 "uuid": "84050a2c-742c-46b9-868c-d0031d9ce2ba", 00:36:50.961 "is_configured": true, 00:36:50.961 "data_offset": 256, 00:36:50.961 "data_size": 7936 00:36:50.961 }, 00:36:50.961 { 00:36:50.961 "name": "BaseBdev2", 00:36:50.961 "uuid": "bdfca23c-9d27-4f27-9758-2245f0ee7bd9", 00:36:50.961 "is_configured": true, 00:36:50.961 "data_offset": 256, 00:36:50.961 "data_size": 7936 00:36:50.961 } 00:36:50.961 ] 00:36:50.961 } 00:36:50.961 } 00:36:50.961 }' 00:36:50.961 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:50.961 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:50.961 BaseBdev2' 00:36:50.961 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:50.961 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:50.961 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:51.220 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:51.220 "name": "BaseBdev1", 00:36:51.220 "aliases": [ 00:36:51.220 "84050a2c-742c-46b9-868c-d0031d9ce2ba" 00:36:51.220 ], 00:36:51.220 "product_name": "Malloc disk", 00:36:51.220 "block_size": 4096, 00:36:51.220 "num_blocks": 8192, 00:36:51.220 "uuid": "84050a2c-742c-46b9-868c-d0031d9ce2ba", 00:36:51.220 "assigned_rate_limits": { 00:36:51.220 "rw_ios_per_sec": 0, 00:36:51.220 "rw_mbytes_per_sec": 0, 00:36:51.220 "r_mbytes_per_sec": 0, 00:36:51.220 "w_mbytes_per_sec": 0 00:36:51.220 }, 00:36:51.220 "claimed": true, 00:36:51.220 "claim_type": "exclusive_write", 00:36:51.220 "zoned": false, 00:36:51.220 "supported_io_types": { 00:36:51.220 "read": true, 00:36:51.220 "write": true, 00:36:51.220 "unmap": true, 00:36:51.220 "flush": true, 00:36:51.220 "reset": true, 00:36:51.220 "nvme_admin": false, 00:36:51.220 "nvme_io": false, 00:36:51.220 "nvme_io_md": false, 00:36:51.220 "write_zeroes": true, 00:36:51.220 "zcopy": true, 00:36:51.220 "get_zone_info": false, 00:36:51.220 "zone_management": false, 
00:36:51.220 "zone_append": false, 00:36:51.220 "compare": false, 00:36:51.220 "compare_and_write": false, 00:36:51.220 "abort": true, 00:36:51.220 "seek_hole": false, 00:36:51.220 "seek_data": false, 00:36:51.220 "copy": true, 00:36:51.220 "nvme_iov_md": false 00:36:51.220 }, 00:36:51.220 "memory_domains": [ 00:36:51.220 { 00:36:51.220 "dma_device_id": "system", 00:36:51.220 "dma_device_type": 1 00:36:51.220 }, 00:36:51.220 { 00:36:51.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:51.220 "dma_device_type": 2 00:36:51.220 } 00:36:51.220 ], 00:36:51.220 "driver_specific": {} 00:36:51.220 }' 00:36:51.220 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:51.220 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:51.220 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:51.220 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:51.478 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:51.478 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:51.478 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:51.478 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:51.478 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:51.478 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:51.736 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:51.736 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:51.736 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:51.736 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:51.736 21:50:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:51.994 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:51.994 "name": "BaseBdev2", 00:36:51.994 "aliases": [ 00:36:51.994 "bdfca23c-9d27-4f27-9758-2245f0ee7bd9" 00:36:51.994 ], 00:36:51.994 "product_name": "Malloc disk", 00:36:51.994 "block_size": 4096, 00:36:51.994 "num_blocks": 8192, 00:36:51.994 "uuid": "bdfca23c-9d27-4f27-9758-2245f0ee7bd9", 00:36:51.994 "assigned_rate_limits": { 00:36:51.994 "rw_ios_per_sec": 0, 00:36:51.994 "rw_mbytes_per_sec": 0, 00:36:51.994 "r_mbytes_per_sec": 0, 00:36:51.994 "w_mbytes_per_sec": 0 00:36:51.994 }, 00:36:51.994 "claimed": true, 00:36:51.994 "claim_type": "exclusive_write", 00:36:51.994 "zoned": false, 00:36:51.994 "supported_io_types": { 00:36:51.994 "read": true, 00:36:51.994 "write": true, 00:36:51.994 "unmap": true, 00:36:51.994 "flush": true, 00:36:51.994 "reset": true, 00:36:51.994 "nvme_admin": false, 00:36:51.994 "nvme_io": false, 00:36:51.994 "nvme_io_md": false, 00:36:51.994 "write_zeroes": true, 00:36:51.994 "zcopy": true, 00:36:51.994 "get_zone_info": false, 00:36:51.994 "zone_management": false, 00:36:51.994 "zone_append": false, 00:36:51.994 "compare": false, 00:36:51.994 "compare_and_write": 
false, 00:36:51.994 "abort": true, 00:36:51.994 "seek_hole": false, 00:36:51.994 "seek_data": false, 00:36:51.994 "copy": true, 00:36:51.994 "nvme_iov_md": false 00:36:51.994 }, 00:36:51.994 "memory_domains": [ 00:36:51.994 { 00:36:51.994 "dma_device_id": "system", 00:36:51.994 "dma_device_type": 1 00:36:51.994 }, 00:36:51.994 { 00:36:51.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:51.994 "dma_device_type": 2 00:36:51.994 } 00:36:51.994 ], 00:36:51.994 "driver_specific": {} 00:36:51.994 }' 00:36:51.994 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:51.994 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:51.994 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:51.994 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:51.994 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:51.994 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:51.994 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:52.252 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:52.252 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:52.252 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:52.252 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:52.252 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:52.252 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:52.509 [2024-07-15 21:50:25.744172] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:52.509 21:50:25 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:52.509 21:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:52.767 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:52.767 "name": "Existed_Raid", 00:36:52.767 "uuid": "b0a517f0-9d1e-49ce-bd18-889ef31f07da", 00:36:52.767 "strip_size_kb": 0, 00:36:52.767 "state": "online", 00:36:52.767 "raid_level": "raid1", 00:36:52.767 "superblock": true, 00:36:52.767 "num_base_bdevs": 2, 00:36:52.767 "num_base_bdevs_discovered": 1, 00:36:52.767 "num_base_bdevs_operational": 1, 00:36:52.767 "base_bdevs_list": [ 00:36:52.767 { 00:36:52.767 "name": null, 00:36:52.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:52.767 "is_configured": false, 00:36:52.767 "data_offset": 256, 00:36:52.767 "data_size": 7936 00:36:52.767 }, 00:36:52.767 { 00:36:52.767 "name": "BaseBdev2", 00:36:52.767 "uuid": "bdfca23c-9d27-4f27-9758-2245f0ee7bd9", 00:36:52.767 "is_configured": true, 00:36:52.767 "data_offset": 256, 00:36:52.767 "data_size": 7936 00:36:52.767 } 00:36:52.767 ] 00:36:52.767 }' 00:36:52.767 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:52.767 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:53.334 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:53.334 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:53.334 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.334 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:53.593 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:53.593 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:53.593 21:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:53.851 [2024-07-15 21:50:27.088848] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:53.851 [2024-07-15 21:50:27.088966] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:53.851 [2024-07-15 21:50:27.181791] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:53.851 [2024-07-15 21:50:27.181846] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:53.851 [2024-07-15 21:50:27.181853] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:36:53.851 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:53.851 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:53.851 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.851 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:54.108 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:54.108 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:54.108 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:36:54.108 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 160701 00:36:54.108 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 160701 ']' 00:36:54.108 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 160701 00:36:54.108 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:36:54.108 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:54.109 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160701 00:36:54.109 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:54.109 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:54.109 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160701' 00:36:54.109 killing process with pid 160701 00:36:54.109 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 160701 00:36:54.109 21:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 160701 00:36:54.109 [2024-07-15 21:50:27.424166] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:54.109 [2024-07-15 21:50:27.424273] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:55.487 ************************************ 00:36:55.487 END TEST raid_state_function_test_sb_4k 00:36:55.487 ************************************ 00:36:55.487 21:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:36:55.487 00:36:55.487 real 0m11.038s 00:36:55.487 user 0m19.218s 00:36:55.487 sys 0m1.373s 00:36:55.487 21:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:55.487 21:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:55.487 21:50:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:55.487 21:50:28 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:36:55.487 21:50:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:36:55.487 21:50:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:55.487 21:50:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:55.487 ************************************ 00:36:55.487 START TEST raid_superblock_test_4k 00:36:55.487 ************************************ 
00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=161079 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 161079 /var/tmp/spdk-raid.sock 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 161079 ']' 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:55.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:55.487 21:50:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:55.487 [2024-07-15 21:50:28.792047] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
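(Each test in this log starts by launching a fresh bdev_svc application and waiting for its RPC socket, as seen in the "Waiting for process to start up..." lines. A rough sketch of that bring-up follows; the polling loop is a simplified stand-in for the harness's waitforlisten helper and the shutdown is likewise abbreviated.)

# Start the minimal bdev application with RAID debug logging on a private RPC socket.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
svc_pid=$!
# Simplified stand-in for waitforlisten: poll until the UNIX-domain socket answers an RPC.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 2>/dev/null; do
    sleep 0.1
done
# ... drive the test over rpc.py, then shut the application down.
kill "$svc_pid"
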
00:36:55.487 [2024-07-15 21:50:28.792193] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161079 ] 00:36:55.745 [2024-07-15 21:50:28.952289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.004 [2024-07-15 21:50:29.148074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.004 [2024-07-15 21:50:29.337108] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:56.262 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:36:56.521 malloc1 00:36:56.521 21:50:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:56.779 [2024-07-15 21:50:30.009972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:56.780 [2024-07-15 21:50:30.010077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:56.780 [2024-07-15 21:50:30.010123] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:36:56.780 [2024-07-15 21:50:30.010137] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:56.780 [2024-07-15 21:50:30.012069] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:56.780 [2024-07-15 21:50:30.012114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:56.780 pt1 00:36:56.780 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:56.780 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:56.780 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:36:56.780 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:36:56.780 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:56.780 21:50:30 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:56.780 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:56.780 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:56.780 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:36:57.038 malloc2 00:36:57.038 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:57.296 [2024-07-15 21:50:30.474797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:57.296 [2024-07-15 21:50:30.474908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:57.296 [2024-07-15 21:50:30.474956] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:36:57.296 [2024-07-15 21:50:30.474977] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:57.296 [2024-07-15 21:50:30.477037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:57.296 [2024-07-15 21:50:30.477084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:57.296 pt2 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:36:57.296 [2024-07-15 21:50:30.650528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:57.296 [2024-07-15 21:50:30.652305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:57.296 [2024-07-15 21:50:30.652524] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:36:57.296 [2024-07-15 21:50:30.652542] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:57.296 [2024-07-15 21:50:30.652719] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:36:57.296 [2024-07-15 21:50:30.653053] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:36:57.296 [2024-07-15 21:50:30.653071] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:36:57.296 [2024-07-15 21:50:30.653219] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
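(The superblock test builds its bdev stack with the RPCs traced above: malloc bdevs wrapped in passthru bdevs with fixed UUIDs, then a raid1 volume created with the -s superblock flag. The sketch below condenses that setup for illustration, assuming the same socket and script paths as the run; the exact sizes and UUIDs are taken from the trace.)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# Two 32 MiB malloc bdevs (4096-byte blocks), each wrapped in a passthru bdev with a
# fixed UUID so the superblock contents are predictable across the test.
$RPC -s $SOCK bdev_malloc_create 32 4096 -b malloc1
$RPC -s $SOCK bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC -s $SOCK bdev_malloc_create 32 4096 -b malloc2
$RPC -s $SOCK bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# Assemble a raid1 volume on top of the passthru bdevs, writing an on-disk superblock (-s).
$RPC -s $SOCK bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
# The superblock reserves space on each base bdev: the trace reports data_offset 256 and
# data_size 7936 out of the 8192 blocks each base provides.
$RPC -s $SOCK bdev_get_bdevs -b raid_bdev1 | jq '.[]'
$RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
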
00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:57.296 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:57.555 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:57.555 "name": "raid_bdev1", 00:36:57.555 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:36:57.555 "strip_size_kb": 0, 00:36:57.555 "state": "online", 00:36:57.555 "raid_level": "raid1", 00:36:57.555 "superblock": true, 00:36:57.555 "num_base_bdevs": 2, 00:36:57.555 "num_base_bdevs_discovered": 2, 00:36:57.555 "num_base_bdevs_operational": 2, 00:36:57.555 "base_bdevs_list": [ 00:36:57.555 { 00:36:57.555 "name": "pt1", 00:36:57.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:57.555 "is_configured": true, 00:36:57.555 "data_offset": 256, 00:36:57.555 "data_size": 7936 00:36:57.555 }, 00:36:57.555 { 00:36:57.555 "name": "pt2", 00:36:57.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:57.555 "is_configured": true, 00:36:57.555 "data_offset": 256, 00:36:57.555 "data_size": 7936 00:36:57.555 } 00:36:57.555 ] 00:36:57.555 }' 00:36:57.555 21:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:57.555 21:50:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:58.121 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:36:58.122 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:58.122 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:58.122 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:58.122 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:58.122 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:36:58.398 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:58.398 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:58.398 [2024-07-15 21:50:31.668964] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:58.398 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:58.398 "name": "raid_bdev1", 00:36:58.398 "aliases": [ 00:36:58.398 "4ca46807-c048-4b7d-8789-9a2059f6d466" 00:36:58.398 ], 00:36:58.398 "product_name": "Raid Volume", 00:36:58.398 "block_size": 4096, 00:36:58.398 "num_blocks": 7936, 00:36:58.398 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:36:58.398 "assigned_rate_limits": { 00:36:58.398 
"rw_ios_per_sec": 0, 00:36:58.398 "rw_mbytes_per_sec": 0, 00:36:58.398 "r_mbytes_per_sec": 0, 00:36:58.398 "w_mbytes_per_sec": 0 00:36:58.398 }, 00:36:58.398 "claimed": false, 00:36:58.398 "zoned": false, 00:36:58.398 "supported_io_types": { 00:36:58.398 "read": true, 00:36:58.398 "write": true, 00:36:58.398 "unmap": false, 00:36:58.398 "flush": false, 00:36:58.398 "reset": true, 00:36:58.398 "nvme_admin": false, 00:36:58.398 "nvme_io": false, 00:36:58.398 "nvme_io_md": false, 00:36:58.398 "write_zeroes": true, 00:36:58.398 "zcopy": false, 00:36:58.398 "get_zone_info": false, 00:36:58.398 "zone_management": false, 00:36:58.398 "zone_append": false, 00:36:58.398 "compare": false, 00:36:58.398 "compare_and_write": false, 00:36:58.398 "abort": false, 00:36:58.398 "seek_hole": false, 00:36:58.398 "seek_data": false, 00:36:58.398 "copy": false, 00:36:58.398 "nvme_iov_md": false 00:36:58.398 }, 00:36:58.398 "memory_domains": [ 00:36:58.398 { 00:36:58.398 "dma_device_id": "system", 00:36:58.398 "dma_device_type": 1 00:36:58.398 }, 00:36:58.398 { 00:36:58.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.398 "dma_device_type": 2 00:36:58.398 }, 00:36:58.398 { 00:36:58.398 "dma_device_id": "system", 00:36:58.398 "dma_device_type": 1 00:36:58.398 }, 00:36:58.398 { 00:36:58.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.398 "dma_device_type": 2 00:36:58.398 } 00:36:58.398 ], 00:36:58.398 "driver_specific": { 00:36:58.398 "raid": { 00:36:58.398 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:36:58.398 "strip_size_kb": 0, 00:36:58.398 "state": "online", 00:36:58.398 "raid_level": "raid1", 00:36:58.398 "superblock": true, 00:36:58.398 "num_base_bdevs": 2, 00:36:58.398 "num_base_bdevs_discovered": 2, 00:36:58.398 "num_base_bdevs_operational": 2, 00:36:58.398 "base_bdevs_list": [ 00:36:58.398 { 00:36:58.398 "name": "pt1", 00:36:58.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:58.398 "is_configured": true, 00:36:58.398 "data_offset": 256, 00:36:58.398 "data_size": 7936 00:36:58.398 }, 00:36:58.398 { 00:36:58.398 "name": "pt2", 00:36:58.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:58.398 "is_configured": true, 00:36:58.398 "data_offset": 256, 00:36:58.398 "data_size": 7936 00:36:58.398 } 00:36:58.398 ] 00:36:58.398 } 00:36:58.398 } 00:36:58.398 }' 00:36:58.398 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:58.398 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:58.398 pt2' 00:36:58.398 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:58.398 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:58.398 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:58.706 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:58.706 "name": "pt1", 00:36:58.706 "aliases": [ 00:36:58.706 "00000000-0000-0000-0000-000000000001" 00:36:58.706 ], 00:36:58.706 "product_name": "passthru", 00:36:58.706 "block_size": 4096, 00:36:58.706 "num_blocks": 8192, 00:36:58.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:58.706 "assigned_rate_limits": { 00:36:58.706 "rw_ios_per_sec": 0, 00:36:58.706 "rw_mbytes_per_sec": 0, 00:36:58.706 "r_mbytes_per_sec": 0, 00:36:58.706 
"w_mbytes_per_sec": 0 00:36:58.706 }, 00:36:58.706 "claimed": true, 00:36:58.706 "claim_type": "exclusive_write", 00:36:58.706 "zoned": false, 00:36:58.706 "supported_io_types": { 00:36:58.706 "read": true, 00:36:58.706 "write": true, 00:36:58.706 "unmap": true, 00:36:58.706 "flush": true, 00:36:58.706 "reset": true, 00:36:58.706 "nvme_admin": false, 00:36:58.706 "nvme_io": false, 00:36:58.706 "nvme_io_md": false, 00:36:58.706 "write_zeroes": true, 00:36:58.706 "zcopy": true, 00:36:58.706 "get_zone_info": false, 00:36:58.706 "zone_management": false, 00:36:58.706 "zone_append": false, 00:36:58.706 "compare": false, 00:36:58.706 "compare_and_write": false, 00:36:58.706 "abort": true, 00:36:58.706 "seek_hole": false, 00:36:58.706 "seek_data": false, 00:36:58.706 "copy": true, 00:36:58.706 "nvme_iov_md": false 00:36:58.706 }, 00:36:58.706 "memory_domains": [ 00:36:58.706 { 00:36:58.706 "dma_device_id": "system", 00:36:58.706 "dma_device_type": 1 00:36:58.706 }, 00:36:58.706 { 00:36:58.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.706 "dma_device_type": 2 00:36:58.706 } 00:36:58.706 ], 00:36:58.706 "driver_specific": { 00:36:58.706 "passthru": { 00:36:58.707 "name": "pt1", 00:36:58.707 "base_bdev_name": "malloc1" 00:36:58.707 } 00:36:58.707 } 00:36:58.707 }' 00:36:58.707 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:58.707 21:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:58.707 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:58.707 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:58.707 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:58.966 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:58.966 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:58.966 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:58.966 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:58.966 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:58.966 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:59.225 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:59.225 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:59.225 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:59.225 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:59.483 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:59.483 "name": "pt2", 00:36:59.483 "aliases": [ 00:36:59.483 "00000000-0000-0000-0000-000000000002" 00:36:59.483 ], 00:36:59.483 "product_name": "passthru", 00:36:59.483 "block_size": 4096, 00:36:59.483 "num_blocks": 8192, 00:36:59.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:59.483 "assigned_rate_limits": { 00:36:59.483 "rw_ios_per_sec": 0, 00:36:59.483 "rw_mbytes_per_sec": 0, 00:36:59.483 "r_mbytes_per_sec": 0, 00:36:59.483 "w_mbytes_per_sec": 0 00:36:59.483 }, 00:36:59.483 "claimed": true, 00:36:59.483 "claim_type": 
"exclusive_write", 00:36:59.483 "zoned": false, 00:36:59.483 "supported_io_types": { 00:36:59.483 "read": true, 00:36:59.483 "write": true, 00:36:59.483 "unmap": true, 00:36:59.483 "flush": true, 00:36:59.483 "reset": true, 00:36:59.483 "nvme_admin": false, 00:36:59.483 "nvme_io": false, 00:36:59.483 "nvme_io_md": false, 00:36:59.483 "write_zeroes": true, 00:36:59.483 "zcopy": true, 00:36:59.483 "get_zone_info": false, 00:36:59.483 "zone_management": false, 00:36:59.483 "zone_append": false, 00:36:59.483 "compare": false, 00:36:59.483 "compare_and_write": false, 00:36:59.483 "abort": true, 00:36:59.483 "seek_hole": false, 00:36:59.483 "seek_data": false, 00:36:59.483 "copy": true, 00:36:59.483 "nvme_iov_md": false 00:36:59.483 }, 00:36:59.483 "memory_domains": [ 00:36:59.483 { 00:36:59.483 "dma_device_id": "system", 00:36:59.483 "dma_device_type": 1 00:36:59.483 }, 00:36:59.483 { 00:36:59.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.483 "dma_device_type": 2 00:36:59.483 } 00:36:59.483 ], 00:36:59.483 "driver_specific": { 00:36:59.483 "passthru": { 00:36:59.483 "name": "pt2", 00:36:59.483 "base_bdev_name": "malloc2" 00:36:59.483 } 00:36:59.483 } 00:36:59.483 }' 00:36:59.483 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:59.483 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:59.483 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:59.483 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:59.483 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:59.742 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:59.742 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:59.742 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:59.742 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:59.742 21:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:59.742 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:59.742 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:59.742 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:59.742 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:37:00.001 [2024-07-15 21:50:33.258210] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:00.002 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4ca46807-c048-4b7d-8789-9a2059f6d466 00:37:00.002 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 4ca46807-c048-4b7d-8789-9a2059f6d466 ']' 00:37:00.002 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:00.261 [2024-07-15 21:50:33.461594] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:00.261 [2024-07-15 21:50:33.461628] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:00.261 
[2024-07-15 21:50:33.461697] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:00.261 [2024-07-15 21:50:33.461753] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:00.261 [2024-07-15 21:50:33.461764] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:37:00.261 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:00.261 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:37:00.520 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:37:00.520 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:37:00.520 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:00.520 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:00.520 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:00.520 21:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:00.778 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:37:00.778 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:01.037 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:01.038 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:01.038 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:01.297 [2024-07-15 21:50:34.463843] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:01.297 [2024-07-15 21:50:34.465524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:01.297 [2024-07-15 21:50:34.465589] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:01.297 [2024-07-15 21:50:34.465680] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:01.297 [2024-07-15 21:50:34.465705] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:01.297 [2024-07-15 21:50:34.465713] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:37:01.297 request: 00:37:01.297 { 00:37:01.297 "name": "raid_bdev1", 00:37:01.297 "raid_level": "raid1", 00:37:01.297 "base_bdevs": [ 00:37:01.297 "malloc1", 00:37:01.297 "malloc2" 00:37:01.297 ], 00:37:01.297 "superblock": false, 00:37:01.297 "method": "bdev_raid_create", 00:37:01.297 "req_id": 1 00:37:01.297 } 00:37:01.297 Got JSON-RPC error response 00:37:01.297 response: 00:37:01.297 { 00:37:01.297 "code": -17, 00:37:01.297 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:01.297 } 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:37:01.297 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:01.556 [2024-07-15 21:50:34.847119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:01.556 [2024-07-15 21:50:34.847202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:01.556 [2024-07-15 21:50:34.847226] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:01.556 [2024-07-15 21:50:34.847246] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:01.556 [2024-07-15 21:50:34.849227] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:01.556 [2024-07-15 21:50:34.849294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:01.556 [2024-07-15 21:50:34.849411] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:01.556 [2024-07-15 21:50:34.849462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:01.556 pt1 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.556 21:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:01.814 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:01.814 "name": "raid_bdev1", 00:37:01.814 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:37:01.814 "strip_size_kb": 0, 00:37:01.814 "state": "configuring", 00:37:01.814 "raid_level": "raid1", 00:37:01.814 "superblock": true, 00:37:01.814 "num_base_bdevs": 2, 00:37:01.814 "num_base_bdevs_discovered": 1, 00:37:01.814 "num_base_bdevs_operational": 2, 00:37:01.814 "base_bdevs_list": [ 00:37:01.814 { 00:37:01.814 "name": "pt1", 00:37:01.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:01.814 "is_configured": true, 00:37:01.814 "data_offset": 256, 00:37:01.814 "data_size": 7936 00:37:01.814 }, 00:37:01.814 { 00:37:01.814 "name": null, 00:37:01.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:01.814 "is_configured": false, 00:37:01.814 "data_offset": 256, 00:37:01.814 "data_size": 7936 00:37:01.814 } 00:37:01.814 ] 00:37:01.814 }' 00:37:01.814 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:01.814 21:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:02.380 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:37:02.380 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:37:02.380 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:02.380 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:02.637 [2024-07-15 21:50:35.969172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:02.637 [2024-07-15 21:50:35.969261] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:02.637 [2024-07-15 21:50:35.969296] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:02.637 [2024-07-15 21:50:35.969316] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:02.637 [2024-07-15 21:50:35.969748] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:02.637 [2024-07-15 21:50:35.969794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:02.637 [2024-07-15 21:50:35.969894] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:02.637 [2024-07-15 21:50:35.969924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:02.637 [2024-07-15 21:50:35.970038] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:37:02.637 [2024-07-15 21:50:35.970055] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:02.637 [2024-07-15 21:50:35.970158] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:37:02.638 [2024-07-15 21:50:35.970426] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:37:02.638 [2024-07-15 21:50:35.970444] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:37:02.638 [2024-07-15 21:50:35.970570] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:02.638 pt2 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.638 21:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.895 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:02.895 "name": "raid_bdev1", 00:37:02.895 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:37:02.895 "strip_size_kb": 0, 00:37:02.895 "state": "online", 00:37:02.895 "raid_level": "raid1", 
00:37:02.895 "superblock": true, 00:37:02.896 "num_base_bdevs": 2, 00:37:02.896 "num_base_bdevs_discovered": 2, 00:37:02.896 "num_base_bdevs_operational": 2, 00:37:02.896 "base_bdevs_list": [ 00:37:02.896 { 00:37:02.896 "name": "pt1", 00:37:02.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:02.896 "is_configured": true, 00:37:02.896 "data_offset": 256, 00:37:02.896 "data_size": 7936 00:37:02.896 }, 00:37:02.896 { 00:37:02.896 "name": "pt2", 00:37:02.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:02.896 "is_configured": true, 00:37:02.896 "data_offset": 256, 00:37:02.896 "data_size": 7936 00:37:02.896 } 00:37:02.896 ] 00:37:02.896 }' 00:37:02.896 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:02.896 21:50:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:03.460 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:37:03.460 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:03.460 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:03.460 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:03.460 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:03.460 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:37:03.460 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:03.460 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:03.718 [2024-07-15 21:50:36.975656] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:03.718 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:03.718 "name": "raid_bdev1", 00:37:03.718 "aliases": [ 00:37:03.718 "4ca46807-c048-4b7d-8789-9a2059f6d466" 00:37:03.718 ], 00:37:03.718 "product_name": "Raid Volume", 00:37:03.718 "block_size": 4096, 00:37:03.718 "num_blocks": 7936, 00:37:03.718 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:37:03.718 "assigned_rate_limits": { 00:37:03.718 "rw_ios_per_sec": 0, 00:37:03.718 "rw_mbytes_per_sec": 0, 00:37:03.718 "r_mbytes_per_sec": 0, 00:37:03.718 "w_mbytes_per_sec": 0 00:37:03.718 }, 00:37:03.718 "claimed": false, 00:37:03.718 "zoned": false, 00:37:03.718 "supported_io_types": { 00:37:03.718 "read": true, 00:37:03.718 "write": true, 00:37:03.718 "unmap": false, 00:37:03.718 "flush": false, 00:37:03.718 "reset": true, 00:37:03.718 "nvme_admin": false, 00:37:03.718 "nvme_io": false, 00:37:03.718 "nvme_io_md": false, 00:37:03.718 "write_zeroes": true, 00:37:03.718 "zcopy": false, 00:37:03.718 "get_zone_info": false, 00:37:03.718 "zone_management": false, 00:37:03.718 "zone_append": false, 00:37:03.718 "compare": false, 00:37:03.718 "compare_and_write": false, 00:37:03.718 "abort": false, 00:37:03.718 "seek_hole": false, 00:37:03.718 "seek_data": false, 00:37:03.718 "copy": false, 00:37:03.718 "nvme_iov_md": false 00:37:03.718 }, 00:37:03.718 "memory_domains": [ 00:37:03.718 { 00:37:03.718 "dma_device_id": "system", 00:37:03.718 "dma_device_type": 1 00:37:03.718 }, 00:37:03.718 { 00:37:03.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:03.718 "dma_device_type": 2 00:37:03.718 }, 
00:37:03.718 { 00:37:03.718 "dma_device_id": "system", 00:37:03.718 "dma_device_type": 1 00:37:03.718 }, 00:37:03.718 { 00:37:03.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:03.718 "dma_device_type": 2 00:37:03.718 } 00:37:03.718 ], 00:37:03.718 "driver_specific": { 00:37:03.718 "raid": { 00:37:03.718 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:37:03.718 "strip_size_kb": 0, 00:37:03.718 "state": "online", 00:37:03.718 "raid_level": "raid1", 00:37:03.718 "superblock": true, 00:37:03.718 "num_base_bdevs": 2, 00:37:03.718 "num_base_bdevs_discovered": 2, 00:37:03.718 "num_base_bdevs_operational": 2, 00:37:03.718 "base_bdevs_list": [ 00:37:03.718 { 00:37:03.718 "name": "pt1", 00:37:03.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:03.718 "is_configured": true, 00:37:03.718 "data_offset": 256, 00:37:03.718 "data_size": 7936 00:37:03.718 }, 00:37:03.718 { 00:37:03.718 "name": "pt2", 00:37:03.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:03.718 "is_configured": true, 00:37:03.718 "data_offset": 256, 00:37:03.718 "data_size": 7936 00:37:03.718 } 00:37:03.718 ] 00:37:03.718 } 00:37:03.718 } 00:37:03.718 }' 00:37:03.718 21:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:03.718 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:03.718 pt2' 00:37:03.718 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:03.718 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:03.718 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:03.976 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:03.976 "name": "pt1", 00:37:03.976 "aliases": [ 00:37:03.976 "00000000-0000-0000-0000-000000000001" 00:37:03.976 ], 00:37:03.976 "product_name": "passthru", 00:37:03.976 "block_size": 4096, 00:37:03.976 "num_blocks": 8192, 00:37:03.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:03.976 "assigned_rate_limits": { 00:37:03.976 "rw_ios_per_sec": 0, 00:37:03.976 "rw_mbytes_per_sec": 0, 00:37:03.976 "r_mbytes_per_sec": 0, 00:37:03.976 "w_mbytes_per_sec": 0 00:37:03.976 }, 00:37:03.976 "claimed": true, 00:37:03.976 "claim_type": "exclusive_write", 00:37:03.976 "zoned": false, 00:37:03.976 "supported_io_types": { 00:37:03.976 "read": true, 00:37:03.976 "write": true, 00:37:03.976 "unmap": true, 00:37:03.976 "flush": true, 00:37:03.976 "reset": true, 00:37:03.976 "nvme_admin": false, 00:37:03.976 "nvme_io": false, 00:37:03.976 "nvme_io_md": false, 00:37:03.976 "write_zeroes": true, 00:37:03.976 "zcopy": true, 00:37:03.976 "get_zone_info": false, 00:37:03.976 "zone_management": false, 00:37:03.976 "zone_append": false, 00:37:03.976 "compare": false, 00:37:03.976 "compare_and_write": false, 00:37:03.976 "abort": true, 00:37:03.976 "seek_hole": false, 00:37:03.976 "seek_data": false, 00:37:03.976 "copy": true, 00:37:03.977 "nvme_iov_md": false 00:37:03.977 }, 00:37:03.977 "memory_domains": [ 00:37:03.977 { 00:37:03.977 "dma_device_id": "system", 00:37:03.977 "dma_device_type": 1 00:37:03.977 }, 00:37:03.977 { 00:37:03.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:03.977 "dma_device_type": 2 00:37:03.977 } 00:37:03.977 ], 00:37:03.977 "driver_specific": { 00:37:03.977 
"passthru": { 00:37:03.977 "name": "pt1", 00:37:03.977 "base_bdev_name": "malloc1" 00:37:03.977 } 00:37:03.977 } 00:37:03.977 }' 00:37:03.977 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:03.977 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:03.977 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:03.977 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.234 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.234 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:04.234 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.234 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.234 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:04.234 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:04.234 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:04.490 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:04.490 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:04.490 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:04.490 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:04.490 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:04.490 "name": "pt2", 00:37:04.490 "aliases": [ 00:37:04.490 "00000000-0000-0000-0000-000000000002" 00:37:04.490 ], 00:37:04.490 "product_name": "passthru", 00:37:04.490 "block_size": 4096, 00:37:04.490 "num_blocks": 8192, 00:37:04.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:04.491 "assigned_rate_limits": { 00:37:04.491 "rw_ios_per_sec": 0, 00:37:04.491 "rw_mbytes_per_sec": 0, 00:37:04.491 "r_mbytes_per_sec": 0, 00:37:04.491 "w_mbytes_per_sec": 0 00:37:04.491 }, 00:37:04.491 "claimed": true, 00:37:04.491 "claim_type": "exclusive_write", 00:37:04.491 "zoned": false, 00:37:04.491 "supported_io_types": { 00:37:04.491 "read": true, 00:37:04.491 "write": true, 00:37:04.491 "unmap": true, 00:37:04.491 "flush": true, 00:37:04.491 "reset": true, 00:37:04.491 "nvme_admin": false, 00:37:04.491 "nvme_io": false, 00:37:04.491 "nvme_io_md": false, 00:37:04.491 "write_zeroes": true, 00:37:04.491 "zcopy": true, 00:37:04.491 "get_zone_info": false, 00:37:04.491 "zone_management": false, 00:37:04.491 "zone_append": false, 00:37:04.491 "compare": false, 00:37:04.491 "compare_and_write": false, 00:37:04.491 "abort": true, 00:37:04.491 "seek_hole": false, 00:37:04.491 "seek_data": false, 00:37:04.491 "copy": true, 00:37:04.491 "nvme_iov_md": false 00:37:04.491 }, 00:37:04.491 "memory_domains": [ 00:37:04.491 { 00:37:04.491 "dma_device_id": "system", 00:37:04.491 "dma_device_type": 1 00:37:04.491 }, 00:37:04.491 { 00:37:04.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.491 "dma_device_type": 2 00:37:04.491 } 00:37:04.491 ], 00:37:04.491 "driver_specific": { 00:37:04.491 "passthru": { 00:37:04.491 "name": "pt2", 00:37:04.491 "base_bdev_name": "malloc2" 00:37:04.491 } 
00:37:04.491 } 00:37:04.491 }' 00:37:04.491 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:04.747 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:04.747 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:04.747 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.747 21:50:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.747 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:04.747 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.747 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:05.003 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:05.003 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:05.003 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:05.003 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:05.003 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:05.003 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:37:05.260 [2024-07-15 21:50:38.457106] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:05.260 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 4ca46807-c048-4b7d-8789-9a2059f6d466 '!=' 4ca46807-c048-4b7d-8789-9a2059f6d466 ']' 00:37:05.260 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:37:05.260 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:05.260 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:37:05.260 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:05.516 [2024-07-15 21:50:38.648598] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 
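The trace above has just verified the raid_bdev1 properties (4096-byte block size, no metadata), confirmed the volume UUID is unchanged, and deleted the pt1 passthru bdev out from under the online raid1 volume; the next step re-checks that the array stays online in a degraded state. As a hedged, illustrative aside (not part of the captured log: the RPC calls are the ones exercised above, but the jq summary pipeline is an assumption about how to condense their output), the same degraded-state check can be reproduced by hand against this test target:

    # remove one base bdev; a raid1 volume with redundancy should stay online with one member
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # the verification that follows in the log expects: online, 1 of 1 base bdevs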
00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:05.516 "name": "raid_bdev1", 00:37:05.516 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:37:05.516 "strip_size_kb": 0, 00:37:05.516 "state": "online", 00:37:05.516 "raid_level": "raid1", 00:37:05.516 "superblock": true, 00:37:05.516 "num_base_bdevs": 2, 00:37:05.516 "num_base_bdevs_discovered": 1, 00:37:05.516 "num_base_bdevs_operational": 1, 00:37:05.516 "base_bdevs_list": [ 00:37:05.516 { 00:37:05.516 "name": null, 00:37:05.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.516 "is_configured": false, 00:37:05.516 "data_offset": 256, 00:37:05.516 "data_size": 7936 00:37:05.516 }, 00:37:05.516 { 00:37:05.516 "name": "pt2", 00:37:05.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:05.516 "is_configured": true, 00:37:05.516 "data_offset": 256, 00:37:05.516 "data_size": 7936 00:37:05.516 } 00:37:05.516 ] 00:37:05.516 }' 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:05.516 21:50:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:06.079 21:50:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:06.336 [2024-07-15 21:50:39.634856] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:06.336 [2024-07-15 21:50:39.634895] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:06.336 [2024-07-15 21:50:39.634962] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:06.336 [2024-07-15 21:50:39.635003] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:06.336 [2024-07-15 21:50:39.635010] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:37:06.336 21:50:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.336 21:50:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:37:06.594 21:50:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:37:06.594 21:50:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:37:06.594 21:50:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:37:06.594 21:50:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:06.594 21:50:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:06.850 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:37:06.850 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:06.850 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:37:06.850 21:50:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:37:06.850 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:37:06.850 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:07.107 [2024-07-15 21:50:40.277705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:07.107 [2024-07-15 21:50:40.277854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:07.107 [2024-07-15 21:50:40.277895] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:37:07.107 [2024-07-15 21:50:40.277936] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:07.107 [2024-07-15 21:50:40.280262] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:07.107 [2024-07-15 21:50:40.280354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:07.107 [2024-07-15 21:50:40.280478] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:07.107 [2024-07-15 21:50:40.280570] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:07.107 [2024-07-15 21:50:40.280722] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:37:07.107 [2024-07-15 21:50:40.280757] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:07.107 [2024-07-15 21:50:40.280861] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:07.107 [2024-07-15 21:50:40.281170] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:37:07.107 [2024-07-15 21:50:40.281213] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:37:07.107 [2024-07-15 21:50:40.281396] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:07.107 pt2 00:37:07.107 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:07.107 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:07.107 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:07.107 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:07.108 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:07.108 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:07.108 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:07.108 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:07.108 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:07.108 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:07.108 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.108 21:50:40 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:07.364 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:07.364 "name": "raid_bdev1", 00:37:07.364 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:37:07.364 "strip_size_kb": 0, 00:37:07.364 "state": "online", 00:37:07.364 "raid_level": "raid1", 00:37:07.364 "superblock": true, 00:37:07.364 "num_base_bdevs": 2, 00:37:07.364 "num_base_bdevs_discovered": 1, 00:37:07.364 "num_base_bdevs_operational": 1, 00:37:07.364 "base_bdevs_list": [ 00:37:07.364 { 00:37:07.364 "name": null, 00:37:07.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.364 "is_configured": false, 00:37:07.364 "data_offset": 256, 00:37:07.364 "data_size": 7936 00:37:07.364 }, 00:37:07.364 { 00:37:07.364 "name": "pt2", 00:37:07.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:07.364 "is_configured": true, 00:37:07.364 "data_offset": 256, 00:37:07.364 "data_size": 7936 00:37:07.364 } 00:37:07.364 ] 00:37:07.364 }' 00:37:07.364 21:50:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:07.364 21:50:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:07.928 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:07.928 [2024-07-15 21:50:41.295903] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:07.928 [2024-07-15 21:50:41.295998] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:07.928 [2024-07-15 21:50:41.296077] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:07.928 [2024-07-15 21:50:41.296131] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:07.928 [2024-07-15 21:50:41.296149] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:37:08.186 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.186 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:37:08.186 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:37:08.186 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:37:08.186 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:37:08.186 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:08.444 [2024-07-15 21:50:41.743177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:08.444 [2024-07-15 21:50:41.743323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:08.444 [2024-07-15 21:50:41.743374] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:08.444 [2024-07-15 21:50:41.743418] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:08.444 [2024-07-15 21:50:41.745703] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:08.444 [2024-07-15 21:50:41.745799] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:08.444 [2024-07-15 21:50:41.745948] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:08.444 [2024-07-15 21:50:41.746037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:08.444 [2024-07-15 21:50:41.746230] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:08.444 [2024-07-15 21:50:41.746274] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:08.444 [2024-07-15 21:50:41.746302] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:37:08.444 [2024-07-15 21:50:41.746428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:08.444 [2024-07-15 21:50:41.746527] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:37:08.444 [2024-07-15 21:50:41.746560] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:08.444 [2024-07-15 21:50:41.746688] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:08.444 [2024-07-15 21:50:41.747020] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:37:08.444 [2024-07-15 21:50:41.747067] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:37:08.444 [2024-07-15 21:50:41.747256] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:08.444 pt1 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.444 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:08.702 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:08.702 "name": "raid_bdev1", 00:37:08.702 "uuid": "4ca46807-c048-4b7d-8789-9a2059f6d466", 00:37:08.702 "strip_size_kb": 0, 00:37:08.702 "state": "online", 00:37:08.702 "raid_level": "raid1", 00:37:08.702 
"superblock": true, 00:37:08.702 "num_base_bdevs": 2, 00:37:08.702 "num_base_bdevs_discovered": 1, 00:37:08.702 "num_base_bdevs_operational": 1, 00:37:08.702 "base_bdevs_list": [ 00:37:08.702 { 00:37:08.702 "name": null, 00:37:08.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.702 "is_configured": false, 00:37:08.702 "data_offset": 256, 00:37:08.702 "data_size": 7936 00:37:08.702 }, 00:37:08.702 { 00:37:08.702 "name": "pt2", 00:37:08.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:08.702 "is_configured": true, 00:37:08.702 "data_offset": 256, 00:37:08.702 "data_size": 7936 00:37:08.702 } 00:37:08.702 ] 00:37:08.702 }' 00:37:08.702 21:50:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:08.702 21:50:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:09.293 21:50:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:37:09.293 21:50:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:09.552 21:50:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:37:09.552 21:50:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:37:09.552 21:50:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:09.811 [2024-07-15 21:50:43.053172] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 4ca46807-c048-4b7d-8789-9a2059f6d466 '!=' 4ca46807-c048-4b7d-8789-9a2059f6d466 ']' 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 161079 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 161079 ']' 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 161079 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161079 00:37:09.811 killing process with pid 161079 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161079' 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 161079 00:37:09.811 21:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 161079 00:37:09.811 [2024-07-15 21:50:43.098733] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:09.811 [2024-07-15 21:50:43.098806] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:09.811 [2024-07-15 21:50:43.098889] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:09.811 
[2024-07-15 21:50:43.098937] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:37:10.074 [2024-07-15 21:50:43.298172] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:11.453 ************************************ 00:37:11.453 END TEST raid_superblock_test_4k 00:37:11.453 ************************************ 00:37:11.453 21:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:37:11.453 00:37:11.453 real 0m15.783s 00:37:11.453 user 0m28.770s 00:37:11.453 sys 0m1.921s 00:37:11.453 21:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:11.453 21:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:37:11.453 21:50:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:11.453 21:50:44 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:37:11.453 21:50:44 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:37:11.453 21:50:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:37:11.453 21:50:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:11.453 21:50:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:11.453 ************************************ 00:37:11.453 START TEST raid_rebuild_test_sb_4k 00:37:11.453 ************************************ 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@576 -- # local create_arg 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=161619 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 161619 /var/tmp/spdk-raid.sock 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 161619 ']' 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:11.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:11.453 21:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:11.453 [2024-07-15 21:50:44.670242] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:37:11.453 [2024-07-15 21:50:44.670451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161619 ] 00:37:11.453 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:11.453 Zero copy mechanism will not be used. 
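At this point the rebuild test has launched bdevperf as the SPDK application that will host the RAID bdev, and it waits for the JSON-RPC socket before configuring anything. A hedged sketch of that launch step (the bdevperf command line and the waitforlisten helper are taken from the trace above; capturing the background PID with $! is an assumption about the harness, which in this run recorded pid 161619):

    # start bdevperf against a private RPC socket and wait until it is listening
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # helper from the autotest common scripts, as invoked above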
00:37:11.453 [2024-07-15 21:50:44.821210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.711 [2024-07-15 21:50:45.017019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.969 [2024-07-15 21:50:45.202251] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:12.227 21:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:12.227 21:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:37:12.227 21:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:12.227 21:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:37:12.488 BaseBdev1_malloc 00:37:12.488 21:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:12.756 [2024-07-15 21:50:45.909010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:12.756 [2024-07-15 21:50:45.909183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:12.756 [2024-07-15 21:50:45.909233] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:37:12.756 [2024-07-15 21:50:45.909294] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:12.756 [2024-07-15 21:50:45.911500] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:12.756 [2024-07-15 21:50:45.911574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:12.756 BaseBdev1 00:37:12.756 21:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:12.756 21:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:37:13.015 BaseBdev2_malloc 00:37:13.015 21:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:13.015 [2024-07-15 21:50:46.356140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:13.015 [2024-07-15 21:50:46.356338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:13.015 [2024-07-15 21:50:46.356399] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:37:13.015 [2024-07-15 21:50:46.356472] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:13.015 [2024-07-15 21:50:46.358626] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:13.015 [2024-07-15 21:50:46.358710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:13.015 BaseBdev2 00:37:13.015 21:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:37:13.273 spare_malloc 00:37:13.273 21:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:13.531 spare_delay 00:37:13.531 21:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:13.790 [2024-07-15 21:50:46.998455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:13.790 [2024-07-15 21:50:46.998606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:13.790 [2024-07-15 21:50:46.998653] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:13.790 [2024-07-15 21:50:46.998703] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:13.790 [2024-07-15 21:50:47.000763] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:13.790 [2024-07-15 21:50:47.000844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:13.790 spare 00:37:13.790 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:14.049 [2024-07-15 21:50:47.198257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:14.049 [2024-07-15 21:50:47.200055] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:14.049 [2024-07-15 21:50:47.200325] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:37:14.049 [2024-07-15 21:50:47.200361] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:14.049 [2024-07-15 21:50:47.200558] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:37:14.049 [2024-07-15 21:50:47.200958] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:37:14.049 [2024-07-15 21:50:47.201006] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:37:14.049 [2024-07-15 21:50:47.201224] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:14.049 "name": "raid_bdev1", 00:37:14.049 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:14.049 "strip_size_kb": 0, 00:37:14.049 "state": "online", 00:37:14.049 "raid_level": "raid1", 00:37:14.049 "superblock": true, 00:37:14.049 "num_base_bdevs": 2, 00:37:14.049 "num_base_bdevs_discovered": 2, 00:37:14.049 "num_base_bdevs_operational": 2, 00:37:14.049 "base_bdevs_list": [ 00:37:14.049 { 00:37:14.049 "name": "BaseBdev1", 00:37:14.049 "uuid": "bf427a10-b330-5918-883e-dc8f5c2d054d", 00:37:14.049 "is_configured": true, 00:37:14.049 "data_offset": 256, 00:37:14.049 "data_size": 7936 00:37:14.049 }, 00:37:14.049 { 00:37:14.049 "name": "BaseBdev2", 00:37:14.049 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:14.049 "is_configured": true, 00:37:14.049 "data_offset": 256, 00:37:14.049 "data_size": 7936 00:37:14.049 } 00:37:14.049 ] 00:37:14.049 }' 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:14.049 21:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:15.003 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:15.003 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:37:15.003 [2024-07-15 21:50:48.192741] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:15.003 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:37:15.003 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:15.003 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:15.262 21:50:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:15.262 [2024-07-15 21:50:48.591984] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:37:15.262 /dev/nbd0 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:15.262 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:15.521 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:37:15.521 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:15.521 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:15.521 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:15.521 1+0 records in 00:37:15.521 1+0 records out 00:37:15.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341421 s, 12.0 MB/s 00:37:15.521 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:15.521 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:37:15.521 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:15.521 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:15.522 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:37:15.522 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:15.522 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:15.522 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:37:15.522 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:37:15.522 21:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:37:16.089 7936+0 records in 00:37:16.089 7936+0 records out 00:37:16.089 32505856 bytes (33 MB, 31 MiB) copied, 0.644424 s, 50.4 MB/s 00:37:16.089 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:37:16.089 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:16.089 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:16.089 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:37:16.089 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:37:16.089 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:16.089 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:16.348 [2024-07-15 21:50:49.513171] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:16.348 [2024-07-15 21:50:49.684491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:16.348 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:16.608 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:16.608 "name": "raid_bdev1", 00:37:16.608 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:16.608 "strip_size_kb": 0, 00:37:16.608 "state": "online", 00:37:16.608 "raid_level": "raid1", 00:37:16.608 "superblock": true, 00:37:16.608 "num_base_bdevs": 2, 00:37:16.608 "num_base_bdevs_discovered": 1, 00:37:16.608 
"num_base_bdevs_operational": 1, 00:37:16.608 "base_bdevs_list": [ 00:37:16.608 { 00:37:16.608 "name": null, 00:37:16.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:16.608 "is_configured": false, 00:37:16.608 "data_offset": 256, 00:37:16.608 "data_size": 7936 00:37:16.608 }, 00:37:16.608 { 00:37:16.608 "name": "BaseBdev2", 00:37:16.608 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:16.608 "is_configured": true, 00:37:16.608 "data_offset": 256, 00:37:16.608 "data_size": 7936 00:37:16.608 } 00:37:16.608 ] 00:37:16.608 }' 00:37:16.608 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:16.608 21:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:17.175 21:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:17.433 [2024-07-15 21:50:50.666846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:17.433 [2024-07-15 21:50:50.684010] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ffd0 00:37:17.433 [2024-07-15 21:50:50.686186] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:17.433 21:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:37:18.369 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:18.369 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:18.369 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:18.369 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:18.369 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:18.369 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.369 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:18.629 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:18.629 "name": "raid_bdev1", 00:37:18.629 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:18.629 "strip_size_kb": 0, 00:37:18.629 "state": "online", 00:37:18.629 "raid_level": "raid1", 00:37:18.629 "superblock": true, 00:37:18.629 "num_base_bdevs": 2, 00:37:18.629 "num_base_bdevs_discovered": 2, 00:37:18.629 "num_base_bdevs_operational": 2, 00:37:18.629 "process": { 00:37:18.629 "type": "rebuild", 00:37:18.629 "target": "spare", 00:37:18.629 "progress": { 00:37:18.629 "blocks": 2816, 00:37:18.629 "percent": 35 00:37:18.629 } 00:37:18.629 }, 00:37:18.629 "base_bdevs_list": [ 00:37:18.629 { 00:37:18.629 "name": "spare", 00:37:18.629 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:18.629 "is_configured": true, 00:37:18.629 "data_offset": 256, 00:37:18.629 "data_size": 7936 00:37:18.629 }, 00:37:18.629 { 00:37:18.629 "name": "BaseBdev2", 00:37:18.630 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:18.630 "is_configured": true, 00:37:18.630 "data_offset": 256, 00:37:18.630 "data_size": 7936 00:37:18.630 } 00:37:18.630 ] 00:37:18.630 }' 00:37:18.630 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:18.630 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:18.630 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:18.630 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:18.630 21:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:18.889 [2024-07-15 21:50:52.185561] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:18.889 [2024-07-15 21:50:52.195835] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:18.889 [2024-07-15 21:50:52.195951] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:18.889 [2024-07-15 21:50:52.195980] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:18.889 [2024-07-15 21:50:52.196003] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.889 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.149 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:19.149 "name": "raid_bdev1", 00:37:19.149 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:19.149 "strip_size_kb": 0, 00:37:19.149 "state": "online", 00:37:19.149 "raid_level": "raid1", 00:37:19.149 "superblock": true, 00:37:19.149 "num_base_bdevs": 2, 00:37:19.149 "num_base_bdevs_discovered": 1, 00:37:19.149 "num_base_bdevs_operational": 1, 00:37:19.149 "base_bdevs_list": [ 00:37:19.149 { 00:37:19.149 "name": null, 00:37:19.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:19.149 "is_configured": false, 00:37:19.149 "data_offset": 256, 00:37:19.149 "data_size": 7936 00:37:19.149 }, 00:37:19.149 { 00:37:19.149 "name": "BaseBdev2", 00:37:19.149 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:19.149 "is_configured": true, 00:37:19.149 
"data_offset": 256, 00:37:19.149 "data_size": 7936 00:37:19.149 } 00:37:19.149 ] 00:37:19.149 }' 00:37:19.149 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:19.149 21:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:19.718 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:19.718 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:19.718 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:19.718 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:19.718 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:19.718 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:19.718 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.977 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:19.977 "name": "raid_bdev1", 00:37:19.977 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:19.977 "strip_size_kb": 0, 00:37:19.977 "state": "online", 00:37:19.977 "raid_level": "raid1", 00:37:19.977 "superblock": true, 00:37:19.977 "num_base_bdevs": 2, 00:37:19.977 "num_base_bdevs_discovered": 1, 00:37:19.977 "num_base_bdevs_operational": 1, 00:37:19.977 "base_bdevs_list": [ 00:37:19.977 { 00:37:19.977 "name": null, 00:37:19.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:19.977 "is_configured": false, 00:37:19.977 "data_offset": 256, 00:37:19.977 "data_size": 7936 00:37:19.977 }, 00:37:19.977 { 00:37:19.977 "name": "BaseBdev2", 00:37:19.977 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:19.977 "is_configured": true, 00:37:19.977 "data_offset": 256, 00:37:19.977 "data_size": 7936 00:37:19.977 } 00:37:19.977 ] 00:37:19.977 }' 00:37:19.977 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:19.977 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:19.977 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:19.977 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:19.977 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:20.235 [2024-07-15 21:50:53.508901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:20.235 [2024-07-15 21:50:53.526501] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:37:20.235 [2024-07-15 21:50:53.528576] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:20.235 21:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:21.171 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:21.171 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:21.171 
21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:21.171 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:21.171 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:21.171 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:21.171 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.430 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:21.430 "name": "raid_bdev1", 00:37:21.430 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:21.430 "strip_size_kb": 0, 00:37:21.430 "state": "online", 00:37:21.430 "raid_level": "raid1", 00:37:21.430 "superblock": true, 00:37:21.430 "num_base_bdevs": 2, 00:37:21.430 "num_base_bdevs_discovered": 2, 00:37:21.430 "num_base_bdevs_operational": 2, 00:37:21.430 "process": { 00:37:21.430 "type": "rebuild", 00:37:21.430 "target": "spare", 00:37:21.430 "progress": { 00:37:21.430 "blocks": 2816, 00:37:21.430 "percent": 35 00:37:21.430 } 00:37:21.430 }, 00:37:21.430 "base_bdevs_list": [ 00:37:21.430 { 00:37:21.430 "name": "spare", 00:37:21.430 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:21.430 "is_configured": true, 00:37:21.430 "data_offset": 256, 00:37:21.430 "data_size": 7936 00:37:21.430 }, 00:37:21.430 { 00:37:21.430 "name": "BaseBdev2", 00:37:21.430 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:21.430 "is_configured": true, 00:37:21.430 "data_offset": 256, 00:37:21.430 "data_size": 7936 00:37:21.430 } 00:37:21.430 ] 00:37:21.430 }' 00:37:21.430 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:21.430 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:21.430 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:37:21.689 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1307 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:21.689 21:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.689 21:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:21.689 "name": "raid_bdev1", 00:37:21.689 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:21.689 "strip_size_kb": 0, 00:37:21.689 "state": "online", 00:37:21.689 "raid_level": "raid1", 00:37:21.689 "superblock": true, 00:37:21.689 "num_base_bdevs": 2, 00:37:21.689 "num_base_bdevs_discovered": 2, 00:37:21.689 "num_base_bdevs_operational": 2, 00:37:21.689 "process": { 00:37:21.689 "type": "rebuild", 00:37:21.689 "target": "spare", 00:37:21.689 "progress": { 00:37:21.689 "blocks": 3584, 00:37:21.689 "percent": 45 00:37:21.689 } 00:37:21.689 }, 00:37:21.689 "base_bdevs_list": [ 00:37:21.689 { 00:37:21.689 "name": "spare", 00:37:21.689 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:21.689 "is_configured": true, 00:37:21.689 "data_offset": 256, 00:37:21.689 "data_size": 7936 00:37:21.689 }, 00:37:21.689 { 00:37:21.689 "name": "BaseBdev2", 00:37:21.689 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:21.689 "is_configured": true, 00:37:21.689 "data_offset": 256, 00:37:21.689 "data_size": 7936 00:37:21.689 } 00:37:21.689 ] 00:37:21.689 }' 00:37:21.689 21:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:21.689 21:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:21.689 21:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:21.948 21:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:21.948 21:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:22.885 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:22.885 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:22.885 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:22.885 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:22.885 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:22.885 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:22.885 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:22.885 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.143 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:23.143 "name": "raid_bdev1", 00:37:23.143 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:23.143 "strip_size_kb": 0, 00:37:23.143 "state": "online", 00:37:23.143 "raid_level": "raid1", 00:37:23.143 "superblock": true, 00:37:23.143 
"num_base_bdevs": 2, 00:37:23.143 "num_base_bdevs_discovered": 2, 00:37:23.143 "num_base_bdevs_operational": 2, 00:37:23.144 "process": { 00:37:23.144 "type": "rebuild", 00:37:23.144 "target": "spare", 00:37:23.144 "progress": { 00:37:23.144 "blocks": 6912, 00:37:23.144 "percent": 87 00:37:23.144 } 00:37:23.144 }, 00:37:23.144 "base_bdevs_list": [ 00:37:23.144 { 00:37:23.144 "name": "spare", 00:37:23.144 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:23.144 "is_configured": true, 00:37:23.144 "data_offset": 256, 00:37:23.144 "data_size": 7936 00:37:23.144 }, 00:37:23.144 { 00:37:23.144 "name": "BaseBdev2", 00:37:23.144 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:23.144 "is_configured": true, 00:37:23.144 "data_offset": 256, 00:37:23.144 "data_size": 7936 00:37:23.144 } 00:37:23.144 ] 00:37:23.144 }' 00:37:23.144 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:23.144 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:23.144 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:23.144 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:23.144 21:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:23.401 [2024-07-15 21:50:56.647973] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:23.401 [2024-07-15 21:50:56.648170] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:23.401 [2024-07-15 21:50:56.648386] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:24.378 "name": "raid_bdev1", 00:37:24.378 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:24.378 "strip_size_kb": 0, 00:37:24.378 "state": "online", 00:37:24.378 "raid_level": "raid1", 00:37:24.378 "superblock": true, 00:37:24.378 "num_base_bdevs": 2, 00:37:24.378 "num_base_bdevs_discovered": 2, 00:37:24.378 "num_base_bdevs_operational": 2, 00:37:24.378 "base_bdevs_list": [ 00:37:24.378 { 00:37:24.378 "name": "spare", 00:37:24.378 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:24.378 "is_configured": true, 00:37:24.378 "data_offset": 256, 00:37:24.378 "data_size": 7936 00:37:24.378 }, 00:37:24.378 { 00:37:24.378 "name": "BaseBdev2", 00:37:24.378 
"uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:24.378 "is_configured": true, 00:37:24.378 "data_offset": 256, 00:37:24.378 "data_size": 7936 00:37:24.378 } 00:37:24.378 ] 00:37:24.378 }' 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.378 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.637 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:24.637 "name": "raid_bdev1", 00:37:24.637 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:24.637 "strip_size_kb": 0, 00:37:24.637 "state": "online", 00:37:24.637 "raid_level": "raid1", 00:37:24.637 "superblock": true, 00:37:24.637 "num_base_bdevs": 2, 00:37:24.637 "num_base_bdevs_discovered": 2, 00:37:24.637 "num_base_bdevs_operational": 2, 00:37:24.637 "base_bdevs_list": [ 00:37:24.637 { 00:37:24.637 "name": "spare", 00:37:24.637 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:24.637 "is_configured": true, 00:37:24.637 "data_offset": 256, 00:37:24.637 "data_size": 7936 00:37:24.637 }, 00:37:24.637 { 00:37:24.637 "name": "BaseBdev2", 00:37:24.637 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:24.637 "is_configured": true, 00:37:24.637 "data_offset": 256, 00:37:24.637 "data_size": 7936 00:37:24.637 } 00:37:24.637 ] 00:37:24.637 }' 00:37:24.637 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:24.637 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:24.637 21:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:24.896 
21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:24.896 "name": "raid_bdev1", 00:37:24.896 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:24.896 "strip_size_kb": 0, 00:37:24.896 "state": "online", 00:37:24.896 "raid_level": "raid1", 00:37:24.896 "superblock": true, 00:37:24.896 "num_base_bdevs": 2, 00:37:24.896 "num_base_bdevs_discovered": 2, 00:37:24.896 "num_base_bdevs_operational": 2, 00:37:24.896 "base_bdevs_list": [ 00:37:24.896 { 00:37:24.896 "name": "spare", 00:37:24.896 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:24.896 "is_configured": true, 00:37:24.896 "data_offset": 256, 00:37:24.896 "data_size": 7936 00:37:24.896 }, 00:37:24.896 { 00:37:24.896 "name": "BaseBdev2", 00:37:24.896 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:24.896 "is_configured": true, 00:37:24.896 "data_offset": 256, 00:37:24.896 "data_size": 7936 00:37:24.896 } 00:37:24.896 ] 00:37:24.896 }' 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:24.896 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:25.833 21:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:25.833 [2024-07-15 21:50:59.059677] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:25.833 [2024-07-15 21:50:59.059827] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:25.833 [2024-07-15 21:50:59.059940] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:25.833 [2024-07-15 21:50:59.060030] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:25.833 [2024-07-15 21:50:59.060059] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:37:25.833 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.833 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' 
false = true ']' 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:26.094 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:26.354 /dev/nbd0 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:26.354 1+0 records in 00:37:26.354 1+0 records out 00:37:26.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503686 s, 8.1 MB/s 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:37:26.354 /dev/nbd1 00:37:26.354 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:26.613 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:26.613 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:37:26.613 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:37:26.613 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:26.613 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:26.613 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:37:26.613 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:26.614 1+0 records in 00:37:26.614 1+0 records out 00:37:26.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533302 s, 7.7 MB/s 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:26.614 21:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:26.873 21:51:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:26.873 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:37:27.132 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:27.391 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:27.651 [2024-07-15 21:51:00.807192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:27.651 [2024-07-15 21:51:00.807393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:27.651 [2024-07-15 21:51:00.807492] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:27.651 [2024-07-15 21:51:00.807544] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:27.651 [2024-07-15 21:51:00.810318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:27.651 [2024-07-15 21:51:00.810413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:27.651 [2024-07-15 21:51:00.810588] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:27.651 [2024-07-15 21:51:00.810687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:27.651 [2024-07-15 21:51:00.810892] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:27.651 spare 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:27.651 21:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.651 [2024-07-15 21:51:00.910846] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:37:27.651 [2024-07-15 21:51:00.910960] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:27.651 [2024-07-15 21:51:00.911217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:37:27.651 [2024-07-15 21:51:00.911678] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:37:27.651 [2024-07-15 21:51:00.911726] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:37:27.651 [2024-07-15 21:51:00.911944] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:27.651 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:27.651 "name": "raid_bdev1", 00:37:27.651 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:27.651 "strip_size_kb": 0, 00:37:27.651 "state": "online", 00:37:27.651 "raid_level": "raid1", 00:37:27.651 "superblock": true, 00:37:27.651 "num_base_bdevs": 2, 00:37:27.651 "num_base_bdevs_discovered": 2, 00:37:27.651 "num_base_bdevs_operational": 2, 00:37:27.651 "base_bdevs_list": [ 00:37:27.651 { 00:37:27.651 "name": "spare", 00:37:27.651 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:27.651 "is_configured": true, 00:37:27.651 "data_offset": 256, 00:37:27.651 "data_size": 7936 00:37:27.651 }, 00:37:27.651 { 00:37:27.651 "name": "BaseBdev2", 00:37:27.651 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:27.651 "is_configured": true, 00:37:27.651 "data_offset": 256, 00:37:27.651 "data_size": 7936 00:37:27.651 } 00:37:27.651 ] 00:37:27.651 }' 00:37:27.651 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:27.651 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:28.591 21:51:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:28.591 "name": "raid_bdev1", 00:37:28.591 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:28.591 "strip_size_kb": 0, 00:37:28.591 "state": "online", 00:37:28.591 "raid_level": "raid1", 00:37:28.591 "superblock": true, 00:37:28.591 "num_base_bdevs": 2, 00:37:28.591 "num_base_bdevs_discovered": 2, 00:37:28.591 "num_base_bdevs_operational": 2, 00:37:28.591 "base_bdevs_list": [ 00:37:28.591 { 00:37:28.591 "name": "spare", 00:37:28.591 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:28.591 "is_configured": true, 00:37:28.591 "data_offset": 256, 00:37:28.591 "data_size": 7936 00:37:28.591 }, 00:37:28.591 { 00:37:28.591 "name": "BaseBdev2", 00:37:28.591 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:28.591 "is_configured": true, 00:37:28.591 "data_offset": 256, 00:37:28.591 "data_size": 7936 00:37:28.591 } 00:37:28.591 ] 00:37:28.591 }' 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:28.591 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:28.851 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:28.851 21:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:28.851 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:37:28.851 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:29.109 [2024-07-15 21:51:02.333617] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:29.109 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:29.109 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:29.109 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:29.109 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:29.109 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:29.109 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:29.109 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:29.110 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
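The verify_raid_bdev_process check traced a few entries back reduces to a single bdev_raid_get_bdevs call filtered with jq; a minimal standalone sketch of the same check follows, reusing the rpc.py path and socket from the trace (the shell variable names are illustrative only, not part of the test suite):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Fetch the raid_bdev1 descriptor once, then read the background-process fields;
    # the jq fallback '// "none"' yields "none" whenever no rebuild is running.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<<"$info")
    ptarget=$(jq -r '.process.target // "none"' <<<"$info")
    [[ $ptype == none && $ptarget == none ]] || echo "unexpected background process: $ptype on $ptarget"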
00:37:29.110 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:29.110 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:29.110 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.110 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.369 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:29.369 "name": "raid_bdev1", 00:37:29.369 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:29.369 "strip_size_kb": 0, 00:37:29.369 "state": "online", 00:37:29.369 "raid_level": "raid1", 00:37:29.369 "superblock": true, 00:37:29.369 "num_base_bdevs": 2, 00:37:29.369 "num_base_bdevs_discovered": 1, 00:37:29.369 "num_base_bdevs_operational": 1, 00:37:29.369 "base_bdevs_list": [ 00:37:29.369 { 00:37:29.369 "name": null, 00:37:29.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.369 "is_configured": false, 00:37:29.369 "data_offset": 256, 00:37:29.369 "data_size": 7936 00:37:29.369 }, 00:37:29.369 { 00:37:29.369 "name": "BaseBdev2", 00:37:29.369 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:29.369 "is_configured": true, 00:37:29.369 "data_offset": 256, 00:37:29.369 "data_size": 7936 00:37:29.369 } 00:37:29.369 ] 00:37:29.369 }' 00:37:29.369 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:29.369 21:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:29.952 21:51:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:30.211 [2024-07-15 21:51:03.355871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:30.211 [2024-07-15 21:51:03.356220] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:30.211 [2024-07-15 21:51:03.356267] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
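The hot-remove and re-add the trace walks through here are two plain RPC calls against the same socket; issued by hand they would look roughly like this, assuming, as in this run, that the passthru bdev named spare still carries an older copy of the array's superblock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Degrade the array by pulling one base bdev out of raid_bdev1...
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev spare
    # ...then hand the same bdev back: its superblock sequence number is older than
    # the array's, so the raid module re-adds it and kicks off a rebuild.
    "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare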
00:37:30.211 [2024-07-15 21:51:03.356367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:30.211 [2024-07-15 21:51:03.372792] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:37:30.211 [2024-07-15 21:51:03.374946] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:30.211 21:51:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:37:31.149 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:31.149 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:31.149 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:31.149 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:31.149 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:31.149 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.149 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.409 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:31.409 "name": "raid_bdev1", 00:37:31.409 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:31.409 "strip_size_kb": 0, 00:37:31.409 "state": "online", 00:37:31.409 "raid_level": "raid1", 00:37:31.409 "superblock": true, 00:37:31.409 "num_base_bdevs": 2, 00:37:31.409 "num_base_bdevs_discovered": 2, 00:37:31.409 "num_base_bdevs_operational": 2, 00:37:31.409 "process": { 00:37:31.409 "type": "rebuild", 00:37:31.409 "target": "spare", 00:37:31.409 "progress": { 00:37:31.409 "blocks": 2816, 00:37:31.409 "percent": 35 00:37:31.409 } 00:37:31.409 }, 00:37:31.409 "base_bdevs_list": [ 00:37:31.409 { 00:37:31.409 "name": "spare", 00:37:31.409 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:31.409 "is_configured": true, 00:37:31.409 "data_offset": 256, 00:37:31.409 "data_size": 7936 00:37:31.409 }, 00:37:31.409 { 00:37:31.409 "name": "BaseBdev2", 00:37:31.409 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:31.409 "is_configured": true, 00:37:31.409 "data_offset": 256, 00:37:31.409 "data_size": 7936 00:37:31.409 } 00:37:31.409 ] 00:37:31.409 }' 00:37:31.409 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:31.409 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:31.409 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:31.409 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:31.409 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:31.668 [2024-07-15 21:51:04.870296] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:31.668 [2024-07-15 21:51:04.884289] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:31.668 [2024-07-15 21:51:04.884413] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:37:31.668 [2024-07-15 21:51:04.884445] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:31.669 [2024-07-15 21:51:04.884467] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.669 21:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.928 21:51:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:31.928 "name": "raid_bdev1", 00:37:31.928 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:31.928 "strip_size_kb": 0, 00:37:31.928 "state": "online", 00:37:31.928 "raid_level": "raid1", 00:37:31.928 "superblock": true, 00:37:31.928 "num_base_bdevs": 2, 00:37:31.928 "num_base_bdevs_discovered": 1, 00:37:31.928 "num_base_bdevs_operational": 1, 00:37:31.928 "base_bdevs_list": [ 00:37:31.928 { 00:37:31.928 "name": null, 00:37:31.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:31.928 "is_configured": false, 00:37:31.928 "data_offset": 256, 00:37:31.928 "data_size": 7936 00:37:31.928 }, 00:37:31.928 { 00:37:31.928 "name": "BaseBdev2", 00:37:31.928 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:31.928 "is_configured": true, 00:37:31.928 "data_offset": 256, 00:37:31.928 "data_size": 7936 00:37:31.928 } 00:37:31.928 ] 00:37:31.928 }' 00:37:31.928 21:51:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:31.928 21:51:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:32.497 21:51:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:32.757 [2024-07-15 21:51:05.934466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:32.757 [2024-07-15 21:51:05.934718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:32.757 [2024-07-15 21:51:05.934775] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:32.757 [2024-07-15 21:51:05.934825] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:32.757 [2024-07-15 21:51:05.935463] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:32.757 [2024-07-15 21:51:05.935542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:32.757 [2024-07-15 21:51:05.935713] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:32.757 [2024-07-15 21:51:05.935747] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:32.757 [2024-07-15 21:51:05.935770] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:32.757 [2024-07-15 21:51:05.935822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:32.757 [2024-07-15 21:51:05.953092] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:37:32.757 spare 00:37:32.757 [2024-07-15 21:51:05.955224] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:32.757 21:51:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:37:33.695 21:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:33.695 21:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:33.695 21:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:33.695 21:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:33.695 21:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:33.695 21:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.695 21:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:33.954 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:33.954 "name": "raid_bdev1", 00:37:33.954 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:33.954 "strip_size_kb": 0, 00:37:33.954 "state": "online", 00:37:33.954 "raid_level": "raid1", 00:37:33.954 "superblock": true, 00:37:33.954 "num_base_bdevs": 2, 00:37:33.954 "num_base_bdevs_discovered": 2, 00:37:33.954 "num_base_bdevs_operational": 2, 00:37:33.954 "process": { 00:37:33.954 "type": "rebuild", 00:37:33.954 "target": "spare", 00:37:33.954 "progress": { 00:37:33.954 "blocks": 2816, 00:37:33.954 "percent": 35 00:37:33.954 } 00:37:33.954 }, 00:37:33.954 "base_bdevs_list": [ 00:37:33.954 { 00:37:33.954 "name": "spare", 00:37:33.954 "uuid": "a20604ab-aa93-5eca-a3a3-3d9378bf1d14", 00:37:33.954 "is_configured": true, 00:37:33.954 "data_offset": 256, 00:37:33.954 "data_size": 7936 00:37:33.954 }, 00:37:33.954 { 00:37:33.954 "name": "BaseBdev2", 00:37:33.954 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:33.954 "is_configured": true, 00:37:33.954 "data_offset": 256, 00:37:33.954 "data_size": 7936 00:37:33.954 } 00:37:33.954 ] 00:37:33.954 }' 00:37:33.954 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:33.954 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
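While the rebuild is in flight, the same bdev_raid_get_bdevs output also carries the progress object dumped above (blocks and percent). A small polling loop in the spirit of these checks could look like the sketch below; the iteration count and sleep interval are arbitrary choices, not values taken from the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for _ in $(seq 1 30); do
        pct=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"')
        [[ $pct == done ]] && break          # the process object disappears once the rebuild finishes
        echo "rebuild at ${pct}%"
        sleep 1
    done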
00:37:33.954 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:33.954 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:33.954 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:34.214 [2024-07-15 21:51:07.482399] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:34.214 [2024-07-15 21:51:07.565010] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:34.214 [2024-07-15 21:51:07.565178] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:34.214 [2024-07-15 21:51:07.565212] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:34.214 [2024-07-15 21:51:07.565240] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:34.474 "name": "raid_bdev1", 00:37:34.474 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:34.474 "strip_size_kb": 0, 00:37:34.474 "state": "online", 00:37:34.474 "raid_level": "raid1", 00:37:34.474 "superblock": true, 00:37:34.474 "num_base_bdevs": 2, 00:37:34.474 "num_base_bdevs_discovered": 1, 00:37:34.474 "num_base_bdevs_operational": 1, 00:37:34.474 "base_bdevs_list": [ 00:37:34.474 { 00:37:34.474 "name": null, 00:37:34.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:34.474 "is_configured": false, 00:37:34.474 "data_offset": 256, 00:37:34.474 "data_size": 7936 00:37:34.474 }, 00:37:34.474 { 00:37:34.474 "name": "BaseBdev2", 00:37:34.474 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:34.474 "is_configured": true, 00:37:34.474 "data_offset": 256, 00:37:34.474 "data_size": 7936 00:37:34.474 } 00:37:34.474 ] 00:37:34.474 }' 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:37:34.474 21:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:35.410 "name": "raid_bdev1", 00:37:35.410 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:35.410 "strip_size_kb": 0, 00:37:35.410 "state": "online", 00:37:35.410 "raid_level": "raid1", 00:37:35.410 "superblock": true, 00:37:35.410 "num_base_bdevs": 2, 00:37:35.410 "num_base_bdevs_discovered": 1, 00:37:35.410 "num_base_bdevs_operational": 1, 00:37:35.410 "base_bdevs_list": [ 00:37:35.410 { 00:37:35.410 "name": null, 00:37:35.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:35.410 "is_configured": false, 00:37:35.410 "data_offset": 256, 00:37:35.410 "data_size": 7936 00:37:35.410 }, 00:37:35.410 { 00:37:35.410 "name": "BaseBdev2", 00:37:35.410 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:35.410 "is_configured": true, 00:37:35.410 "data_offset": 256, 00:37:35.410 "data_size": 7936 00:37:35.410 } 00:37:35.410 ] 00:37:35.410 }' 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:35.410 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:35.698 21:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:35.972 [2024-07-15 21:51:09.070371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:35.972 [2024-07-15 21:51:09.070571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:35.972 [2024-07-15 21:51:09.070630] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:37:35.972 [2024-07-15 21:51:09.070679] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:35.972 [2024-07-15 21:51:09.071285] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:35.972 [2024-07-15 21:51:09.071356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:37:35.972 [2024-07-15 21:51:09.071513] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:35.972 [2024-07-15 21:51:09.071553] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:35.972 [2024-07-15 21:51:09.071577] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:35.972 BaseBdev1 00:37:35.972 21:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:36.910 "name": "raid_bdev1", 00:37:36.910 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:36.910 "strip_size_kb": 0, 00:37:36.910 "state": "online", 00:37:36.910 "raid_level": "raid1", 00:37:36.910 "superblock": true, 00:37:36.910 "num_base_bdevs": 2, 00:37:36.910 "num_base_bdevs_discovered": 1, 00:37:36.910 "num_base_bdevs_operational": 1, 00:37:36.910 "base_bdevs_list": [ 00:37:36.910 { 00:37:36.910 "name": null, 00:37:36.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:36.910 "is_configured": false, 00:37:36.910 "data_offset": 256, 00:37:36.910 "data_size": 7936 00:37:36.910 }, 00:37:36.910 { 00:37:36.910 "name": "BaseBdev2", 00:37:36.910 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:36.910 "is_configured": true, 00:37:36.910 "data_offset": 256, 00:37:36.910 "data_size": 7936 00:37:36.910 } 00:37:36.910 ] 00:37:36.910 }' 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:36.910 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:37.844 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:37.844 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:37.844 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:37:37.844 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:37.844 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:37.844 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:37.844 21:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:37.844 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:37.844 "name": "raid_bdev1", 00:37:37.844 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:37.844 "strip_size_kb": 0, 00:37:37.844 "state": "online", 00:37:37.844 "raid_level": "raid1", 00:37:37.844 "superblock": true, 00:37:37.844 "num_base_bdevs": 2, 00:37:37.844 "num_base_bdevs_discovered": 1, 00:37:37.844 "num_base_bdevs_operational": 1, 00:37:37.844 "base_bdevs_list": [ 00:37:37.844 { 00:37:37.844 "name": null, 00:37:37.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:37.844 "is_configured": false, 00:37:37.844 "data_offset": 256, 00:37:37.844 "data_size": 7936 00:37:37.844 }, 00:37:37.844 { 00:37:37.844 "name": "BaseBdev2", 00:37:37.844 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:37.844 "is_configured": true, 00:37:37.844 "data_offset": 256, 00:37:37.844 "data_size": 7936 00:37:37.844 } 00:37:37.844 ] 00:37:37.844 }' 00:37:37.844 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:38.103 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:38.103 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:38.103 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:38.104 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:38.363 [2024-07-15 21:51:11.496935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:38.363 [2024-07-15 21:51:11.497241] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:38.363 [2024-07-15 21:51:11.497294] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:38.363 request: 00:37:38.363 { 00:37:38.363 "base_bdev": "BaseBdev1", 00:37:38.363 "raid_bdev": "raid_bdev1", 00:37:38.363 "method": "bdev_raid_add_base_bdev", 00:37:38.363 "req_id": 1 00:37:38.363 } 00:37:38.363 Got JSON-RPC error response 00:37:38.363 response: 00:37:38.363 { 00:37:38.363 "code": -22, 00:37:38.363 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:38.363 } 00:37:38.363 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:37:38.363 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:38.363 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:38.363 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:38.363 21:51:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.313 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:39.573 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:39.573 "name": "raid_bdev1", 00:37:39.573 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:39.573 "strip_size_kb": 0, 00:37:39.573 "state": "online", 00:37:39.573 "raid_level": "raid1", 00:37:39.573 "superblock": true, 00:37:39.573 "num_base_bdevs": 2, 00:37:39.573 "num_base_bdevs_discovered": 1, 00:37:39.573 "num_base_bdevs_operational": 1, 00:37:39.573 
"base_bdevs_list": [ 00:37:39.573 { 00:37:39.573 "name": null, 00:37:39.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:39.573 "is_configured": false, 00:37:39.573 "data_offset": 256, 00:37:39.573 "data_size": 7936 00:37:39.573 }, 00:37:39.573 { 00:37:39.573 "name": "BaseBdev2", 00:37:39.573 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:39.573 "is_configured": true, 00:37:39.573 "data_offset": 256, 00:37:39.573 "data_size": 7936 00:37:39.573 } 00:37:39.573 ] 00:37:39.573 }' 00:37:39.573 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:39.573 21:51:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:40.142 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:40.142 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:40.142 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:40.142 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:40.142 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:40.142 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:40.142 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:40.402 "name": "raid_bdev1", 00:37:40.402 "uuid": "659072c5-00f3-4eee-b684-0bfaff4b8909", 00:37:40.402 "strip_size_kb": 0, 00:37:40.402 "state": "online", 00:37:40.402 "raid_level": "raid1", 00:37:40.402 "superblock": true, 00:37:40.402 "num_base_bdevs": 2, 00:37:40.402 "num_base_bdevs_discovered": 1, 00:37:40.402 "num_base_bdevs_operational": 1, 00:37:40.402 "base_bdevs_list": [ 00:37:40.402 { 00:37:40.402 "name": null, 00:37:40.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.402 "is_configured": false, 00:37:40.402 "data_offset": 256, 00:37:40.402 "data_size": 7936 00:37:40.402 }, 00:37:40.402 { 00:37:40.402 "name": "BaseBdev2", 00:37:40.402 "uuid": "c7f88884-bb31-56bb-8706-3e3fb6176464", 00:37:40.402 "is_configured": true, 00:37:40.402 "data_offset": 256, 00:37:40.402 "data_size": 7936 00:37:40.402 } 00:37:40.402 ] 00:37:40.402 }' 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 161619 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 161619 ']' 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 161619 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161619 00:37:40.402 killing process with pid 161619 00:37:40.402 Received shutdown signal, test time was about 60.000000 seconds 00:37:40.402 00:37:40.402 Latency(us) 00:37:40.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.402 =================================================================================================================== 00:37:40.402 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161619' 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 161619 00:37:40.402 21:51:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 161619 00:37:40.402 [2024-07-15 21:51:13.666524] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:40.402 [2024-07-15 21:51:13.666751] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:40.402 [2024-07-15 21:51:13.666853] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:40.402 [2024-07-15 21:51:13.666895] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:37:40.662 [2024-07-15 21:51:13.996810] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:42.038 ************************************ 00:37:42.038 END TEST raid_rebuild_test_sb_4k 00:37:42.038 ************************************ 00:37:42.038 21:51:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:37:42.038 00:37:42.038 real 0m30.812s 00:37:42.038 user 0m48.332s 00:37:42.038 sys 0m3.463s 00:37:42.038 21:51:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:42.038 21:51:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:42.296 21:51:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:42.296 21:51:15 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:37:42.296 21:51:15 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:37:42.296 21:51:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:37:42.296 21:51:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:42.296 21:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:42.296 ************************************ 00:37:42.296 START TEST raid_state_function_test_sb_md_separate 00:37:42.296 ************************************ 00:37:42.296 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:37:42.296 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:37:42.296 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:37:42.296 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@222 -- # local superblock=true 00:37:42.296 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:37:42.296 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:37:42.296 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:42.297 Process raid pid: 162535 00:37:42.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=162535 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 162535' 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 162535 /var/tmp/spdk-raid.sock 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 162535 ']' 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:42.297 21:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:42.297 [2024-07-15 21:51:15.518861] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:37:42.297 [2024-07-15 21:51:15.519104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.297 [2024-07-15 21:51:15.665593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.556 [2024-07-15 21:51:15.915500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.815 [2024-07-15 21:51:16.166193] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:43.077 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:43.077 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:37:43.077 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:43.339 [2024-07-15 21:51:16.530601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:43.339 [2024-07-15 21:51:16.530822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:43.339 [2024-07-15 21:51:16.530859] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:43.339 [2024-07-15 21:51:16.530902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:43.339 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:43.597 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:43.597 "name": "Existed_Raid", 00:37:43.597 "uuid": "31efd981-8109-4905-ab03-f1725f7ca233", 00:37:43.597 "strip_size_kb": 0, 00:37:43.597 "state": "configuring", 00:37:43.597 "raid_level": "raid1", 00:37:43.597 "superblock": true, 00:37:43.597 "num_base_bdevs": 2, 00:37:43.597 "num_base_bdevs_discovered": 0, 00:37:43.597 "num_base_bdevs_operational": 2, 00:37:43.597 "base_bdevs_list": [ 00:37:43.597 { 00:37:43.597 "name": "BaseBdev1", 00:37:43.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.597 "is_configured": false, 00:37:43.597 "data_offset": 0, 00:37:43.597 "data_size": 0 00:37:43.597 }, 00:37:43.597 { 00:37:43.597 "name": "BaseBdev2", 00:37:43.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.597 "is_configured": false, 00:37:43.597 "data_offset": 0, 00:37:43.597 "data_size": 0 00:37:43.597 } 00:37:43.597 ] 00:37:43.597 }' 00:37:43.597 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:43.597 21:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:44.162 21:51:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:44.418 [2024-07-15 21:51:17.564633] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:44.418 [2024-07-15 21:51:17.564785] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:37:44.418 21:51:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:44.418 [2024-07-15 21:51:17.760328] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:44.418 [2024-07-15 21:51:17.760509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:44.418 [2024-07-15 21:51:17.760538] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:44.418 [2024-07-15 21:51:17.760577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:44.419 21:51:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:37:44.676 [2024-07-15 21:51:18.029030] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:44.676 BaseBdev1 00:37:44.676 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:37:44.676 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:37:44.676 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 
-- # local bdev_timeout= 00:37:44.676 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:37:44.676 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:44.676 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:44.676 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:44.933 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:45.190 [ 00:37:45.190 { 00:37:45.190 "name": "BaseBdev1", 00:37:45.190 "aliases": [ 00:37:45.190 "eadd41cd-4203-4a5a-9e6d-20768789c662" 00:37:45.190 ], 00:37:45.190 "product_name": "Malloc disk", 00:37:45.190 "block_size": 4096, 00:37:45.190 "num_blocks": 8192, 00:37:45.190 "uuid": "eadd41cd-4203-4a5a-9e6d-20768789c662", 00:37:45.190 "md_size": 32, 00:37:45.190 "md_interleave": false, 00:37:45.190 "dif_type": 0, 00:37:45.190 "assigned_rate_limits": { 00:37:45.190 "rw_ios_per_sec": 0, 00:37:45.190 "rw_mbytes_per_sec": 0, 00:37:45.190 "r_mbytes_per_sec": 0, 00:37:45.190 "w_mbytes_per_sec": 0 00:37:45.190 }, 00:37:45.190 "claimed": true, 00:37:45.190 "claim_type": "exclusive_write", 00:37:45.190 "zoned": false, 00:37:45.190 "supported_io_types": { 00:37:45.190 "read": true, 00:37:45.190 "write": true, 00:37:45.190 "unmap": true, 00:37:45.190 "flush": true, 00:37:45.190 "reset": true, 00:37:45.190 "nvme_admin": false, 00:37:45.190 "nvme_io": false, 00:37:45.190 "nvme_io_md": false, 00:37:45.190 "write_zeroes": true, 00:37:45.190 "zcopy": true, 00:37:45.190 "get_zone_info": false, 00:37:45.190 "zone_management": false, 00:37:45.190 "zone_append": false, 00:37:45.190 "compare": false, 00:37:45.190 "compare_and_write": false, 00:37:45.190 "abort": true, 00:37:45.190 "seek_hole": false, 00:37:45.190 "seek_data": false, 00:37:45.190 "copy": true, 00:37:45.190 "nvme_iov_md": false 00:37:45.190 }, 00:37:45.190 "memory_domains": [ 00:37:45.190 { 00:37:45.190 "dma_device_id": "system", 00:37:45.190 "dma_device_type": 1 00:37:45.190 }, 00:37:45.190 { 00:37:45.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:45.190 "dma_device_type": 2 00:37:45.190 } 00:37:45.190 ], 00:37:45.190 "driver_specific": {} 00:37:45.190 } 00:37:45.190 ] 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:45.190 21:51:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:45.190 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:45.447 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:45.447 "name": "Existed_Raid", 00:37:45.447 "uuid": "31ab2c90-7d5e-407c-951d-ece073bf02ec", 00:37:45.447 "strip_size_kb": 0, 00:37:45.447 "state": "configuring", 00:37:45.447 "raid_level": "raid1", 00:37:45.447 "superblock": true, 00:37:45.447 "num_base_bdevs": 2, 00:37:45.447 "num_base_bdevs_discovered": 1, 00:37:45.447 "num_base_bdevs_operational": 2, 00:37:45.447 "base_bdevs_list": [ 00:37:45.447 { 00:37:45.447 "name": "BaseBdev1", 00:37:45.447 "uuid": "eadd41cd-4203-4a5a-9e6d-20768789c662", 00:37:45.447 "is_configured": true, 00:37:45.447 "data_offset": 256, 00:37:45.447 "data_size": 7936 00:37:45.447 }, 00:37:45.447 { 00:37:45.447 "name": "BaseBdev2", 00:37:45.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:45.447 "is_configured": false, 00:37:45.447 "data_offset": 0, 00:37:45.447 "data_size": 0 00:37:45.447 } 00:37:45.447 ] 00:37:45.447 }' 00:37:45.447 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:45.447 21:51:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:46.011 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:46.268 [2024-07-15 21:51:19.498596] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:46.268 [2024-07-15 21:51:19.498772] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:37:46.268 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:46.526 [2024-07-15 21:51:19.750277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:46.526 [2024-07-15 21:51:19.752509] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:46.526 [2024-07-15 21:51:19.752616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:46.526 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:37:46.526 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:46.526 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 
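The second test's building blocks are the two RPCs already visible in the trace: malloc base bdevs with 4096-byte blocks and 32 bytes of separate (non-interleaved) metadata, and a raid1 bdev created over them with an on-disk superblock. Run by hand, the sequence is roughly the following (sizes and names copied from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # 32 MiB malloc bdevs (8192 blocks of 4096 bytes) with 32-byte separate metadata.
    "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
    "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
    # raid1 across both; -s requests a superblock so the array can be examined and
    # reassembled later, which is what the _sb_ variants of these tests rely on.
    "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid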
00:37:46.526 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:46.526 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:46.526 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:46.526 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:46.527 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:46.527 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:46.527 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:46.527 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:46.527 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:46.527 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:46.527 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:46.784 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:46.784 "name": "Existed_Raid", 00:37:46.784 "uuid": "505c211f-9335-4072-b1be-c3b675869dc8", 00:37:46.784 "strip_size_kb": 0, 00:37:46.784 "state": "configuring", 00:37:46.784 "raid_level": "raid1", 00:37:46.784 "superblock": true, 00:37:46.784 "num_base_bdevs": 2, 00:37:46.784 "num_base_bdevs_discovered": 1, 00:37:46.784 "num_base_bdevs_operational": 2, 00:37:46.784 "base_bdevs_list": [ 00:37:46.784 { 00:37:46.784 "name": "BaseBdev1", 00:37:46.784 "uuid": "eadd41cd-4203-4a5a-9e6d-20768789c662", 00:37:46.784 "is_configured": true, 00:37:46.784 "data_offset": 256, 00:37:46.784 "data_size": 7936 00:37:46.784 }, 00:37:46.784 { 00:37:46.784 "name": "BaseBdev2", 00:37:46.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:46.784 "is_configured": false, 00:37:46.784 "data_offset": 0, 00:37:46.784 "data_size": 0 00:37:46.784 } 00:37:46.784 ] 00:37:46.784 }' 00:37:46.784 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:46.784 21:51:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:47.350 21:51:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:37:47.608 [2024-07-15 21:51:20.843171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:47.608 [2024-07-15 21:51:20.843481] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:37:47.608 [2024-07-15 21:51:20.843525] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:47.608 [2024-07-15 21:51:20.843685] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:37:47.608 [2024-07-15 21:51:20.843836] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000007580 00:37:47.608 [2024-07-15 21:51:20.843869] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:37:47.608 BaseBdev2 00:37:47.608 [2024-07-15 21:51:20.843979] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:47.608 21:51:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:37:47.608 21:51:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:37:47.608 21:51:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:47.608 21:51:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:37:47.608 21:51:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:47.608 21:51:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:47.608 21:51:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:47.866 [ 00:37:47.866 { 00:37:47.866 "name": "BaseBdev2", 00:37:47.866 "aliases": [ 00:37:47.866 "d3d464bd-f7df-41f3-bc60-e3a5f958cdc9" 00:37:47.866 ], 00:37:47.866 "product_name": "Malloc disk", 00:37:47.866 "block_size": 4096, 00:37:47.866 "num_blocks": 8192, 00:37:47.866 "uuid": "d3d464bd-f7df-41f3-bc60-e3a5f958cdc9", 00:37:47.866 "md_size": 32, 00:37:47.866 "md_interleave": false, 00:37:47.866 "dif_type": 0, 00:37:47.866 "assigned_rate_limits": { 00:37:47.866 "rw_ios_per_sec": 0, 00:37:47.866 "rw_mbytes_per_sec": 0, 00:37:47.866 "r_mbytes_per_sec": 0, 00:37:47.866 "w_mbytes_per_sec": 0 00:37:47.866 }, 00:37:47.866 "claimed": true, 00:37:47.866 "claim_type": "exclusive_write", 00:37:47.866 "zoned": false, 00:37:47.866 "supported_io_types": { 00:37:47.866 "read": true, 00:37:47.866 "write": true, 00:37:47.866 "unmap": true, 00:37:47.866 "flush": true, 00:37:47.866 "reset": true, 00:37:47.866 "nvme_admin": false, 00:37:47.866 "nvme_io": false, 00:37:47.866 "nvme_io_md": false, 00:37:47.866 "write_zeroes": true, 00:37:47.866 "zcopy": true, 00:37:47.866 "get_zone_info": false, 00:37:47.866 "zone_management": false, 00:37:47.866 "zone_append": false, 00:37:47.866 "compare": false, 00:37:47.866 "compare_and_write": false, 00:37:47.866 "abort": true, 00:37:47.866 "seek_hole": false, 00:37:47.866 "seek_data": false, 00:37:47.866 "copy": true, 00:37:47.866 "nvme_iov_md": false 00:37:47.866 }, 00:37:47.866 "memory_domains": [ 00:37:47.866 { 00:37:47.866 "dma_device_id": "system", 00:37:47.866 "dma_device_type": 1 00:37:47.866 }, 00:37:47.866 { 00:37:47.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:47.866 "dma_device_type": 2 00:37:47.866 } 00:37:47.866 ], 00:37:47.866 "driver_specific": {} 00:37:47.866 } 00:37:47.866 ] 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:47.866 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:48.124 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:48.124 "name": "Existed_Raid", 00:37:48.124 "uuid": "505c211f-9335-4072-b1be-c3b675869dc8", 00:37:48.124 "strip_size_kb": 0, 00:37:48.124 "state": "online", 00:37:48.124 "raid_level": "raid1", 00:37:48.124 "superblock": true, 00:37:48.124 "num_base_bdevs": 2, 00:37:48.124 "num_base_bdevs_discovered": 2, 00:37:48.124 "num_base_bdevs_operational": 2, 00:37:48.124 "base_bdevs_list": [ 00:37:48.124 { 00:37:48.124 "name": "BaseBdev1", 00:37:48.124 "uuid": "eadd41cd-4203-4a5a-9e6d-20768789c662", 00:37:48.124 "is_configured": true, 00:37:48.124 "data_offset": 256, 00:37:48.124 "data_size": 7936 00:37:48.124 }, 00:37:48.124 { 00:37:48.124 "name": "BaseBdev2", 00:37:48.124 "uuid": "d3d464bd-f7df-41f3-bc60-e3a5f958cdc9", 00:37:48.124 "is_configured": true, 00:37:48.124 "data_offset": 256, 00:37:48.124 "data_size": 7936 00:37:48.124 } 00:37:48.124 ] 00:37:48.124 }' 00:37:48.124 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:48.124 21:51:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:49.057 [2024-07-15 21:51:22.249229] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:49.057 "name": "Existed_Raid", 00:37:49.057 "aliases": [ 00:37:49.057 "505c211f-9335-4072-b1be-c3b675869dc8" 00:37:49.057 ], 00:37:49.057 "product_name": "Raid Volume", 00:37:49.057 "block_size": 4096, 00:37:49.057 "num_blocks": 7936, 00:37:49.057 "uuid": "505c211f-9335-4072-b1be-c3b675869dc8", 00:37:49.057 "md_size": 32, 00:37:49.057 "md_interleave": false, 00:37:49.057 "dif_type": 0, 00:37:49.057 "assigned_rate_limits": { 00:37:49.057 "rw_ios_per_sec": 0, 00:37:49.057 "rw_mbytes_per_sec": 0, 00:37:49.057 "r_mbytes_per_sec": 0, 00:37:49.057 "w_mbytes_per_sec": 0 00:37:49.057 }, 00:37:49.057 "claimed": false, 00:37:49.057 "zoned": false, 00:37:49.057 "supported_io_types": { 00:37:49.057 "read": true, 00:37:49.057 "write": true, 00:37:49.057 "unmap": false, 00:37:49.057 "flush": false, 00:37:49.057 "reset": true, 00:37:49.057 "nvme_admin": false, 00:37:49.057 "nvme_io": false, 00:37:49.057 "nvme_io_md": false, 00:37:49.057 "write_zeroes": true, 00:37:49.057 "zcopy": false, 00:37:49.057 "get_zone_info": false, 00:37:49.057 "zone_management": false, 00:37:49.057 "zone_append": false, 00:37:49.057 "compare": false, 00:37:49.057 "compare_and_write": false, 00:37:49.057 "abort": false, 00:37:49.057 "seek_hole": false, 00:37:49.057 "seek_data": false, 00:37:49.057 "copy": false, 00:37:49.057 "nvme_iov_md": false 00:37:49.057 }, 00:37:49.057 "memory_domains": [ 00:37:49.057 { 00:37:49.057 "dma_device_id": "system", 00:37:49.057 "dma_device_type": 1 00:37:49.057 }, 00:37:49.057 { 00:37:49.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:49.057 "dma_device_type": 2 00:37:49.057 }, 00:37:49.057 { 00:37:49.057 "dma_device_id": "system", 00:37:49.057 "dma_device_type": 1 00:37:49.057 }, 00:37:49.057 { 00:37:49.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:49.057 "dma_device_type": 2 00:37:49.057 } 00:37:49.057 ], 00:37:49.057 "driver_specific": { 00:37:49.057 "raid": { 00:37:49.057 "uuid": "505c211f-9335-4072-b1be-c3b675869dc8", 00:37:49.057 "strip_size_kb": 0, 00:37:49.057 "state": "online", 00:37:49.057 "raid_level": "raid1", 00:37:49.057 "superblock": true, 00:37:49.057 "num_base_bdevs": 2, 00:37:49.057 "num_base_bdevs_discovered": 2, 00:37:49.057 "num_base_bdevs_operational": 2, 00:37:49.057 "base_bdevs_list": [ 00:37:49.057 { 00:37:49.057 "name": "BaseBdev1", 00:37:49.057 "uuid": "eadd41cd-4203-4a5a-9e6d-20768789c662", 00:37:49.057 "is_configured": true, 00:37:49.057 "data_offset": 256, 00:37:49.057 "data_size": 7936 00:37:49.057 }, 00:37:49.057 { 00:37:49.057 "name": "BaseBdev2", 00:37:49.057 "uuid": "d3d464bd-f7df-41f3-bc60-e3a5f958cdc9", 00:37:49.057 "is_configured": true, 00:37:49.057 "data_offset": 256, 00:37:49.057 "data_size": 7936 00:37:49.057 } 00:37:49.057 ] 00:37:49.057 } 00:37:49.057 } 00:37:49.057 }' 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- 
# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:49.057 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:37:49.057 BaseBdev2' 00:37:49.058 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:49.058 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:49.058 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:37:49.315 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:49.315 "name": "BaseBdev1", 00:37:49.315 "aliases": [ 00:37:49.315 "eadd41cd-4203-4a5a-9e6d-20768789c662" 00:37:49.315 ], 00:37:49.315 "product_name": "Malloc disk", 00:37:49.315 "block_size": 4096, 00:37:49.315 "num_blocks": 8192, 00:37:49.315 "uuid": "eadd41cd-4203-4a5a-9e6d-20768789c662", 00:37:49.315 "md_size": 32, 00:37:49.315 "md_interleave": false, 00:37:49.315 "dif_type": 0, 00:37:49.315 "assigned_rate_limits": { 00:37:49.315 "rw_ios_per_sec": 0, 00:37:49.315 "rw_mbytes_per_sec": 0, 00:37:49.315 "r_mbytes_per_sec": 0, 00:37:49.315 "w_mbytes_per_sec": 0 00:37:49.315 }, 00:37:49.315 "claimed": true, 00:37:49.315 "claim_type": "exclusive_write", 00:37:49.315 "zoned": false, 00:37:49.315 "supported_io_types": { 00:37:49.315 "read": true, 00:37:49.315 "write": true, 00:37:49.315 "unmap": true, 00:37:49.315 "flush": true, 00:37:49.315 "reset": true, 00:37:49.315 "nvme_admin": false, 00:37:49.315 "nvme_io": false, 00:37:49.315 "nvme_io_md": false, 00:37:49.315 "write_zeroes": true, 00:37:49.315 "zcopy": true, 00:37:49.315 "get_zone_info": false, 00:37:49.315 "zone_management": false, 00:37:49.315 "zone_append": false, 00:37:49.315 "compare": false, 00:37:49.315 "compare_and_write": false, 00:37:49.315 "abort": true, 00:37:49.315 "seek_hole": false, 00:37:49.315 "seek_data": false, 00:37:49.315 "copy": true, 00:37:49.315 "nvme_iov_md": false 00:37:49.315 }, 00:37:49.315 "memory_domains": [ 00:37:49.315 { 00:37:49.315 "dma_device_id": "system", 00:37:49.315 "dma_device_type": 1 00:37:49.315 }, 00:37:49.315 { 00:37:49.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:49.315 "dma_device_type": 2 00:37:49.315 } 00:37:49.315 ], 00:37:49.315 "driver_specific": {} 00:37:49.315 }' 00:37:49.315 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:49.315 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:49.315 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:49.315 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:49.315 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:49.585 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:49.585 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:49.585 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:49.585 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- 
# [[ false == false ]] 00:37:49.585 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:49.585 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:49.842 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:49.842 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:49.842 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:49.842 21:51:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:49.842 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:49.842 "name": "BaseBdev2", 00:37:49.842 "aliases": [ 00:37:49.842 "d3d464bd-f7df-41f3-bc60-e3a5f958cdc9" 00:37:49.842 ], 00:37:49.842 "product_name": "Malloc disk", 00:37:49.842 "block_size": 4096, 00:37:49.842 "num_blocks": 8192, 00:37:49.842 "uuid": "d3d464bd-f7df-41f3-bc60-e3a5f958cdc9", 00:37:49.842 "md_size": 32, 00:37:49.842 "md_interleave": false, 00:37:49.842 "dif_type": 0, 00:37:49.842 "assigned_rate_limits": { 00:37:49.842 "rw_ios_per_sec": 0, 00:37:49.842 "rw_mbytes_per_sec": 0, 00:37:49.842 "r_mbytes_per_sec": 0, 00:37:49.842 "w_mbytes_per_sec": 0 00:37:49.842 }, 00:37:49.842 "claimed": true, 00:37:49.842 "claim_type": "exclusive_write", 00:37:49.842 "zoned": false, 00:37:49.842 "supported_io_types": { 00:37:49.842 "read": true, 00:37:49.842 "write": true, 00:37:49.842 "unmap": true, 00:37:49.842 "flush": true, 00:37:49.842 "reset": true, 00:37:49.842 "nvme_admin": false, 00:37:49.842 "nvme_io": false, 00:37:49.842 "nvme_io_md": false, 00:37:49.842 "write_zeroes": true, 00:37:49.842 "zcopy": true, 00:37:49.842 "get_zone_info": false, 00:37:49.842 "zone_management": false, 00:37:49.842 "zone_append": false, 00:37:49.842 "compare": false, 00:37:49.842 "compare_and_write": false, 00:37:49.842 "abort": true, 00:37:49.842 "seek_hole": false, 00:37:49.842 "seek_data": false, 00:37:49.842 "copy": true, 00:37:49.842 "nvme_iov_md": false 00:37:49.842 }, 00:37:49.842 "memory_domains": [ 00:37:49.842 { 00:37:49.842 "dma_device_id": "system", 00:37:49.842 "dma_device_type": 1 00:37:49.842 }, 00:37:49.842 { 00:37:49.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:49.842 "dma_device_type": 2 00:37:49.842 } 00:37:49.842 ], 00:37:49.842 "driver_specific": {} 00:37:49.842 }' 00:37:49.842 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:50.099 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:50.099 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:50.099 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:50.099 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:50.099 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:50.099 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:50.099 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:50.357 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:50.357 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:50.357 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:50.357 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:50.357 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:50.616 [2024-07-15 21:51:23.802411] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:50.616 21:51:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:50.875 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:50.875 "name": "Existed_Raid", 00:37:50.875 "uuid": "505c211f-9335-4072-b1be-c3b675869dc8", 00:37:50.875 "strip_size_kb": 0, 00:37:50.875 "state": "online", 00:37:50.875 "raid_level": "raid1", 00:37:50.875 "superblock": true, 00:37:50.875 "num_base_bdevs": 2, 00:37:50.875 "num_base_bdevs_discovered": 1, 00:37:50.875 "num_base_bdevs_operational": 1, 00:37:50.875 
"base_bdevs_list": [ 00:37:50.875 { 00:37:50.875 "name": null, 00:37:50.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:50.875 "is_configured": false, 00:37:50.875 "data_offset": 256, 00:37:50.875 "data_size": 7936 00:37:50.875 }, 00:37:50.875 { 00:37:50.875 "name": "BaseBdev2", 00:37:50.875 "uuid": "d3d464bd-f7df-41f3-bc60-e3a5f958cdc9", 00:37:50.875 "is_configured": true, 00:37:50.875 "data_offset": 256, 00:37:50.875 "data_size": 7936 00:37:50.875 } 00:37:50.875 ] 00:37:50.875 }' 00:37:50.875 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:50.875 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:51.441 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:37:51.441 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:51.441 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:51.441 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:37:51.701 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:37:51.701 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:51.701 21:51:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:51.959 [2024-07-15 21:51:25.184741] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:51.959 [2024-07-15 21:51:25.184902] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:51.959 [2024-07-15 21:51:25.291971] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:51.959 [2024-07-15 21:51:25.292085] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:51.959 [2024-07-15 21:51:25.292111] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:37:51.959 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:37:51.959 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:51.959 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:51.959 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:37:52.217 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 162535 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@948 -- # '[' -z 162535 ']' 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 162535 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162535 00:37:52.218 killing process with pid 162535 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162535' 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 162535 00:37:52.218 21:51:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 162535 00:37:52.218 [2024-07-15 21:51:25.532588] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:52.218 [2024-07-15 21:51:25.532718] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:53.595 ************************************ 00:37:53.595 END TEST raid_state_function_test_sb_md_separate 00:37:53.595 ************************************ 00:37:53.595 21:51:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:37:53.595 00:37:53.595 real 0m11.340s 00:37:53.595 user 0m19.617s 00:37:53.595 sys 0m1.521s 00:37:53.595 21:51:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:53.595 21:51:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:53.595 21:51:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:53.595 21:51:26 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:37:53.595 21:51:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:37:53.595 21:51:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:53.595 21:51:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:53.595 ************************************ 00:37:53.595 START TEST raid_superblock_test_md_separate 00:37:53.595 ************************************ 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 
00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:37:53.595 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=162920 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 162920 /var/tmp/spdk-raid.sock 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 162920 ']' 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:53.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:53.596 21:51:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:53.596 [2024-07-15 21:51:26.940261] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
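Editor's note: at this point the superblock test launches its own bdev_svc application with the bdev_raid debug log flag and waits for the RPC socket before issuing any bdev_* calls. The fragment below is a rough, hedged equivalent of that start-up step; it uses a plain polling loop with the rpc_get_methods RPC as a liveness probe instead of the suite's waitforlisten helper, and the binary path and socket name are simply the ones that appear in this log.

    #!/usr/bin/env bash
    # Sketch only: start bdev_svc as an RPC target and wait until its UNIX socket answers.
    svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    "$svc" -r "$sock" -L bdev_raid &   # -L bdev_raid enables the *DEBUG* lines seen in this output
    raid_pid=$!

    # Poll until the app responds on the socket (the suite does this via waitforlisten).
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || { echo "bdev_svc exited early" >&2; exit 1; }
        sleep 0.2
    done
    echo "bdev_svc (pid $raid_pid) is listening on $sock"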
00:37:53.596 [2024-07-15 21:51:26.940532] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162920 ] 00:37:53.854 [2024-07-15 21:51:27.106772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.112 [2024-07-15 21:51:27.366575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.370 [2024-07-15 21:51:27.605441] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:54.629 21:51:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:37:54.629 malloc1 00:37:54.887 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:54.888 [2024-07-15 21:51:28.198956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:54.888 [2024-07-15 21:51:28.199194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.888 [2024-07-15 21:51:28.199253] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:37:54.888 [2024-07-15 21:51:28.199305] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.888 [2024-07-15 21:51:28.201709] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.888 [2024-07-15 21:51:28.201803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:54.888 pt1 00:37:54.888 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:54.888 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:54.888 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:37:54.888 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:37:54.888 
21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:54.888 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:54.888 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:54.888 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:54.888 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:37:55.147 malloc2 00:37:55.147 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:55.406 [2024-07-15 21:51:28.680966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:55.406 [2024-07-15 21:51:28.681186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:55.407 [2024-07-15 21:51:28.681242] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:37:55.407 [2024-07-15 21:51:28.681304] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:55.407 [2024-07-15 21:51:28.683359] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:55.407 [2024-07-15 21:51:28.683476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:55.407 pt2 00:37:55.407 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:55.407 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:55.407 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:37:55.666 [2024-07-15 21:51:28.892633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:55.666 [2024-07-15 21:51:28.894506] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:55.666 [2024-07-15 21:51:28.894766] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:37:55.666 [2024-07-15 21:51:28.894808] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:55.666 [2024-07-15 21:51:28.894968] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:37:55.666 [2024-07-15 21:51:28.895094] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:37:55.666 [2024-07-15 21:51:28.895127] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:37:55.666 [2024-07-15 21:51:28.895263] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:55.666 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:55.666 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:55.666 21:51:28 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:55.666 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:55.666 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:55.666 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:55.667 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:55.667 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:55.667 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:55.667 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:55.667 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:55.667 21:51:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:55.926 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:55.926 "name": "raid_bdev1", 00:37:55.926 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:37:55.926 "strip_size_kb": 0, 00:37:55.926 "state": "online", 00:37:55.926 "raid_level": "raid1", 00:37:55.926 "superblock": true, 00:37:55.926 "num_base_bdevs": 2, 00:37:55.926 "num_base_bdevs_discovered": 2, 00:37:55.926 "num_base_bdevs_operational": 2, 00:37:55.926 "base_bdevs_list": [ 00:37:55.926 { 00:37:55.926 "name": "pt1", 00:37:55.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:55.926 "is_configured": true, 00:37:55.926 "data_offset": 256, 00:37:55.926 "data_size": 7936 00:37:55.926 }, 00:37:55.926 { 00:37:55.926 "name": "pt2", 00:37:55.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:55.926 "is_configured": true, 00:37:55.926 "data_offset": 256, 00:37:55.926 "data_size": 7936 00:37:55.926 } 00:37:55.926 ] 00:37:55.926 }' 00:37:55.926 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:55.926 21:51:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:56.494 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:37:56.494 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:56.494 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:56.494 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:56.494 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:56.494 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:37:56.494 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:56.494 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:56.754 [2024-07-15 21:51:29.931040] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:56.754 
21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:56.754 "name": "raid_bdev1", 00:37:56.754 "aliases": [ 00:37:56.754 "b912bf76-9991-4d44-8e4b-8ff01d64d71b" 00:37:56.754 ], 00:37:56.754 "product_name": "Raid Volume", 00:37:56.754 "block_size": 4096, 00:37:56.754 "num_blocks": 7936, 00:37:56.754 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:37:56.754 "md_size": 32, 00:37:56.754 "md_interleave": false, 00:37:56.754 "dif_type": 0, 00:37:56.754 "assigned_rate_limits": { 00:37:56.754 "rw_ios_per_sec": 0, 00:37:56.754 "rw_mbytes_per_sec": 0, 00:37:56.754 "r_mbytes_per_sec": 0, 00:37:56.754 "w_mbytes_per_sec": 0 00:37:56.754 }, 00:37:56.754 "claimed": false, 00:37:56.754 "zoned": false, 00:37:56.754 "supported_io_types": { 00:37:56.754 "read": true, 00:37:56.754 "write": true, 00:37:56.754 "unmap": false, 00:37:56.754 "flush": false, 00:37:56.754 "reset": true, 00:37:56.754 "nvme_admin": false, 00:37:56.754 "nvme_io": false, 00:37:56.754 "nvme_io_md": false, 00:37:56.754 "write_zeroes": true, 00:37:56.754 "zcopy": false, 00:37:56.754 "get_zone_info": false, 00:37:56.754 "zone_management": false, 00:37:56.754 "zone_append": false, 00:37:56.754 "compare": false, 00:37:56.754 "compare_and_write": false, 00:37:56.754 "abort": false, 00:37:56.754 "seek_hole": false, 00:37:56.754 "seek_data": false, 00:37:56.754 "copy": false, 00:37:56.754 "nvme_iov_md": false 00:37:56.754 }, 00:37:56.754 "memory_domains": [ 00:37:56.754 { 00:37:56.754 "dma_device_id": "system", 00:37:56.754 "dma_device_type": 1 00:37:56.754 }, 00:37:56.754 { 00:37:56.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:56.754 "dma_device_type": 2 00:37:56.754 }, 00:37:56.754 { 00:37:56.754 "dma_device_id": "system", 00:37:56.754 "dma_device_type": 1 00:37:56.754 }, 00:37:56.754 { 00:37:56.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:56.754 "dma_device_type": 2 00:37:56.754 } 00:37:56.754 ], 00:37:56.754 "driver_specific": { 00:37:56.754 "raid": { 00:37:56.754 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:37:56.754 "strip_size_kb": 0, 00:37:56.754 "state": "online", 00:37:56.754 "raid_level": "raid1", 00:37:56.754 "superblock": true, 00:37:56.754 "num_base_bdevs": 2, 00:37:56.754 "num_base_bdevs_discovered": 2, 00:37:56.754 "num_base_bdevs_operational": 2, 00:37:56.754 "base_bdevs_list": [ 00:37:56.754 { 00:37:56.754 "name": "pt1", 00:37:56.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:56.754 "is_configured": true, 00:37:56.754 "data_offset": 256, 00:37:56.754 "data_size": 7936 00:37:56.754 }, 00:37:56.754 { 00:37:56.754 "name": "pt2", 00:37:56.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:56.754 "is_configured": true, 00:37:56.754 "data_offset": 256, 00:37:56.754 "data_size": 7936 00:37:56.754 } 00:37:56.754 ] 00:37:56.754 } 00:37:56.754 } 00:37:56.754 }' 00:37:56.754 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:56.754 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:56.754 pt2' 00:37:56.754 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:56.754 21:51:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:56.754 21:51:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:57.015 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:57.015 "name": "pt1", 00:37:57.015 "aliases": [ 00:37:57.015 "00000000-0000-0000-0000-000000000001" 00:37:57.015 ], 00:37:57.015 "product_name": "passthru", 00:37:57.015 "block_size": 4096, 00:37:57.015 "num_blocks": 8192, 00:37:57.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:57.015 "md_size": 32, 00:37:57.015 "md_interleave": false, 00:37:57.015 "dif_type": 0, 00:37:57.015 "assigned_rate_limits": { 00:37:57.015 "rw_ios_per_sec": 0, 00:37:57.015 "rw_mbytes_per_sec": 0, 00:37:57.015 "r_mbytes_per_sec": 0, 00:37:57.015 "w_mbytes_per_sec": 0 00:37:57.015 }, 00:37:57.015 "claimed": true, 00:37:57.015 "claim_type": "exclusive_write", 00:37:57.015 "zoned": false, 00:37:57.015 "supported_io_types": { 00:37:57.015 "read": true, 00:37:57.015 "write": true, 00:37:57.015 "unmap": true, 00:37:57.015 "flush": true, 00:37:57.015 "reset": true, 00:37:57.015 "nvme_admin": false, 00:37:57.015 "nvme_io": false, 00:37:57.015 "nvme_io_md": false, 00:37:57.015 "write_zeroes": true, 00:37:57.015 "zcopy": true, 00:37:57.015 "get_zone_info": false, 00:37:57.015 "zone_management": false, 00:37:57.015 "zone_append": false, 00:37:57.015 "compare": false, 00:37:57.015 "compare_and_write": false, 00:37:57.015 "abort": true, 00:37:57.015 "seek_hole": false, 00:37:57.015 "seek_data": false, 00:37:57.015 "copy": true, 00:37:57.015 "nvme_iov_md": false 00:37:57.015 }, 00:37:57.015 "memory_domains": [ 00:37:57.015 { 00:37:57.015 "dma_device_id": "system", 00:37:57.015 "dma_device_type": 1 00:37:57.015 }, 00:37:57.015 { 00:37:57.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:57.015 "dma_device_type": 2 00:37:57.015 } 00:37:57.015 ], 00:37:57.015 "driver_specific": { 00:37:57.015 "passthru": { 00:37:57.015 "name": "pt1", 00:37:57.015 "base_bdev_name": "malloc1" 00:37:57.015 } 00:37:57.015 } 00:37:57.015 }' 00:37:57.015 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:57.015 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:57.015 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:57.015 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:57.015 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:57.275 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:57.535 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:57.535 "name": "pt2", 00:37:57.535 "aliases": [ 00:37:57.535 "00000000-0000-0000-0000-000000000002" 00:37:57.535 ], 00:37:57.535 "product_name": "passthru", 00:37:57.535 "block_size": 4096, 00:37:57.535 "num_blocks": 8192, 00:37:57.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:57.535 "md_size": 32, 00:37:57.535 "md_interleave": false, 00:37:57.535 "dif_type": 0, 00:37:57.535 "assigned_rate_limits": { 00:37:57.535 "rw_ios_per_sec": 0, 00:37:57.535 "rw_mbytes_per_sec": 0, 00:37:57.535 "r_mbytes_per_sec": 0, 00:37:57.535 "w_mbytes_per_sec": 0 00:37:57.535 }, 00:37:57.535 "claimed": true, 00:37:57.535 "claim_type": "exclusive_write", 00:37:57.535 "zoned": false, 00:37:57.535 "supported_io_types": { 00:37:57.535 "read": true, 00:37:57.535 "write": true, 00:37:57.535 "unmap": true, 00:37:57.535 "flush": true, 00:37:57.535 "reset": true, 00:37:57.535 "nvme_admin": false, 00:37:57.535 "nvme_io": false, 00:37:57.535 "nvme_io_md": false, 00:37:57.535 "write_zeroes": true, 00:37:57.535 "zcopy": true, 00:37:57.535 "get_zone_info": false, 00:37:57.535 "zone_management": false, 00:37:57.535 "zone_append": false, 00:37:57.535 "compare": false, 00:37:57.535 "compare_and_write": false, 00:37:57.535 "abort": true, 00:37:57.535 "seek_hole": false, 00:37:57.535 "seek_data": false, 00:37:57.535 "copy": true, 00:37:57.535 "nvme_iov_md": false 00:37:57.535 }, 00:37:57.535 "memory_domains": [ 00:37:57.535 { 00:37:57.535 "dma_device_id": "system", 00:37:57.535 "dma_device_type": 1 00:37:57.535 }, 00:37:57.535 { 00:37:57.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:57.535 "dma_device_type": 2 00:37:57.535 } 00:37:57.535 ], 00:37:57.535 "driver_specific": { 00:37:57.535 "passthru": { 00:37:57.535 "name": "pt2", 00:37:57.535 "base_bdev_name": "malloc2" 00:37:57.535 } 00:37:57.535 } 00:37:57.535 }' 00:37:57.535 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:57.795 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:57.795 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:57.795 21:51:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:57.795 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:57.795 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:57.795 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:57.795 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:58.054 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:58.054 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:58.054 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:58.054 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:58.054 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:58.054 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:37:58.313 [2024-07-15 21:51:31.476376] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:58.313 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b912bf76-9991-4d44-8e4b-8ff01d64d71b 00:37:58.313 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z b912bf76-9991-4d44-8e4b-8ff01d64d71b ']' 00:37:58.313 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:58.572 [2024-07-15 21:51:31.719675] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:58.572 [2024-07-15 21:51:31.719780] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:58.572 [2024-07-15 21:51:31.719877] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:58.572 [2024-07-15 21:51:31.719960] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:58.572 [2024-07-15 21:51:31.719982] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:37:58.572 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:37:58.572 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.860 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:37:58.860 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:37:58.860 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:58.860 21:51:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:58.860 21:51:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:58.860 21:51:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:59.116 21:51:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:37:59.116 21:51:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:59.374 21:51:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:37:59.374 21:51:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:59.374 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:37:59.374 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:59.374 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:59.374 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:59.374 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:59.375 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:59.375 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:59.375 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:59.375 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:59.375 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:59.375 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:59.634 [2024-07-15 21:51:32.757821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:59.634 [2024-07-15 21:51:32.759796] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:59.634 [2024-07-15 21:51:32.759902] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:59.634 [2024-07-15 21:51:32.760029] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:59.634 [2024-07-15 21:51:32.760090] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:59.634 [2024-07-15 21:51:32.760121] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:37:59.634 request: 00:37:59.634 { 00:37:59.634 "name": "raid_bdev1", 00:37:59.634 "raid_level": "raid1", 00:37:59.634 "base_bdevs": [ 00:37:59.634 "malloc1", 00:37:59.634 "malloc2" 00:37:59.634 ], 00:37:59.634 "superblock": false, 00:37:59.634 "method": "bdev_raid_create", 00:37:59.634 "req_id": 1 00:37:59.634 } 00:37:59.634 Got JSON-RPC error response 00:37:59.634 response: 00:37:59.634 { 00:37:59.634 "code": -17, 00:37:59.634 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:59.634 } 00:37:59.634 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:37:59.634 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:59.634 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:59.634 21:51:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:59.634 21:51:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:37:59.634 21:51:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:59.893 [2024-07-15 21:51:33.205026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:59.893 [2024-07-15 21:51:33.205167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:59.893 [2024-07-15 21:51:33.205207] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:59.893 [2024-07-15 21:51:33.205246] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:59.893 [2024-07-15 21:51:33.207190] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:59.893 [2024-07-15 21:51:33.207287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:59.893 [2024-07-15 21:51:33.207448] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:59.893 [2024-07-15 21:51:33.207535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:59.893 pt1 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:59.893 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.151 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:00.151 "name": "raid_bdev1", 00:38:00.151 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:38:00.151 "strip_size_kb": 0, 00:38:00.151 "state": "configuring", 00:38:00.151 "raid_level": "raid1", 00:38:00.151 "superblock": true, 00:38:00.151 "num_base_bdevs": 2, 00:38:00.151 "num_base_bdevs_discovered": 1, 00:38:00.151 
"num_base_bdevs_operational": 2, 00:38:00.151 "base_bdevs_list": [ 00:38:00.151 { 00:38:00.151 "name": "pt1", 00:38:00.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:00.151 "is_configured": true, 00:38:00.151 "data_offset": 256, 00:38:00.151 "data_size": 7936 00:38:00.151 }, 00:38:00.151 { 00:38:00.151 "name": null, 00:38:00.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:00.151 "is_configured": false, 00:38:00.151 "data_offset": 256, 00:38:00.151 "data_size": 7936 00:38:00.151 } 00:38:00.151 ] 00:38:00.151 }' 00:38:00.151 21:51:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:00.151 21:51:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:00.717 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:38:00.717 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:38:00.717 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:00.717 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:00.975 [2024-07-15 21:51:34.287246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:00.976 [2024-07-15 21:51:34.287465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:00.976 [2024-07-15 21:51:34.287579] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:00.976 [2024-07-15 21:51:34.287651] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:00.976 [2024-07-15 21:51:34.288026] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:00.976 [2024-07-15 21:51:34.288149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:00.976 [2024-07-15 21:51:34.288328] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:00.976 [2024-07-15 21:51:34.288403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:00.976 [2024-07-15 21:51:34.288550] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:38:00.976 [2024-07-15 21:51:34.288602] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:00.976 [2024-07-15 21:51:34.288765] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:00.976 [2024-07-15 21:51:34.288945] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:38:00.976 [2024-07-15 21:51:34.289011] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:38:00.976 [2024-07-15 21:51:34.289189] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:00.976 pt2 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:00.976 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:01.235 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:01.235 "name": "raid_bdev1", 00:38:01.235 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:38:01.235 "strip_size_kb": 0, 00:38:01.235 "state": "online", 00:38:01.235 "raid_level": "raid1", 00:38:01.235 "superblock": true, 00:38:01.235 "num_base_bdevs": 2, 00:38:01.235 "num_base_bdevs_discovered": 2, 00:38:01.235 "num_base_bdevs_operational": 2, 00:38:01.235 "base_bdevs_list": [ 00:38:01.235 { 00:38:01.235 "name": "pt1", 00:38:01.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:01.235 "is_configured": true, 00:38:01.235 "data_offset": 256, 00:38:01.235 "data_size": 7936 00:38:01.235 }, 00:38:01.235 { 00:38:01.235 "name": "pt2", 00:38:01.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:01.235 "is_configured": true, 00:38:01.235 "data_offset": 256, 00:38:01.235 "data_size": 7936 00:38:01.235 } 00:38:01.235 ] 00:38:01.235 }' 00:38:01.235 21:51:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:01.235 21:51:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:38:02.173 [2024-07-15 21:51:35.413593] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:02.173 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:02.173 "name": "raid_bdev1", 00:38:02.173 "aliases": [ 00:38:02.173 "b912bf76-9991-4d44-8e4b-8ff01d64d71b" 00:38:02.173 ], 00:38:02.173 "product_name": "Raid Volume", 00:38:02.173 "block_size": 4096, 00:38:02.173 "num_blocks": 7936, 00:38:02.173 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:38:02.173 "md_size": 32, 00:38:02.173 "md_interleave": false, 00:38:02.173 "dif_type": 0, 00:38:02.173 "assigned_rate_limits": { 00:38:02.173 "rw_ios_per_sec": 0, 00:38:02.173 "rw_mbytes_per_sec": 0, 00:38:02.173 "r_mbytes_per_sec": 0, 00:38:02.173 "w_mbytes_per_sec": 0 00:38:02.173 }, 00:38:02.173 "claimed": false, 00:38:02.173 "zoned": false, 00:38:02.173 "supported_io_types": { 00:38:02.173 "read": true, 00:38:02.173 "write": true, 00:38:02.173 "unmap": false, 00:38:02.173 "flush": false, 00:38:02.173 "reset": true, 00:38:02.173 "nvme_admin": false, 00:38:02.173 "nvme_io": false, 00:38:02.173 "nvme_io_md": false, 00:38:02.173 "write_zeroes": true, 00:38:02.173 "zcopy": false, 00:38:02.173 "get_zone_info": false, 00:38:02.173 "zone_management": false, 00:38:02.173 "zone_append": false, 00:38:02.173 "compare": false, 00:38:02.173 "compare_and_write": false, 00:38:02.173 "abort": false, 00:38:02.173 "seek_hole": false, 00:38:02.173 "seek_data": false, 00:38:02.173 "copy": false, 00:38:02.173 "nvme_iov_md": false 00:38:02.173 }, 00:38:02.173 "memory_domains": [ 00:38:02.173 { 00:38:02.173 "dma_device_id": "system", 00:38:02.173 "dma_device_type": 1 00:38:02.173 }, 00:38:02.173 { 00:38:02.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.173 "dma_device_type": 2 00:38:02.173 }, 00:38:02.173 { 00:38:02.173 "dma_device_id": "system", 00:38:02.173 "dma_device_type": 1 00:38:02.173 }, 00:38:02.173 { 00:38:02.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.173 "dma_device_type": 2 00:38:02.173 } 00:38:02.173 ], 00:38:02.173 "driver_specific": { 00:38:02.173 "raid": { 00:38:02.173 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:38:02.173 "strip_size_kb": 0, 00:38:02.173 "state": "online", 00:38:02.173 "raid_level": "raid1", 00:38:02.173 "superblock": true, 00:38:02.173 "num_base_bdevs": 2, 00:38:02.173 "num_base_bdevs_discovered": 2, 00:38:02.173 "num_base_bdevs_operational": 2, 00:38:02.173 "base_bdevs_list": [ 00:38:02.173 { 00:38:02.173 "name": "pt1", 00:38:02.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:02.173 "is_configured": true, 00:38:02.173 "data_offset": 256, 00:38:02.173 "data_size": 7936 00:38:02.173 }, 00:38:02.173 { 00:38:02.174 "name": "pt2", 00:38:02.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:02.174 "is_configured": true, 00:38:02.174 "data_offset": 256, 00:38:02.174 "data_size": 7936 00:38:02.174 } 00:38:02.174 ] 00:38:02.174 } 00:38:02.174 } 00:38:02.174 }' 00:38:02.174 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:02.174 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:38:02.174 pt2' 00:38:02.174 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:02.174 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:38:02.174 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:02.432 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:02.432 "name": "pt1", 00:38:02.432 "aliases": [ 00:38:02.432 "00000000-0000-0000-0000-000000000001" 00:38:02.432 ], 00:38:02.432 "product_name": "passthru", 00:38:02.432 "block_size": 4096, 00:38:02.432 "num_blocks": 8192, 00:38:02.432 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:02.432 "md_size": 32, 00:38:02.432 "md_interleave": false, 00:38:02.432 "dif_type": 0, 00:38:02.432 "assigned_rate_limits": { 00:38:02.432 "rw_ios_per_sec": 0, 00:38:02.432 "rw_mbytes_per_sec": 0, 00:38:02.432 "r_mbytes_per_sec": 0, 00:38:02.432 "w_mbytes_per_sec": 0 00:38:02.432 }, 00:38:02.432 "claimed": true, 00:38:02.432 "claim_type": "exclusive_write", 00:38:02.432 "zoned": false, 00:38:02.432 "supported_io_types": { 00:38:02.432 "read": true, 00:38:02.432 "write": true, 00:38:02.432 "unmap": true, 00:38:02.432 "flush": true, 00:38:02.432 "reset": true, 00:38:02.432 "nvme_admin": false, 00:38:02.432 "nvme_io": false, 00:38:02.432 "nvme_io_md": false, 00:38:02.432 "write_zeroes": true, 00:38:02.432 "zcopy": true, 00:38:02.432 "get_zone_info": false, 00:38:02.432 "zone_management": false, 00:38:02.432 "zone_append": false, 00:38:02.432 "compare": false, 00:38:02.432 "compare_and_write": false, 00:38:02.432 "abort": true, 00:38:02.432 "seek_hole": false, 00:38:02.432 "seek_data": false, 00:38:02.432 "copy": true, 00:38:02.432 "nvme_iov_md": false 00:38:02.432 }, 00:38:02.432 "memory_domains": [ 00:38:02.432 { 00:38:02.432 "dma_device_id": "system", 00:38:02.432 "dma_device_type": 1 00:38:02.432 }, 00:38:02.432 { 00:38:02.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.432 "dma_device_type": 2 00:38:02.432 } 00:38:02.432 ], 00:38:02.432 "driver_specific": { 00:38:02.432 "passthru": { 00:38:02.432 "name": "pt1", 00:38:02.432 "base_bdev_name": "malloc1" 00:38:02.432 } 00:38:02.432 } 00:38:02.432 }' 00:38:02.432 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:02.432 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:02.691 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:02.691 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:02.691 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:02.691 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:02.691 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:02.691 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:02.691 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:38:02.691 21:51:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:02.691 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:02.950 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:02.950 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:38:02.950 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:38:02.950 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:02.950 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:02.950 "name": "pt2", 00:38:02.950 "aliases": [ 00:38:02.950 "00000000-0000-0000-0000-000000000002" 00:38:02.950 ], 00:38:02.950 "product_name": "passthru", 00:38:02.950 "block_size": 4096, 00:38:02.950 "num_blocks": 8192, 00:38:02.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:02.950 "md_size": 32, 00:38:02.950 "md_interleave": false, 00:38:02.950 "dif_type": 0, 00:38:02.950 "assigned_rate_limits": { 00:38:02.950 "rw_ios_per_sec": 0, 00:38:02.950 "rw_mbytes_per_sec": 0, 00:38:02.950 "r_mbytes_per_sec": 0, 00:38:02.950 "w_mbytes_per_sec": 0 00:38:02.950 }, 00:38:02.950 "claimed": true, 00:38:02.950 "claim_type": "exclusive_write", 00:38:02.950 "zoned": false, 00:38:02.951 "supported_io_types": { 00:38:02.951 "read": true, 00:38:02.951 "write": true, 00:38:02.951 "unmap": true, 00:38:02.951 "flush": true, 00:38:02.951 "reset": true, 00:38:02.951 "nvme_admin": false, 00:38:02.951 "nvme_io": false, 00:38:02.951 "nvme_io_md": false, 00:38:02.951 "write_zeroes": true, 00:38:02.951 "zcopy": true, 00:38:02.951 "get_zone_info": false, 00:38:02.951 "zone_management": false, 00:38:02.951 "zone_append": false, 00:38:02.951 "compare": false, 00:38:02.951 "compare_and_write": false, 00:38:02.951 "abort": true, 00:38:02.951 "seek_hole": false, 00:38:02.951 "seek_data": false, 00:38:02.951 "copy": true, 00:38:02.951 "nvme_iov_md": false 00:38:02.951 }, 00:38:02.951 "memory_domains": [ 00:38:02.951 { 00:38:02.951 "dma_device_id": "system", 00:38:02.951 "dma_device_type": 1 00:38:02.951 }, 00:38:02.951 { 00:38:02.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.951 "dma_device_type": 2 00:38:02.951 } 00:38:02.951 ], 00:38:02.951 "driver_specific": { 00:38:02.951 "passthru": { 00:38:02.951 "name": "pt2", 00:38:02.951 "base_bdev_name": "malloc2" 00:38:02.951 } 00:38:02.951 } 00:38:02.951 }' 00:38:02.951 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:03.209 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:03.209 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:03.209 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:03.210 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:03.210 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:03.210 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:03.468 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:03.468 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:38:03.468 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:03.468 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:03.468 21:51:36 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:03.468 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:03.468 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:38:03.726 [2024-07-15 21:51:36.943171] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:03.726 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' b912bf76-9991-4d44-8e4b-8ff01d64d71b '!=' b912bf76-9991-4d44-8e4b-8ff01d64d71b ']' 00:38:03.726 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:38:03.726 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:38:03.726 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:38:03.726 21:51:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:38:03.985 [2024-07-15 21:51:37.150683] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:03.985 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.243 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:04.243 "name": "raid_bdev1", 00:38:04.243 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:38:04.243 "strip_size_kb": 0, 00:38:04.243 "state": "online", 00:38:04.243 "raid_level": "raid1", 00:38:04.243 "superblock": true, 00:38:04.243 "num_base_bdevs": 2, 00:38:04.243 "num_base_bdevs_discovered": 1, 00:38:04.243 "num_base_bdevs_operational": 1, 00:38:04.243 "base_bdevs_list": [ 00:38:04.243 { 00:38:04.243 "name": null, 00:38:04.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.243 "is_configured": false, 00:38:04.243 "data_offset": 256, 00:38:04.243 "data_size": 7936 00:38:04.243 }, 
00:38:04.243 { 00:38:04.243 "name": "pt2", 00:38:04.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:04.243 "is_configured": true, 00:38:04.243 "data_offset": 256, 00:38:04.243 "data_size": 7936 00:38:04.243 } 00:38:04.243 ] 00:38:04.243 }' 00:38:04.243 21:51:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:04.243 21:51:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:04.809 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:05.067 [2024-07-15 21:51:38.304878] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:05.067 [2024-07-15 21:51:38.304986] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:05.067 [2024-07-15 21:51:38.305085] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:05.067 [2024-07-15 21:51:38.305171] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:05.067 [2024-07-15 21:51:38.305203] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:38:05.067 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:05.067 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:38:05.333 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:38:05.333 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:38:05.333 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:38:05.333 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:05.333 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:05.607 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:38:05.607 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:05.607 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:38:05.607 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:38:05.607 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:38:05.607 21:51:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:05.867 [2024-07-15 21:51:38.992793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:05.867 [2024-07-15 21:51:38.992965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:05.867 [2024-07-15 21:51:38.993029] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:38:05.867 [2024-07-15 21:51:38.993076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:38:05.867 [2024-07-15 21:51:38.995156] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:05.867 [2024-07-15 21:51:38.995262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:05.867 [2024-07-15 21:51:38.995428] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:05.867 [2024-07-15 21:51:38.995529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:05.867 [2024-07-15 21:51:38.995636] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:38:05.867 [2024-07-15 21:51:38.995670] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:05.867 [2024-07-15 21:51:38.995792] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:05.867 [2024-07-15 21:51:38.995947] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:38:05.867 [2024-07-15 21:51:38.995981] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:38:05.867 [2024-07-15 21:51:38.996091] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:05.867 pt2 00:38:05.867 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:05.867 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:05.868 "name": "raid_bdev1", 00:38:05.868 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:38:05.868 "strip_size_kb": 0, 00:38:05.868 "state": "online", 00:38:05.868 "raid_level": "raid1", 00:38:05.868 "superblock": true, 00:38:05.868 "num_base_bdevs": 2, 00:38:05.868 "num_base_bdevs_discovered": 1, 00:38:05.868 "num_base_bdevs_operational": 1, 00:38:05.868 "base_bdevs_list": [ 00:38:05.868 { 00:38:05.868 "name": null, 00:38:05.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:05.868 "is_configured": false, 00:38:05.868 "data_offset": 256, 00:38:05.868 "data_size": 7936 00:38:05.868 }, 
00:38:05.868 { 00:38:05.868 "name": "pt2", 00:38:05.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:05.868 "is_configured": true, 00:38:05.868 "data_offset": 256, 00:38:05.868 "data_size": 7936 00:38:05.868 } 00:38:05.868 ] 00:38:05.868 }' 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:05.868 21:51:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:06.806 21:51:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:06.806 [2024-07-15 21:51:40.066938] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:06.806 [2024-07-15 21:51:40.067041] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:06.806 [2024-07-15 21:51:40.067137] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:06.806 [2024-07-15 21:51:40.067208] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:06.806 [2024-07-15 21:51:40.067232] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:38:06.806 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:06.806 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:38:07.064 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:38:07.064 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:38:07.064 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:38:07.064 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:07.323 [2024-07-15 21:51:40.514165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:07.323 [2024-07-15 21:51:40.514314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:07.323 [2024-07-15 21:51:40.514365] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:07.323 [2024-07-15 21:51:40.514407] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:07.323 [2024-07-15 21:51:40.516460] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:07.323 [2024-07-15 21:51:40.516560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:07.323 [2024-07-15 21:51:40.516695] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:07.323 [2024-07-15 21:51:40.516769] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:07.323 [2024-07-15 21:51:40.516904] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:38:07.323 [2024-07-15 21:51:40.516942] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:07.323 [2024-07-15 21:51:40.516970] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:38:07.323 [2024-07-15 21:51:40.517115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:07.323 [2024-07-15 21:51:40.517213] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:38:07.323 [2024-07-15 21:51:40.517244] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:07.323 [2024-07-15 21:51:40.517383] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:07.323 [2024-07-15 21:51:40.517509] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:38:07.323 [2024-07-15 21:51:40.517545] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:38:07.323 [2024-07-15 21:51:40.517664] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:07.323 pt1 00:38:07.323 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:38:07.323 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:07.324 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:07.583 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:07.583 "name": "raid_bdev1", 00:38:07.583 "uuid": "b912bf76-9991-4d44-8e4b-8ff01d64d71b", 00:38:07.583 "strip_size_kb": 0, 00:38:07.583 "state": "online", 00:38:07.583 "raid_level": "raid1", 00:38:07.583 "superblock": true, 00:38:07.583 "num_base_bdevs": 2, 00:38:07.583 "num_base_bdevs_discovered": 1, 00:38:07.583 "num_base_bdevs_operational": 1, 00:38:07.583 "base_bdevs_list": [ 00:38:07.583 { 00:38:07.583 "name": null, 00:38:07.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:07.583 "is_configured": false, 00:38:07.583 "data_offset": 256, 00:38:07.583 "data_size": 7936 00:38:07.583 }, 00:38:07.583 { 00:38:07.583 "name": "pt2", 00:38:07.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:07.583 "is_configured": true, 00:38:07.583 "data_offset": 256, 
00:38:07.583 "data_size": 7936 00:38:07.583 } 00:38:07.583 ] 00:38:07.583 }' 00:38:07.583 21:51:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:07.583 21:51:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:08.154 21:51:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:38:08.154 21:51:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:08.414 21:51:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:38:08.414 21:51:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:38:08.414 21:51:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:08.673 [2024-07-15 21:51:41.852179] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:08.673 21:51:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' b912bf76-9991-4d44-8e4b-8ff01d64d71b '!=' b912bf76-9991-4d44-8e4b-8ff01d64d71b ']' 00:38:08.673 21:51:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 162920 00:38:08.673 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 162920 ']' 00:38:08.673 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 162920 00:38:08.674 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:38:08.674 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:08.674 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162920 00:38:08.674 killing process with pid 162920 00:38:08.674 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:08.674 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:08.674 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162920' 00:38:08.674 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 162920 00:38:08.674 21:51:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 162920 00:38:08.674 [2024-07-15 21:51:41.896137] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:08.674 [2024-07-15 21:51:41.896232] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:08.674 [2024-07-15 21:51:41.896314] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:08.674 [2024-07-15 21:51:41.896334] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:38:08.941 [2024-07-15 21:51:42.144835] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:10.320 ************************************ 00:38:10.320 END TEST raid_superblock_test_md_separate 00:38:10.320 ************************************ 00:38:10.320 
21:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:38:10.320 00:38:10.320 real 0m16.734s 00:38:10.320 user 0m30.031s 00:38:10.320 sys 0m2.230s 00:38:10.320 21:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:10.320 21:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:10.320 21:51:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:38:10.320 21:51:43 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:38:10.320 21:51:43 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:38:10.320 21:51:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:38:10.320 21:51:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:10.320 21:51:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:10.320 ************************************ 00:38:10.320 START TEST raid_rebuild_test_sb_md_separate 00:38:10.320 ************************************ 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:38:10.320 21:51:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=163454 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 163454 /var/tmp/spdk-raid.sock 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 163454 ']' 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:10.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:10.320 21:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:10.579 [2024-07-15 21:51:43.758149] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:38:10.579 [2024-07-15 21:51:43.758433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163454 ] 00:38:10.579 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:10.579 Zero copy mechanism will not be used. 
00:38:10.579 [2024-07-15 21:51:43.928079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.147 [2024-07-15 21:51:44.237911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.147 [2024-07-15 21:51:44.495926] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:11.404 21:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:11.404 21:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:38:11.404 21:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:11.404 21:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:38:11.662 BaseBdev1_malloc 00:38:11.662 21:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:11.920 [2024-07-15 21:51:45.119456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:11.920 [2024-07-15 21:51:45.119749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:11.920 [2024-07-15 21:51:45.119853] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:38:11.920 [2024-07-15 21:51:45.119912] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:11.920 [2024-07-15 21:51:45.122233] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:11.920 [2024-07-15 21:51:45.122391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:11.920 BaseBdev1 00:38:11.920 21:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:11.920 21:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:38:12.179 BaseBdev2_malloc 00:38:12.179 21:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:12.437 [2024-07-15 21:51:45.619319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:12.437 [2024-07-15 21:51:45.619542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:12.437 [2024-07-15 21:51:45.619602] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:38:12.437 [2024-07-15 21:51:45.619650] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:12.437 [2024-07-15 21:51:45.621704] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:12.437 [2024-07-15 21:51:45.621809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:12.437 BaseBdev2 00:38:12.437 21:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:38:12.695 spare_malloc 00:38:12.695 21:51:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:12.955 spare_delay 00:38:12.955 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:13.214 [2024-07-15 21:51:46.341571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:13.214 [2024-07-15 21:51:46.341755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:13.214 [2024-07-15 21:51:46.341815] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:38:13.214 [2024-07-15 21:51:46.341867] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:13.214 [2024-07-15 21:51:46.343882] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:13.214 [2024-07-15 21:51:46.343982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:13.214 spare 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:38:13.214 [2024-07-15 21:51:46.557330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:13.214 [2024-07-15 21:51:46.559365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:13.214 [2024-07-15 21:51:46.559644] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:38:13.214 [2024-07-15 21:51:46.559699] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:13.214 [2024-07-15 21:51:46.559899] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:38:13.214 [2024-07-15 21:51:46.560041] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:38:13.214 [2024-07-15 21:51:46.560084] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:38:13.214 [2024-07-15 21:51:46.560229] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:13.214 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:13.474 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:13.474 "name": "raid_bdev1", 00:38:13.474 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:13.474 "strip_size_kb": 0, 00:38:13.474 "state": "online", 00:38:13.474 "raid_level": "raid1", 00:38:13.474 "superblock": true, 00:38:13.474 "num_base_bdevs": 2, 00:38:13.474 "num_base_bdevs_discovered": 2, 00:38:13.474 "num_base_bdevs_operational": 2, 00:38:13.474 "base_bdevs_list": [ 00:38:13.474 { 00:38:13.474 "name": "BaseBdev1", 00:38:13.474 "uuid": "31f5b499-2b0f-56f2-bfb9-fde535b5f475", 00:38:13.474 "is_configured": true, 00:38:13.474 "data_offset": 256, 00:38:13.474 "data_size": 7936 00:38:13.474 }, 00:38:13.474 { 00:38:13.474 "name": "BaseBdev2", 00:38:13.474 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:13.474 "is_configured": true, 00:38:13.474 "data_offset": 256, 00:38:13.474 "data_size": 7936 00:38:13.474 } 00:38:13.474 ] 00:38:13.474 }' 00:38:13.474 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:13.474 21:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:14.411 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:38:14.411 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:14.411 [2024-07-15 21:51:47.631626] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:14.411 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:38:14.411 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:14.411 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:14.670 21:51:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:14.670 21:51:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:14.929 [2024-07-15 21:51:48.062657] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:38:14.929 /dev/nbd0 00:38:14.929 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:14.930 1+0 records in 00:38:14.930 1+0 records out 00:38:14.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311664 s, 13.1 MB/s 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:38:14.930 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:38:15.879 7936+0 records in 00:38:15.879 7936+0 records out 00:38:15.879 32505856 bytes (33 MB, 31 MiB) copied, 0.777077 s, 41.8 MB/s 00:38:15.879 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:38:15.879 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:15.879 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:15.879 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:15.879 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:38:15.879 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:15.879 21:51:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:15.879 [2024-07-15 21:51:49.161561] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:15.879 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:16.138 [2024-07-15 21:51:49.364838] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:16.138 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.396 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:16.396 "name": "raid_bdev1", 00:38:16.396 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:16.396 "strip_size_kb": 0, 00:38:16.396 "state": "online", 00:38:16.396 "raid_level": "raid1", 00:38:16.396 "superblock": true, 00:38:16.396 "num_base_bdevs": 2, 00:38:16.396 "num_base_bdevs_discovered": 1, 00:38:16.396 "num_base_bdevs_operational": 1, 00:38:16.396 "base_bdevs_list": [ 00:38:16.396 { 00:38:16.396 "name": null, 00:38:16.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:16.396 "is_configured": false, 00:38:16.396 "data_offset": 256, 00:38:16.396 "data_size": 7936 00:38:16.396 }, 00:38:16.396 { 00:38:16.396 "name": "BaseBdev2", 00:38:16.396 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:16.396 "is_configured": true, 00:38:16.396 "data_offset": 256, 00:38:16.396 "data_size": 7936 00:38:16.396 } 00:38:16.396 ] 00:38:16.397 }' 00:38:16.397 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:16.397 21:51:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:16.970 21:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:17.229 [2024-07-15 21:51:50.411020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:17.229 [2024-07-15 21:51:50.427560] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ffd0 00:38:17.229 [2024-07-15 21:51:50.429542] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:17.229 21:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:38:18.183 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:18.183 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:18.183 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:18.183 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:18.183 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:18.183 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:18.183 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:18.443 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:18.443 "name": "raid_bdev1", 00:38:18.443 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:18.443 "strip_size_kb": 0, 
00:38:18.443 "state": "online", 00:38:18.443 "raid_level": "raid1", 00:38:18.443 "superblock": true, 00:38:18.443 "num_base_bdevs": 2, 00:38:18.443 "num_base_bdevs_discovered": 2, 00:38:18.443 "num_base_bdevs_operational": 2, 00:38:18.443 "process": { 00:38:18.443 "type": "rebuild", 00:38:18.443 "target": "spare", 00:38:18.443 "progress": { 00:38:18.443 "blocks": 3072, 00:38:18.443 "percent": 38 00:38:18.443 } 00:38:18.443 }, 00:38:18.443 "base_bdevs_list": [ 00:38:18.443 { 00:38:18.443 "name": "spare", 00:38:18.443 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:18.443 "is_configured": true, 00:38:18.443 "data_offset": 256, 00:38:18.443 "data_size": 7936 00:38:18.443 }, 00:38:18.443 { 00:38:18.443 "name": "BaseBdev2", 00:38:18.443 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:18.443 "is_configured": true, 00:38:18.443 "data_offset": 256, 00:38:18.443 "data_size": 7936 00:38:18.443 } 00:38:18.443 ] 00:38:18.443 }' 00:38:18.443 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:18.443 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:18.443 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:18.443 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:18.443 21:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:18.703 [2024-07-15 21:51:51.989490] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:18.703 [2024-07-15 21:51:52.037322] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:18.703 [2024-07-15 21:51:52.037410] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:18.703 [2024-07-15 21:51:52.037427] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:18.703 [2024-07-15 21:51:52.037436] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:18.962 21:51:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:18.962 "name": "raid_bdev1", 00:38:18.962 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:18.962 "strip_size_kb": 0, 00:38:18.962 "state": "online", 00:38:18.962 "raid_level": "raid1", 00:38:18.962 "superblock": true, 00:38:18.962 "num_base_bdevs": 2, 00:38:18.962 "num_base_bdevs_discovered": 1, 00:38:18.962 "num_base_bdevs_operational": 1, 00:38:18.962 "base_bdevs_list": [ 00:38:18.962 { 00:38:18.962 "name": null, 00:38:18.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:18.962 "is_configured": false, 00:38:18.962 "data_offset": 256, 00:38:18.962 "data_size": 7936 00:38:18.962 }, 00:38:18.962 { 00:38:18.962 "name": "BaseBdev2", 00:38:18.962 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:18.962 "is_configured": true, 00:38:18.962 "data_offset": 256, 00:38:18.962 "data_size": 7936 00:38:18.962 } 00:38:18.962 ] 00:38:18.962 }' 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:18.962 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:19.899 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:19.899 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:19.899 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:19.899 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:19.899 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:19.899 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:19.899 21:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:19.899 21:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:19.899 "name": "raid_bdev1", 00:38:19.899 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:19.899 "strip_size_kb": 0, 00:38:19.899 "state": "online", 00:38:19.899 "raid_level": "raid1", 00:38:19.899 "superblock": true, 00:38:19.899 "num_base_bdevs": 2, 00:38:19.899 "num_base_bdevs_discovered": 1, 00:38:19.899 "num_base_bdevs_operational": 1, 00:38:19.899 "base_bdevs_list": [ 00:38:19.899 { 00:38:19.899 "name": null, 00:38:19.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:19.899 "is_configured": false, 00:38:19.899 "data_offset": 256, 00:38:19.899 "data_size": 7936 00:38:19.899 }, 00:38:19.899 { 00:38:19.899 "name": "BaseBdev2", 00:38:19.899 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:19.899 "is_configured": true, 00:38:19.899 "data_offset": 256, 00:38:19.899 "data_size": 7936 00:38:19.899 } 00:38:19.899 ] 00:38:19.899 }' 00:38:19.899 21:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
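(For context: the setup traced above can be reproduced by hand with the same rpc.py calls. The sketch below is illustrative only, not the test script itself; the shell variables and the trailing '.state' projection are additions, every command and flag is copied from the trace.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 32 MiB malloc base devices with 4096-byte blocks and separate metadata (-m 32),
  # each wrapped in a passthru bdev, as in bdev_raid.sh@600-602
  $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc
  $rpc -s $sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc
  $rpc -s $sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  # raid1 with an on-disk superblock (-s), as in bdev_raid.sh@611
  $rpc -s $sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  # state checks mirror verify_raid_bdev_state: dump the bdev and filter with jq
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'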
00:38:19.899 21:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:19.899 21:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:20.158 21:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:20.158 21:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:20.158 [2024-07-15 21:51:53.520467] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:20.158 [2024-07-15 21:51:53.536323] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:38:20.416 [2024-07-15 21:51:53.538190] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:20.416 21:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:38:21.350 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:21.350 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:21.350 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:21.350 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:21.350 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:21.350 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:21.350 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:21.608 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:21.608 "name": "raid_bdev1", 00:38:21.608 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:21.608 "strip_size_kb": 0, 00:38:21.608 "state": "online", 00:38:21.608 "raid_level": "raid1", 00:38:21.608 "superblock": true, 00:38:21.608 "num_base_bdevs": 2, 00:38:21.608 "num_base_bdevs_discovered": 2, 00:38:21.608 "num_base_bdevs_operational": 2, 00:38:21.608 "process": { 00:38:21.608 "type": "rebuild", 00:38:21.608 "target": "spare", 00:38:21.608 "progress": { 00:38:21.608 "blocks": 3072, 00:38:21.608 "percent": 38 00:38:21.608 } 00:38:21.608 }, 00:38:21.609 "base_bdevs_list": [ 00:38:21.609 { 00:38:21.609 "name": "spare", 00:38:21.609 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:21.609 "is_configured": true, 00:38:21.609 "data_offset": 256, 00:38:21.609 "data_size": 7936 00:38:21.609 }, 00:38:21.609 { 00:38:21.609 "name": "BaseBdev2", 00:38:21.609 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:21.609 "is_configured": true, 00:38:21.609 "data_offset": 256, 00:38:21.609 "data_size": 7936 00:38:21.609 } 00:38:21.609 ] 00:38:21.609 }' 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:38:21.609 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1367 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:21.609 21:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:21.868 21:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:21.868 "name": "raid_bdev1", 00:38:21.868 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:21.868 "strip_size_kb": 0, 00:38:21.868 "state": "online", 00:38:21.868 "raid_level": "raid1", 00:38:21.868 "superblock": true, 00:38:21.868 "num_base_bdevs": 2, 00:38:21.868 "num_base_bdevs_discovered": 2, 00:38:21.868 "num_base_bdevs_operational": 2, 00:38:21.868 "process": { 00:38:21.868 "type": "rebuild", 00:38:21.868 "target": "spare", 00:38:21.868 "progress": { 00:38:21.868 "blocks": 3840, 00:38:21.868 "percent": 48 00:38:21.868 } 00:38:21.868 }, 00:38:21.868 "base_bdevs_list": [ 00:38:21.868 { 00:38:21.868 "name": "spare", 00:38:21.868 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:21.868 "is_configured": true, 00:38:21.868 "data_offset": 256, 00:38:21.868 "data_size": 7936 00:38:21.868 }, 00:38:21.868 { 00:38:21.868 "name": "BaseBdev2", 00:38:21.868 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:21.868 "is_configured": true, 00:38:21.868 "data_offset": 256, 00:38:21.868 "data_size": 7936 00:38:21.868 } 00:38:21.868 ] 00:38:21.868 }' 00:38:21.868 21:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:21.868 21:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:21.868 21:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:38:21.868 21:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:21.868 21:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:23.259 "name": "raid_bdev1", 00:38:23.259 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:23.259 "strip_size_kb": 0, 00:38:23.259 "state": "online", 00:38:23.259 "raid_level": "raid1", 00:38:23.259 "superblock": true, 00:38:23.259 "num_base_bdevs": 2, 00:38:23.259 "num_base_bdevs_discovered": 2, 00:38:23.259 "num_base_bdevs_operational": 2, 00:38:23.259 "process": { 00:38:23.259 "type": "rebuild", 00:38:23.259 "target": "spare", 00:38:23.259 "progress": { 00:38:23.259 "blocks": 7424, 00:38:23.259 "percent": 93 00:38:23.259 } 00:38:23.259 }, 00:38:23.259 "base_bdevs_list": [ 00:38:23.259 { 00:38:23.259 "name": "spare", 00:38:23.259 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:23.259 "is_configured": true, 00:38:23.259 "data_offset": 256, 00:38:23.259 "data_size": 7936 00:38:23.259 }, 00:38:23.259 { 00:38:23.259 "name": "BaseBdev2", 00:38:23.259 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:23.259 "is_configured": true, 00:38:23.259 "data_offset": 256, 00:38:23.259 "data_size": 7936 00:38:23.259 } 00:38:23.259 ] 00:38:23.259 }' 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:23.259 21:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:23.517 [2024-07-15 21:51:56.652840] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:23.517 [2024-07-15 21:51:56.652926] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:23.517 [2024-07-15 21:51:56.653091] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:24.460 21:51:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:24.460 "name": "raid_bdev1", 00:38:24.460 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:24.460 "strip_size_kb": 0, 00:38:24.460 "state": "online", 00:38:24.460 "raid_level": "raid1", 00:38:24.460 "superblock": true, 00:38:24.460 "num_base_bdevs": 2, 00:38:24.460 "num_base_bdevs_discovered": 2, 00:38:24.460 "num_base_bdevs_operational": 2, 00:38:24.460 "base_bdevs_list": [ 00:38:24.460 { 00:38:24.460 "name": "spare", 00:38:24.460 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:24.460 "is_configured": true, 00:38:24.460 "data_offset": 256, 00:38:24.460 "data_size": 7936 00:38:24.460 }, 00:38:24.460 { 00:38:24.460 "name": "BaseBdev2", 00:38:24.460 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:24.460 "is_configured": true, 00:38:24.460 "data_offset": 256, 00:38:24.460 "data_size": 7936 00:38:24.460 } 00:38:24.460 ] 00:38:24.460 }' 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:24.460 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:24.719 21:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:24.978 "name": "raid_bdev1", 00:38:24.978 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:24.978 "strip_size_kb": 0, 00:38:24.978 "state": "online", 00:38:24.978 "raid_level": "raid1", 00:38:24.978 "superblock": true, 00:38:24.978 "num_base_bdevs": 2, 00:38:24.978 "num_base_bdevs_discovered": 2, 00:38:24.978 "num_base_bdevs_operational": 2, 00:38:24.978 "base_bdevs_list": [ 00:38:24.978 { 00:38:24.978 "name": "spare", 00:38:24.978 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:24.978 "is_configured": true, 00:38:24.978 "data_offset": 256, 00:38:24.978 "data_size": 7936 00:38:24.978 }, 00:38:24.978 { 00:38:24.978 "name": "BaseBdev2", 00:38:24.978 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:24.978 "is_configured": true, 00:38:24.978 "data_offset": 256, 00:38:24.978 "data_size": 7936 00:38:24.978 } 00:38:24.978 ] 00:38:24.978 }' 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:24.978 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:25.237 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:25.237 "name": "raid_bdev1", 00:38:25.237 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:25.237 "strip_size_kb": 0, 00:38:25.237 "state": "online", 00:38:25.237 "raid_level": "raid1", 00:38:25.237 "superblock": true, 00:38:25.237 "num_base_bdevs": 2, 00:38:25.237 "num_base_bdevs_discovered": 2, 00:38:25.237 "num_base_bdevs_operational": 2, 00:38:25.237 "base_bdevs_list": 
[ 00:38:25.237 { 00:38:25.237 "name": "spare", 00:38:25.237 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:25.237 "is_configured": true, 00:38:25.237 "data_offset": 256, 00:38:25.237 "data_size": 7936 00:38:25.237 }, 00:38:25.237 { 00:38:25.237 "name": "BaseBdev2", 00:38:25.237 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:25.237 "is_configured": true, 00:38:25.237 "data_offset": 256, 00:38:25.237 "data_size": 7936 00:38:25.237 } 00:38:25.237 ] 00:38:25.237 }' 00:38:25.237 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:25.237 21:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:26.170 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:26.170 [2024-07-15 21:51:59.392268] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:26.170 [2024-07-15 21:51:59.392321] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:26.170 [2024-07-15 21:51:59.392423] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:26.170 [2024-07-15 21:51:59.392495] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:26.170 [2024-07-15 21:51:59.392505] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:38:26.170 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:26.170 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:26.428 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:26.686 /dev/nbd0 00:38:26.686 21:51:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:26.686 1+0 records in 00:38:26.686 1+0 records out 00:38:26.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189833 s, 21.6 MB/s 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:26.686 21:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:38:26.943 /dev/nbd1 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:38:26.943 21:52:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:26.943 1+0 records in 00:38:26.943 1+0 records out 00:38:26.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431932 s, 9.5 MB/s 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:26.943 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:27.200 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:38:27.458 21:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:27.716 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:27.975 [2024-07-15 21:52:01.245291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:27.975 [2024-07-15 21:52:01.245405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.975 [2024-07-15 21:52:01.245481] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:27.975 [2024-07-15 21:52:01.245501] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:27.975 [2024-07-15 21:52:01.247513] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.975 [2024-07-15 21:52:01.247563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:27.975 [2024-07-15 21:52:01.247674] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:27.975 [2024-07-15 21:52:01.247727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:27.975 [2024-07-15 21:52:01.247827] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:27.975 spare 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:27.975 21:52:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.975 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:27.975 [2024-07-15 21:52:01.347715] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:38:27.975 [2024-07-15 21:52:01.347754] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:27.975 [2024-07-15 21:52:01.347926] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:38:27.975 [2024-07-15 21:52:01.348088] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:38:27.975 [2024-07-15 21:52:01.348111] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:38:27.975 [2024-07-15 21:52:01.348218] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:28.234 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:28.234 "name": "raid_bdev1", 00:38:28.234 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:28.234 "strip_size_kb": 0, 00:38:28.234 "state": "online", 00:38:28.234 "raid_level": "raid1", 00:38:28.234 "superblock": true, 00:38:28.234 "num_base_bdevs": 2, 00:38:28.234 "num_base_bdevs_discovered": 2, 00:38:28.234 "num_base_bdevs_operational": 2, 00:38:28.234 "base_bdevs_list": [ 00:38:28.234 { 00:38:28.234 "name": "spare", 00:38:28.234 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:28.234 "is_configured": true, 00:38:28.234 "data_offset": 256, 00:38:28.234 "data_size": 7936 00:38:28.234 }, 00:38:28.234 { 00:38:28.234 "name": "BaseBdev2", 00:38:28.234 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:28.234 "is_configured": true, 00:38:28.234 "data_offset": 256, 00:38:28.234 "data_size": 7936 00:38:28.234 } 00:38:28.234 ] 00:38:28.234 }' 00:38:28.234 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:28.234 21:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:28.803 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:28.803 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:28.803 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:28.803 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:28.803 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:28.803 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:28.803 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:38:29.075 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:29.075 "name": "raid_bdev1", 00:38:29.075 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:29.075 "strip_size_kb": 0, 00:38:29.075 "state": "online", 00:38:29.075 "raid_level": "raid1", 00:38:29.075 "superblock": true, 00:38:29.075 "num_base_bdevs": 2, 00:38:29.075 "num_base_bdevs_discovered": 2, 00:38:29.075 "num_base_bdevs_operational": 2, 00:38:29.075 "base_bdevs_list": [ 00:38:29.075 { 00:38:29.075 "name": "spare", 00:38:29.075 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:29.075 "is_configured": true, 00:38:29.075 "data_offset": 256, 00:38:29.075 "data_size": 7936 00:38:29.075 }, 00:38:29.075 { 00:38:29.075 "name": "BaseBdev2", 00:38:29.075 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:29.075 "is_configured": true, 00:38:29.075 "data_offset": 256, 00:38:29.075 "data_size": 7936 00:38:29.075 } 00:38:29.075 ] 00:38:29.075 }' 00:38:29.075 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:29.075 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:29.075 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:29.334 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:29.334 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:29.334 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:29.595 [2024-07-15 21:52:02.918495] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:29.595 21:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:29.854 21:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:29.854 "name": "raid_bdev1", 00:38:29.854 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:29.854 "strip_size_kb": 0, 00:38:29.854 "state": "online", 00:38:29.854 "raid_level": "raid1", 00:38:29.854 "superblock": true, 00:38:29.854 "num_base_bdevs": 2, 00:38:29.854 "num_base_bdevs_discovered": 1, 00:38:29.854 "num_base_bdevs_operational": 1, 00:38:29.854 "base_bdevs_list": [ 00:38:29.854 { 00:38:29.854 "name": null, 00:38:29.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:29.854 "is_configured": false, 00:38:29.854 "data_offset": 256, 00:38:29.854 "data_size": 7936 00:38:29.854 }, 00:38:29.854 { 00:38:29.854 "name": "BaseBdev2", 00:38:29.854 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:29.854 "is_configured": true, 00:38:29.854 "data_offset": 256, 00:38:29.854 "data_size": 7936 00:38:29.854 } 00:38:29.854 ] 00:38:29.854 }' 00:38:29.854 21:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:29.854 21:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:30.791 21:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:30.791 [2024-07-15 21:52:04.036537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:30.791 [2024-07-15 21:52:04.036809] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:30.791 [2024-07-15 21:52:04.036856] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
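The xtrace above drives the whole degrade/re-add cycle over the test RPC socket. A condensed, unofficial sketch of that flow — the socket path, RPC method names, and bdev names are copied from the trace, but the helper below is not the verbatim bdev_raid.sh code — looks like this:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Drop the 'spare' base bdev, then confirm raid_bdev1 stays online in
    # degraded mode with a single operational member.
    rpc bdev_raid_remove_base_bdev spare
    rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "raid_bdev1")
        | .state == "online" and .num_base_bdevs_operational == 1'

    # Re-adding the same bdev lets examine spot the older superblock
    # (seq_number 4 < 5) and kicks off a rebuild process targeting it.
    rpc bdev_raid_add_base_bdev raid_bdev1 spare
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")
        | .process.type // "none"'     # "rebuild" while running, "none" afterwards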
00:38:30.791 [2024-07-15 21:52:04.036931] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:30.792 [2024-07-15 21:52:04.050526] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:38:30.792 [2024-07-15 21:52:04.052289] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:30.792 21:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:38:31.729 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:31.729 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:31.729 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:31.729 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:31.729 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:31.729 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:31.729 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:31.987 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:31.987 "name": "raid_bdev1", 00:38:31.987 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:31.987 "strip_size_kb": 0, 00:38:31.987 "state": "online", 00:38:31.987 "raid_level": "raid1", 00:38:31.987 "superblock": true, 00:38:31.987 "num_base_bdevs": 2, 00:38:31.987 "num_base_bdevs_discovered": 2, 00:38:31.987 "num_base_bdevs_operational": 2, 00:38:31.987 "process": { 00:38:31.987 "type": "rebuild", 00:38:31.987 "target": "spare", 00:38:31.987 "progress": { 00:38:31.987 "blocks": 3072, 00:38:31.987 "percent": 38 00:38:31.987 } 00:38:31.987 }, 00:38:31.987 "base_bdevs_list": [ 00:38:31.987 { 00:38:31.987 "name": "spare", 00:38:31.987 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:31.987 "is_configured": true, 00:38:31.987 "data_offset": 256, 00:38:31.988 "data_size": 7936 00:38:31.988 }, 00:38:31.988 { 00:38:31.988 "name": "BaseBdev2", 00:38:31.988 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:31.988 "is_configured": true, 00:38:31.988 "data_offset": 256, 00:38:31.988 "data_size": 7936 00:38:31.988 } 00:38:31.988 ] 00:38:31.988 }' 00:38:31.988 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:31.988 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:31.988 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:32.246 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:32.246 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:32.246 [2024-07-15 21:52:05.592145] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:32.506 [2024-07-15 21:52:05.659935] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:38:32.506 [2024-07-15 21:52:05.660114] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:32.506 [2024-07-15 21:52:05.660155] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:32.506 [2024-07-15 21:52:05.660186] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:32.506 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.765 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:32.765 "name": "raid_bdev1", 00:38:32.765 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:32.765 "strip_size_kb": 0, 00:38:32.765 "state": "online", 00:38:32.765 "raid_level": "raid1", 00:38:32.765 "superblock": true, 00:38:32.765 "num_base_bdevs": 2, 00:38:32.765 "num_base_bdevs_discovered": 1, 00:38:32.765 "num_base_bdevs_operational": 1, 00:38:32.765 "base_bdevs_list": [ 00:38:32.765 { 00:38:32.765 "name": null, 00:38:32.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:32.765 "is_configured": false, 00:38:32.765 "data_offset": 256, 00:38:32.765 "data_size": 7936 00:38:32.765 }, 00:38:32.765 { 00:38:32.765 "name": "BaseBdev2", 00:38:32.765 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:32.765 "is_configured": true, 00:38:32.765 "data_offset": 256, 00:38:32.765 "data_size": 7936 00:38:32.765 } 00:38:32.765 ] 00:38:32.765 }' 00:38:32.765 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:32.765 21:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:33.333 21:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:33.592 [2024-07-15 21:52:06.810923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:33.592 [2024-07-15 21:52:06.811101] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:33.592 [2024-07-15 21:52:06.811176] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:38:33.592 [2024-07-15 21:52:06.811255] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:33.592 [2024-07-15 21:52:06.811611] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:33.592 [2024-07-15 21:52:06.811688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:33.592 [2024-07-15 21:52:06.811853] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:33.592 [2024-07-15 21:52:06.811895] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:33.592 [2024-07-15 21:52:06.811931] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:33.592 [2024-07-15 21:52:06.812005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:33.592 [2024-07-15 21:52:06.827333] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:38:33.592 spare 00:38:33.592 [2024-07-15 21:52:06.829064] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:33.592 21:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:38:34.529 21:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:34.529 21:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:34.529 21:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:34.529 21:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:34.529 21:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:34.529 21:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:34.529 21:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:34.789 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:34.789 "name": "raid_bdev1", 00:38:34.789 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:34.789 "strip_size_kb": 0, 00:38:34.789 "state": "online", 00:38:34.789 "raid_level": "raid1", 00:38:34.789 "superblock": true, 00:38:34.789 "num_base_bdevs": 2, 00:38:34.789 "num_base_bdevs_discovered": 2, 00:38:34.789 "num_base_bdevs_operational": 2, 00:38:34.789 "process": { 00:38:34.789 "type": "rebuild", 00:38:34.789 "target": "spare", 00:38:34.789 "progress": { 00:38:34.789 "blocks": 3072, 00:38:34.789 "percent": 38 00:38:34.789 } 00:38:34.789 }, 00:38:34.789 "base_bdevs_list": [ 00:38:34.789 { 00:38:34.789 "name": "spare", 00:38:34.789 "uuid": "d8cc2a14-4158-5e5d-9bf9-96471ed62307", 00:38:34.789 "is_configured": true, 00:38:34.789 "data_offset": 256, 00:38:34.789 "data_size": 7936 00:38:34.789 }, 00:38:34.789 { 00:38:34.789 "name": "BaseBdev2", 00:38:34.789 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:34.789 "is_configured": true, 00:38:34.789 
"data_offset": 256, 00:38:34.789 "data_size": 7936 00:38:34.789 } 00:38:34.789 ] 00:38:34.789 }' 00:38:34.789 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:34.789 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:34.789 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:34.789 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:34.789 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:35.049 [2024-07-15 21:52:08.340900] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:35.308 [2024-07-15 21:52:08.436660] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:35.308 [2024-07-15 21:52:08.436831] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:35.308 [2024-07-15 21:52:08.436870] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:35.308 [2024-07-15 21:52:08.436918] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:35.308 "name": "raid_bdev1", 00:38:35.308 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:35.308 "strip_size_kb": 0, 00:38:35.308 "state": "online", 00:38:35.308 "raid_level": "raid1", 00:38:35.308 "superblock": true, 00:38:35.308 "num_base_bdevs": 2, 00:38:35.308 "num_base_bdevs_discovered": 1, 00:38:35.308 "num_base_bdevs_operational": 1, 00:38:35.308 "base_bdevs_list": [ 00:38:35.308 { 00:38:35.308 "name": null, 00:38:35.308 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:38:35.308 "is_configured": false, 00:38:35.308 "data_offset": 256, 00:38:35.308 "data_size": 7936 00:38:35.308 }, 00:38:35.308 { 00:38:35.308 "name": "BaseBdev2", 00:38:35.308 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:35.308 "is_configured": true, 00:38:35.308 "data_offset": 256, 00:38:35.308 "data_size": 7936 00:38:35.308 } 00:38:35.308 ] 00:38:35.308 }' 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:35.308 21:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:36.245 "name": "raid_bdev1", 00:38:36.245 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:36.245 "strip_size_kb": 0, 00:38:36.245 "state": "online", 00:38:36.245 "raid_level": "raid1", 00:38:36.245 "superblock": true, 00:38:36.245 "num_base_bdevs": 2, 00:38:36.245 "num_base_bdevs_discovered": 1, 00:38:36.245 "num_base_bdevs_operational": 1, 00:38:36.245 "base_bdevs_list": [ 00:38:36.245 { 00:38:36.245 "name": null, 00:38:36.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:36.245 "is_configured": false, 00:38:36.245 "data_offset": 256, 00:38:36.245 "data_size": 7936 00:38:36.245 }, 00:38:36.245 { 00:38:36.245 "name": "BaseBdev2", 00:38:36.245 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:36.245 "is_configured": true, 00:38:36.245 "data_offset": 256, 00:38:36.245 "data_size": 7936 00:38:36.245 } 00:38:36.245 ] 00:38:36.245 }' 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:36.245 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:36.504 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:36.504 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:38:36.764 21:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:36.764 [2024-07-15 21:52:10.093927] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:38:36.764 [2024-07-15 21:52:10.094123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:36.764 [2024-07-15 21:52:10.094181] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:38:36.764 [2024-07-15 21:52:10.094228] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:36.764 [2024-07-15 21:52:10.094505] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:36.764 [2024-07-15 21:52:10.094579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:36.764 [2024-07-15 21:52:10.094741] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:36.764 [2024-07-15 21:52:10.094783] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:36.764 [2024-07-15 21:52:10.094808] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:36.764 BaseBdev1 00:38:36.764 21:52:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:38.170 "name": "raid_bdev1", 00:38:38.170 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:38.170 "strip_size_kb": 0, 00:38:38.170 "state": "online", 00:38:38.170 "raid_level": "raid1", 00:38:38.170 "superblock": true, 00:38:38.170 "num_base_bdevs": 2, 00:38:38.170 "num_base_bdevs_discovered": 1, 00:38:38.170 "num_base_bdevs_operational": 1, 00:38:38.170 "base_bdevs_list": [ 00:38:38.170 { 00:38:38.170 "name": null, 00:38:38.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:38.170 "is_configured": false, 00:38:38.170 "data_offset": 256, 00:38:38.170 "data_size": 7936 00:38:38.170 }, 00:38:38.170 { 00:38:38.170 "name": 
"BaseBdev2", 00:38:38.170 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:38.170 "is_configured": true, 00:38:38.170 "data_offset": 256, 00:38:38.170 "data_size": 7936 00:38:38.170 } 00:38:38.170 ] 00:38:38.170 }' 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:38.170 21:52:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:38.737 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:38.737 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:38.737 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:38.737 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:38.737 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:38.737 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:38.737 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:39.001 "name": "raid_bdev1", 00:38:39.001 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:39.001 "strip_size_kb": 0, 00:38:39.001 "state": "online", 00:38:39.001 "raid_level": "raid1", 00:38:39.001 "superblock": true, 00:38:39.001 "num_base_bdevs": 2, 00:38:39.001 "num_base_bdevs_discovered": 1, 00:38:39.001 "num_base_bdevs_operational": 1, 00:38:39.001 "base_bdevs_list": [ 00:38:39.001 { 00:38:39.001 "name": null, 00:38:39.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:39.001 "is_configured": false, 00:38:39.001 "data_offset": 256, 00:38:39.001 "data_size": 7936 00:38:39.001 }, 00:38:39.001 { 00:38:39.001 "name": "BaseBdev2", 00:38:39.001 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:39.001 "is_configured": true, 00:38:39.001 "data_offset": 256, 00:38:39.001 "data_size": 7936 00:38:39.001 } 00:38:39.001 ] 00:38:39.001 }' 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:39.001 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.002 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:39.002 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.002 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:39.002 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:39.002 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:39.260 [2024-07-15 21:52:12.545679] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:39.260 [2024-07-15 21:52:12.545968] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:39.260 [2024-07-15 21:52:12.546018] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:39.260 request: 00:38:39.260 { 00:38:39.260 "base_bdev": "BaseBdev1", 00:38:39.260 "raid_bdev": "raid_bdev1", 00:38:39.260 "method": "bdev_raid_add_base_bdev", 00:38:39.260 "req_id": 1 00:38:39.260 } 00:38:39.260 Got JSON-RPC error response 00:38:39.260 response: 00:38:39.260 { 00:38:39.260 "code": -22, 00:38:39.260 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:39.260 } 00:38:39.260 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:38:39.260 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:39.260 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:39.260 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:39.260 21:52:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:40.194 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.453 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:40.453 "name": "raid_bdev1", 00:38:40.453 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:40.453 "strip_size_kb": 0, 00:38:40.453 "state": "online", 00:38:40.453 "raid_level": "raid1", 00:38:40.453 "superblock": true, 00:38:40.453 "num_base_bdevs": 2, 00:38:40.453 "num_base_bdevs_discovered": 1, 00:38:40.453 "num_base_bdevs_operational": 1, 00:38:40.453 "base_bdevs_list": [ 00:38:40.453 { 00:38:40.453 "name": null, 00:38:40.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:40.453 "is_configured": false, 00:38:40.453 "data_offset": 256, 00:38:40.453 "data_size": 7936 00:38:40.453 }, 00:38:40.453 { 00:38:40.453 "name": "BaseBdev2", 00:38:40.453 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:40.453 "is_configured": true, 00:38:40.453 "data_offset": 256, 00:38:40.453 "data_size": 7936 00:38:40.453 } 00:38:40.453 ] 00:38:40.453 }' 00:38:40.453 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:40.453 21:52:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.019 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:41.019 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:41.019 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:41.019 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:41.019 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:41.019 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:41.279 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:41.279 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:41.279 "name": "raid_bdev1", 00:38:41.279 "uuid": "5c6a5488-47ae-4996-83e0-8283fe4c320a", 00:38:41.279 "strip_size_kb": 0, 00:38:41.279 "state": "online", 00:38:41.279 "raid_level": "raid1", 00:38:41.279 "superblock": true, 00:38:41.279 "num_base_bdevs": 2, 00:38:41.279 "num_base_bdevs_discovered": 1, 00:38:41.279 "num_base_bdevs_operational": 1, 00:38:41.279 "base_bdevs_list": [ 00:38:41.279 { 00:38:41.279 "name": null, 00:38:41.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:41.279 "is_configured": false, 00:38:41.279 "data_offset": 256, 00:38:41.279 "data_size": 7936 
00:38:41.279 }, 00:38:41.279 { 00:38:41.279 "name": "BaseBdev2", 00:38:41.279 "uuid": "6e93b810-3ebd-5044-a08a-abae55a68307", 00:38:41.279 "is_configured": true, 00:38:41.279 "data_offset": 256, 00:38:41.279 "data_size": 7936 00:38:41.279 } 00:38:41.279 ] 00:38:41.279 }' 00:38:41.279 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:41.279 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:41.279 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 163454 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 163454 ']' 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 163454 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163454 00:38:41.538 killing process with pid 163454 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163454' 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 163454 00:38:41.538 21:52:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 163454 00:38:41.538 Received shutdown signal, test time was about 60.000000 seconds 00:38:41.538 00:38:41.538 Latency(us) 00:38:41.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.538 =================================================================================================================== 00:38:41.538 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:41.538 [2024-07-15 21:52:14.703953] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:41.538 [2024-07-15 21:52:14.704162] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:41.538 [2024-07-15 21:52:14.704242] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:41.538 [2024-07-15 21:52:14.704270] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:38:41.795 [2024-07-15 21:52:15.048923] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:43.227 ************************************ 00:38:43.227 END TEST raid_rebuild_test_sb_md_separate 00:38:43.227 ************************************ 00:38:43.227 21:52:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:38:43.227 00:38:43.227 real 0m32.785s 00:38:43.227 user 0m51.468s 00:38:43.227 sys 0m4.254s 
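Between the two tests the harness switches base_malloc_params to '-m 32 -i', so every base bdev in the interleaved-metadata tests carries 32 bytes of per-block metadata interleaved with the data (hence block_size 4128 = 4096 + 32 in the bdev_get_bdevs dumps below). A rough hand-run reproduction of the fixture the next test builds — assuming the same repo layout and socket path as the log; the socket-wait loop and the kill at the end are additions, not harness code — could be:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock

    # Start a bare bdev service with raid debug logging, as the test does.
    "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 -L bdev_raid &
    svc_pid=$!
    until [ -S "$sock" ]; do sleep 0.1; done    # crude stand-in for waitforlisten

    # Two 32 MiB malloc bdevs, 4096-byte blocks, 32-byte interleaved metadata.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
    "$spdk/scripts/rpc.py" -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
    "$spdk/scripts/rpc.py" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # ... run checks against the raid bdev, then tear the service down.
    kill "$svc_pid"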
00:38:43.227 21:52:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:43.227 21:52:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:43.227 21:52:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:38:43.227 21:52:16 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:38:43.227 21:52:16 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:38:43.227 21:52:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:38:43.227 21:52:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:43.227 21:52:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:43.227 ************************************ 00:38:43.227 START TEST raid_state_function_test_sb_md_interleaved 00:38:43.227 ************************************ 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 
'!=' raid1 ']' 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=164385 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 164385' 00:38:43.227 Process raid pid: 164385 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 164385 /var/tmp/spdk-raid.sock 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 164385 ']' 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:43.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:43.227 21:52:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:43.227 [2024-07-15 21:52:16.596582] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
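The stretch that follows exercises deferred assembly: bdev_raid_create is issued with -s (superblock) while neither base bdev exists, so Existed_Raid has to sit in the "configuring" state and only go "online" once both members have been registered and claimed. A condensed, unofficial check of that behaviour — names, flags, and socket path copied from the trace; the real test also deletes and re-creates the raid bdev between steps — might read:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    raid_state() { rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'; }

    rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    [ "$(raid_state)" = configuring ]     # no base bdevs discovered yet

    rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
    [ "$(raid_state)" = configuring ]     # 1 of 2 members present

    rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
    [ "$(raid_state)" = online ]          # fully assembled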
00:38:43.227 [2024-07-15 21:52:16.596845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:43.486 [2024-07-15 21:52:16.773924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.746 [2024-07-15 21:52:17.035568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.005 [2024-07-15 21:52:17.291794] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:44.264 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:44.264 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:38:44.264 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:44.264 [2024-07-15 21:52:17.612418] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:44.264 [2024-07-15 21:52:17.612632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:44.264 [2024-07-15 21:52:17.612669] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:44.264 [2024-07-15 21:52:17.612713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:44.264 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:44.264 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:44.264 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:44.264 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:44.265 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:44.265 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:44.265 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:44.265 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:44.265 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:44.265 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:44.265 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:44.265 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:44.523 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:44.523 "name": "Existed_Raid", 00:38:44.523 "uuid": "7cb30536-34a0-44db-8ac4-b26f0ddadfd5", 
00:38:44.523 "strip_size_kb": 0, 00:38:44.523 "state": "configuring", 00:38:44.523 "raid_level": "raid1", 00:38:44.523 "superblock": true, 00:38:44.523 "num_base_bdevs": 2, 00:38:44.523 "num_base_bdevs_discovered": 0, 00:38:44.523 "num_base_bdevs_operational": 2, 00:38:44.523 "base_bdevs_list": [ 00:38:44.523 { 00:38:44.523 "name": "BaseBdev1", 00:38:44.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.523 "is_configured": false, 00:38:44.523 "data_offset": 0, 00:38:44.523 "data_size": 0 00:38:44.523 }, 00:38:44.523 { 00:38:44.523 "name": "BaseBdev2", 00:38:44.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.523 "is_configured": false, 00:38:44.523 "data_offset": 0, 00:38:44.523 "data_size": 0 00:38:44.523 } 00:38:44.523 ] 00:38:44.523 }' 00:38:44.523 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:44.523 21:52:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:45.092 21:52:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:45.351 [2024-07-15 21:52:18.614520] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:45.351 [2024-07-15 21:52:18.614672] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:38:45.351 21:52:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:45.610 [2024-07-15 21:52:18.794260] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:45.610 [2024-07-15 21:52:18.794435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:45.610 [2024-07-15 21:52:18.794469] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:45.610 [2024-07-15 21:52:18.794508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:45.610 21:52:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:38:45.869 [2024-07-15 21:52:19.033653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:45.869 BaseBdev1 00:38:45.869 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:38:45.869 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:38:45.869 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:45.869 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:38:45.869 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:45.869 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:45.869 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:45.869 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:46.154 [ 00:38:46.154 { 00:38:46.154 "name": "BaseBdev1", 00:38:46.154 "aliases": [ 00:38:46.154 "6ae861f2-7613-4f50-ba76-822829539dec" 00:38:46.154 ], 00:38:46.154 "product_name": "Malloc disk", 00:38:46.154 "block_size": 4128, 00:38:46.154 "num_blocks": 8192, 00:38:46.154 "uuid": "6ae861f2-7613-4f50-ba76-822829539dec", 00:38:46.154 "md_size": 32, 00:38:46.154 "md_interleave": true, 00:38:46.154 "dif_type": 0, 00:38:46.154 "assigned_rate_limits": { 00:38:46.154 "rw_ios_per_sec": 0, 00:38:46.154 "rw_mbytes_per_sec": 0, 00:38:46.154 "r_mbytes_per_sec": 0, 00:38:46.154 "w_mbytes_per_sec": 0 00:38:46.154 }, 00:38:46.154 "claimed": true, 00:38:46.154 "claim_type": "exclusive_write", 00:38:46.154 "zoned": false, 00:38:46.154 "supported_io_types": { 00:38:46.154 "read": true, 00:38:46.154 "write": true, 00:38:46.154 "unmap": true, 00:38:46.154 "flush": true, 00:38:46.154 "reset": true, 00:38:46.154 "nvme_admin": false, 00:38:46.154 "nvme_io": false, 00:38:46.154 "nvme_io_md": false, 00:38:46.154 "write_zeroes": true, 00:38:46.154 "zcopy": true, 00:38:46.154 "get_zone_info": false, 00:38:46.155 "zone_management": false, 00:38:46.155 "zone_append": false, 00:38:46.155 "compare": false, 00:38:46.155 "compare_and_write": false, 00:38:46.155 "abort": true, 00:38:46.155 "seek_hole": false, 00:38:46.155 "seek_data": false, 00:38:46.155 "copy": true, 00:38:46.155 "nvme_iov_md": false 00:38:46.155 }, 00:38:46.155 "memory_domains": [ 00:38:46.155 { 00:38:46.155 "dma_device_id": "system", 00:38:46.155 "dma_device_type": 1 00:38:46.155 }, 00:38:46.155 { 00:38:46.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:46.155 "dma_device_type": 2 00:38:46.155 } 00:38:46.155 ], 00:38:46.155 "driver_specific": {} 00:38:46.155 } 00:38:46.155 ] 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:46.155 21:52:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:46.155 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:46.414 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:46.414 "name": "Existed_Raid", 00:38:46.414 "uuid": "d79221c1-9918-4497-9381-e9fb7c8d8324", 00:38:46.414 "strip_size_kb": 0, 00:38:46.414 "state": "configuring", 00:38:46.414 "raid_level": "raid1", 00:38:46.414 "superblock": true, 00:38:46.414 "num_base_bdevs": 2, 00:38:46.414 "num_base_bdevs_discovered": 1, 00:38:46.414 "num_base_bdevs_operational": 2, 00:38:46.414 "base_bdevs_list": [ 00:38:46.414 { 00:38:46.414 "name": "BaseBdev1", 00:38:46.414 "uuid": "6ae861f2-7613-4f50-ba76-822829539dec", 00:38:46.414 "is_configured": true, 00:38:46.414 "data_offset": 256, 00:38:46.414 "data_size": 7936 00:38:46.414 }, 00:38:46.414 { 00:38:46.414 "name": "BaseBdev2", 00:38:46.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.414 "is_configured": false, 00:38:46.414 "data_offset": 0, 00:38:46.414 "data_size": 0 00:38:46.414 } 00:38:46.414 ] 00:38:46.414 }' 00:38:46.414 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:46.414 21:52:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:46.981 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:47.239 [2024-07-15 21:52:20.455473] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:47.239 [2024-07-15 21:52:20.455631] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:38:47.239 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:47.497 [2024-07-15 21:52:20.659188] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:47.497 [2024-07-15 21:52:20.661361] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:47.497 [2024-07-15 21:52:20.661461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:47.497 21:52:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:47.497 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:47.776 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:47.776 "name": "Existed_Raid", 00:38:47.776 "uuid": "55f87af8-02da-4321-8f27-3ec5d1c522bd", 00:38:47.776 "strip_size_kb": 0, 00:38:47.776 "state": "configuring", 00:38:47.776 "raid_level": "raid1", 00:38:47.776 "superblock": true, 00:38:47.776 "num_base_bdevs": 2, 00:38:47.776 "num_base_bdevs_discovered": 1, 00:38:47.776 "num_base_bdevs_operational": 2, 00:38:47.776 "base_bdevs_list": [ 00:38:47.776 { 00:38:47.776 "name": "BaseBdev1", 00:38:47.776 "uuid": "6ae861f2-7613-4f50-ba76-822829539dec", 00:38:47.776 "is_configured": true, 00:38:47.776 "data_offset": 256, 00:38:47.776 "data_size": 7936 00:38:47.776 }, 00:38:47.776 { 00:38:47.776 "name": "BaseBdev2", 00:38:47.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:47.776 "is_configured": false, 00:38:47.776 "data_offset": 0, 00:38:47.776 "data_size": 0 00:38:47.776 } 00:38:47.776 ] 00:38:47.776 }' 00:38:47.776 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:47.776 21:52:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:48.343 21:52:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:38:48.601 [2024-07-15 21:52:21.750201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:48.601 [2024-07-15 21:52:21.750561] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:38:48.601 [2024-07-15 21:52:21.750591] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:48.601 [2024-07-15 21:52:21.750725] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:38:48.601 [2024-07-15 21:52:21.750835] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:38:48.601 [2024-07-15 21:52:21.750866] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:38:48.601 [2024-07-15 21:52:21.750970] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:48.601 BaseBdev2 00:38:48.601 21:52:21 
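(For readers following the trace: the call that just completed above registers the second base device. A minimal stand-alone sketch of that sequence, reusing the exact RPC client path, socket, and bdev name visible in this log; reading the positional arguments as a 32 MiB total size with a 4096-byte block size is consistent with the resulting bdev reporting num_blocks 8192 and block_size 4128 once the 32 bytes of interleaved metadata are added.)

  # create a 32 MiB malloc bdev with 4096-byte data blocks and 32 bytes of
  # interleaved metadata per block (-m 32 -i), named BaseBdev2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
  # block until bdev examine has finished, then confirm the bdev is visible
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000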
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:38:48.601 21:52:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:38:48.601 21:52:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:48.601 21:52:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:38:48.601 21:52:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:48.601 21:52:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:48.601 21:52:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:48.601 21:52:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:48.860 [ 00:38:48.860 { 00:38:48.860 "name": "BaseBdev2", 00:38:48.860 "aliases": [ 00:38:48.860 "1a2dd244-cacf-4be6-84ee-f13b7f973b4c" 00:38:48.860 ], 00:38:48.860 "product_name": "Malloc disk", 00:38:48.860 "block_size": 4128, 00:38:48.860 "num_blocks": 8192, 00:38:48.860 "uuid": "1a2dd244-cacf-4be6-84ee-f13b7f973b4c", 00:38:48.860 "md_size": 32, 00:38:48.860 "md_interleave": true, 00:38:48.860 "dif_type": 0, 00:38:48.860 "assigned_rate_limits": { 00:38:48.860 "rw_ios_per_sec": 0, 00:38:48.860 "rw_mbytes_per_sec": 0, 00:38:48.860 "r_mbytes_per_sec": 0, 00:38:48.860 "w_mbytes_per_sec": 0 00:38:48.860 }, 00:38:48.860 "claimed": true, 00:38:48.860 "claim_type": "exclusive_write", 00:38:48.860 "zoned": false, 00:38:48.860 "supported_io_types": { 00:38:48.860 "read": true, 00:38:48.860 "write": true, 00:38:48.860 "unmap": true, 00:38:48.860 "flush": true, 00:38:48.860 "reset": true, 00:38:48.860 "nvme_admin": false, 00:38:48.860 "nvme_io": false, 00:38:48.860 "nvme_io_md": false, 00:38:48.860 "write_zeroes": true, 00:38:48.860 "zcopy": true, 00:38:48.860 "get_zone_info": false, 00:38:48.860 "zone_management": false, 00:38:48.860 "zone_append": false, 00:38:48.860 "compare": false, 00:38:48.860 "compare_and_write": false, 00:38:48.860 "abort": true, 00:38:48.860 "seek_hole": false, 00:38:48.860 "seek_data": false, 00:38:48.860 "copy": true, 00:38:48.860 "nvme_iov_md": false 00:38:48.860 }, 00:38:48.860 "memory_domains": [ 00:38:48.860 { 00:38:48.860 "dma_device_id": "system", 00:38:48.860 "dma_device_type": 1 00:38:48.860 }, 00:38:48.860 { 00:38:48.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:48.860 "dma_device_type": 2 00:38:48.860 } 00:38:48.860 ], 00:38:48.860 "driver_specific": {} 00:38:48.860 } 00:38:48.860 ] 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:48.860 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:49.118 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:49.118 "name": "Existed_Raid", 00:38:49.118 "uuid": "55f87af8-02da-4321-8f27-3ec5d1c522bd", 00:38:49.118 "strip_size_kb": 0, 00:38:49.118 "state": "online", 00:38:49.118 "raid_level": "raid1", 00:38:49.118 "superblock": true, 00:38:49.118 "num_base_bdevs": 2, 00:38:49.118 "num_base_bdevs_discovered": 2, 00:38:49.118 "num_base_bdevs_operational": 2, 00:38:49.118 "base_bdevs_list": [ 00:38:49.118 { 00:38:49.118 "name": "BaseBdev1", 00:38:49.118 "uuid": "6ae861f2-7613-4f50-ba76-822829539dec", 00:38:49.118 "is_configured": true, 00:38:49.118 "data_offset": 256, 00:38:49.118 "data_size": 7936 00:38:49.118 }, 00:38:49.118 { 00:38:49.118 "name": "BaseBdev2", 00:38:49.118 "uuid": "1a2dd244-cacf-4be6-84ee-f13b7f973b4c", 00:38:49.118 "is_configured": true, 00:38:49.118 "data_offset": 256, 00:38:49.118 "data_size": 7936 00:38:49.118 } 00:38:49.118 ] 00:38:49.118 }' 00:38:49.118 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:49.119 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:49.685 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:38:49.685 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:38:49.685 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:49.685 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:49.685 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:49.685 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:38:49.685 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:49.686 21:52:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:38:49.945 [2024-07-15 21:52:23.136185] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:49.945 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:49.945 "name": "Existed_Raid", 00:38:49.945 "aliases": [ 00:38:49.945 "55f87af8-02da-4321-8f27-3ec5d1c522bd" 00:38:49.945 ], 00:38:49.945 "product_name": "Raid Volume", 00:38:49.945 "block_size": 4128, 00:38:49.945 "num_blocks": 7936, 00:38:49.945 "uuid": "55f87af8-02da-4321-8f27-3ec5d1c522bd", 00:38:49.945 "md_size": 32, 00:38:49.945 "md_interleave": true, 00:38:49.945 "dif_type": 0, 00:38:49.945 "assigned_rate_limits": { 00:38:49.945 "rw_ios_per_sec": 0, 00:38:49.945 "rw_mbytes_per_sec": 0, 00:38:49.945 "r_mbytes_per_sec": 0, 00:38:49.945 "w_mbytes_per_sec": 0 00:38:49.945 }, 00:38:49.945 "claimed": false, 00:38:49.945 "zoned": false, 00:38:49.945 "supported_io_types": { 00:38:49.945 "read": true, 00:38:49.945 "write": true, 00:38:49.945 "unmap": false, 00:38:49.945 "flush": false, 00:38:49.945 "reset": true, 00:38:49.945 "nvme_admin": false, 00:38:49.945 "nvme_io": false, 00:38:49.945 "nvme_io_md": false, 00:38:49.945 "write_zeroes": true, 00:38:49.945 "zcopy": false, 00:38:49.945 "get_zone_info": false, 00:38:49.945 "zone_management": false, 00:38:49.945 "zone_append": false, 00:38:49.945 "compare": false, 00:38:49.945 "compare_and_write": false, 00:38:49.945 "abort": false, 00:38:49.945 "seek_hole": false, 00:38:49.945 "seek_data": false, 00:38:49.945 "copy": false, 00:38:49.945 "nvme_iov_md": false 00:38:49.945 }, 00:38:49.945 "memory_domains": [ 00:38:49.945 { 00:38:49.945 "dma_device_id": "system", 00:38:49.945 "dma_device_type": 1 00:38:49.945 }, 00:38:49.945 { 00:38:49.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:49.945 "dma_device_type": 2 00:38:49.945 }, 00:38:49.945 { 00:38:49.945 "dma_device_id": "system", 00:38:49.945 "dma_device_type": 1 00:38:49.945 }, 00:38:49.945 { 00:38:49.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:49.945 "dma_device_type": 2 00:38:49.945 } 00:38:49.945 ], 00:38:49.945 "driver_specific": { 00:38:49.945 "raid": { 00:38:49.945 "uuid": "55f87af8-02da-4321-8f27-3ec5d1c522bd", 00:38:49.946 "strip_size_kb": 0, 00:38:49.946 "state": "online", 00:38:49.946 "raid_level": "raid1", 00:38:49.946 "superblock": true, 00:38:49.946 "num_base_bdevs": 2, 00:38:49.946 "num_base_bdevs_discovered": 2, 00:38:49.946 "num_base_bdevs_operational": 2, 00:38:49.946 "base_bdevs_list": [ 00:38:49.946 { 00:38:49.946 "name": "BaseBdev1", 00:38:49.946 "uuid": "6ae861f2-7613-4f50-ba76-822829539dec", 00:38:49.946 "is_configured": true, 00:38:49.946 "data_offset": 256, 00:38:49.946 "data_size": 7936 00:38:49.946 }, 00:38:49.946 { 00:38:49.946 "name": "BaseBdev2", 00:38:49.946 "uuid": "1a2dd244-cacf-4be6-84ee-f13b7f973b4c", 00:38:49.946 "is_configured": true, 00:38:49.946 "data_offset": 256, 00:38:49.946 "data_size": 7936 00:38:49.946 } 00:38:49.946 ] 00:38:49.946 } 00:38:49.946 } 00:38:49.946 }' 00:38:49.946 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:49.946 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:38:49.946 BaseBdev2' 00:38:49.946 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:49.946 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:38:49.946 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:50.203 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:50.203 "name": "BaseBdev1", 00:38:50.203 "aliases": [ 00:38:50.203 "6ae861f2-7613-4f50-ba76-822829539dec" 00:38:50.203 ], 00:38:50.203 "product_name": "Malloc disk", 00:38:50.203 "block_size": 4128, 00:38:50.203 "num_blocks": 8192, 00:38:50.203 "uuid": "6ae861f2-7613-4f50-ba76-822829539dec", 00:38:50.203 "md_size": 32, 00:38:50.203 "md_interleave": true, 00:38:50.203 "dif_type": 0, 00:38:50.203 "assigned_rate_limits": { 00:38:50.203 "rw_ios_per_sec": 0, 00:38:50.203 "rw_mbytes_per_sec": 0, 00:38:50.203 "r_mbytes_per_sec": 0, 00:38:50.203 "w_mbytes_per_sec": 0 00:38:50.203 }, 00:38:50.203 "claimed": true, 00:38:50.203 "claim_type": "exclusive_write", 00:38:50.203 "zoned": false, 00:38:50.203 "supported_io_types": { 00:38:50.203 "read": true, 00:38:50.203 "write": true, 00:38:50.203 "unmap": true, 00:38:50.203 "flush": true, 00:38:50.203 "reset": true, 00:38:50.203 "nvme_admin": false, 00:38:50.203 "nvme_io": false, 00:38:50.203 "nvme_io_md": false, 00:38:50.203 "write_zeroes": true, 00:38:50.203 "zcopy": true, 00:38:50.203 "get_zone_info": false, 00:38:50.203 "zone_management": false, 00:38:50.203 "zone_append": false, 00:38:50.203 "compare": false, 00:38:50.203 "compare_and_write": false, 00:38:50.203 "abort": true, 00:38:50.203 "seek_hole": false, 00:38:50.203 "seek_data": false, 00:38:50.203 "copy": true, 00:38:50.203 "nvme_iov_md": false 00:38:50.203 }, 00:38:50.203 "memory_domains": [ 00:38:50.203 { 00:38:50.203 "dma_device_id": "system", 00:38:50.203 "dma_device_type": 1 00:38:50.203 }, 00:38:50.203 { 00:38:50.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:50.203 "dma_device_type": 2 00:38:50.203 } 00:38:50.203 ], 00:38:50.203 "driver_specific": {} 00:38:50.203 }' 00:38:50.203 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:50.203 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:50.203 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:50.203 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:50.203 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:50.461 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:50.461 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:50.461 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:50.461 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:50.461 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:50.461 
21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:50.461 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:50.461 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:50.719 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:38:50.719 21:52:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:50.719 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:50.719 "name": "BaseBdev2", 00:38:50.719 "aliases": [ 00:38:50.719 "1a2dd244-cacf-4be6-84ee-f13b7f973b4c" 00:38:50.719 ], 00:38:50.719 "product_name": "Malloc disk", 00:38:50.719 "block_size": 4128, 00:38:50.719 "num_blocks": 8192, 00:38:50.719 "uuid": "1a2dd244-cacf-4be6-84ee-f13b7f973b4c", 00:38:50.719 "md_size": 32, 00:38:50.719 "md_interleave": true, 00:38:50.719 "dif_type": 0, 00:38:50.719 "assigned_rate_limits": { 00:38:50.719 "rw_ios_per_sec": 0, 00:38:50.719 "rw_mbytes_per_sec": 0, 00:38:50.719 "r_mbytes_per_sec": 0, 00:38:50.719 "w_mbytes_per_sec": 0 00:38:50.719 }, 00:38:50.719 "claimed": true, 00:38:50.719 "claim_type": "exclusive_write", 00:38:50.719 "zoned": false, 00:38:50.719 "supported_io_types": { 00:38:50.719 "read": true, 00:38:50.719 "write": true, 00:38:50.719 "unmap": true, 00:38:50.719 "flush": true, 00:38:50.719 "reset": true, 00:38:50.719 "nvme_admin": false, 00:38:50.719 "nvme_io": false, 00:38:50.719 "nvme_io_md": false, 00:38:50.719 "write_zeroes": true, 00:38:50.719 "zcopy": true, 00:38:50.719 "get_zone_info": false, 00:38:50.719 "zone_management": false, 00:38:50.719 "zone_append": false, 00:38:50.719 "compare": false, 00:38:50.719 "compare_and_write": false, 00:38:50.719 "abort": true, 00:38:50.719 "seek_hole": false, 00:38:50.719 "seek_data": false, 00:38:50.719 "copy": true, 00:38:50.719 "nvme_iov_md": false 00:38:50.719 }, 00:38:50.719 "memory_domains": [ 00:38:50.719 { 00:38:50.719 "dma_device_id": "system", 00:38:50.719 "dma_device_type": 1 00:38:50.719 }, 00:38:50.719 { 00:38:50.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:50.719 "dma_device_type": 2 00:38:50.719 } 00:38:50.719 ], 00:38:50.719 "driver_specific": {} 00:38:50.719 }' 00:38:50.719 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:50.719 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:50.978 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:50.978 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:50.978 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:50.978 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:50.978 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:50.978 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:51.236 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:51.236 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:51.236 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:51.236 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:51.236 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:38:51.495 [2024-07-15 21:52:24.713414] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:51.495 21:52:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:51.754 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:51.754 "name": "Existed_Raid", 00:38:51.754 "uuid": "55f87af8-02da-4321-8f27-3ec5d1c522bd", 00:38:51.754 "strip_size_kb": 0, 00:38:51.754 "state": "online", 00:38:51.754 "raid_level": "raid1", 00:38:51.754 "superblock": true, 00:38:51.754 "num_base_bdevs": 2, 00:38:51.754 "num_base_bdevs_discovered": 1, 00:38:51.754 "num_base_bdevs_operational": 1, 00:38:51.754 "base_bdevs_list": [ 00:38:51.754 { 00:38:51.754 "name": null, 
00:38:51.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:51.754 "is_configured": false, 00:38:51.754 "data_offset": 256, 00:38:51.754 "data_size": 7936 00:38:51.754 }, 00:38:51.754 { 00:38:51.754 "name": "BaseBdev2", 00:38:51.754 "uuid": "1a2dd244-cacf-4be6-84ee-f13b7f973b4c", 00:38:51.754 "is_configured": true, 00:38:51.754 "data_offset": 256, 00:38:51.754 "data_size": 7936 00:38:51.754 } 00:38:51.754 ] 00:38:51.754 }' 00:38:51.754 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:51.754 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:52.690 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:38:52.690 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:38:52.690 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:52.690 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:38:52.690 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:38:52.690 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:52.690 21:52:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:38:52.949 [2024-07-15 21:52:26.087148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:52.949 [2024-07-15 21:52:26.087408] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:52.949 [2024-07-15 21:52:26.197413] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:52.949 [2024-07-15 21:52:26.197547] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:52.949 [2024-07-15 21:52:26.197575] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:38:52.949 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:38:52.949 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:38:52.949 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:52.949 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 164385 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@948 -- # '[' -z 164385 ']' 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 164385 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164385 00:38:53.207 killing process with pid 164385 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164385' 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 164385 00:38:53.207 21:52:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 164385 00:38:53.207 [2024-07-15 21:52:26.447091] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:53.207 [2024-07-15 21:52:26.447215] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:54.582 ************************************ 00:38:54.582 END TEST raid_state_function_test_sb_md_interleaved 00:38:54.582 ************************************ 00:38:54.582 21:52:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:38:54.582 00:38:54.582 real 0m11.391s 00:38:54.582 user 0m19.450s 00:38:54.582 sys 0m1.511s 00:38:54.582 21:52:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:54.582 21:52:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:54.841 21:52:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:38:54.841 21:52:27 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:38:54.841 21:52:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:38:54.841 21:52:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:54.841 21:52:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:54.841 ************************************ 00:38:54.841 START TEST raid_superblock_test_md_interleaved 00:38:54.841 ************************************ 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:38:54.841 21:52:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=164785 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 164785 /var/tmp/spdk-raid.sock 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 164785 ']' 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:54.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:54.841 21:52:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:54.841 [2024-07-15 21:52:28.037467] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:38:54.841 [2024-07-15 21:52:28.037694] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164785 ] 00:38:54.841 [2024-07-15 21:52:28.202873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.100 [2024-07-15 21:52:28.470195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.359 [2024-07-15 21:52:28.718163] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:55.617 21:52:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:38:55.876 malloc1 00:38:55.876 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:56.135 [2024-07-15 21:52:29.389425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:56.135 [2024-07-15 21:52:29.389648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:56.135 [2024-07-15 21:52:29.389724] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:38:56.135 [2024-07-15 21:52:29.389770] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:56.135 [2024-07-15 21:52:29.391959] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:56.135 [2024-07-15 21:52:29.392044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:56.135 pt1 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:56.135 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:38:56.394 malloc2 00:38:56.394 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:56.653 [2024-07-15 21:52:29.899532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:56.653 [2024-07-15 21:52:29.899786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:56.653 [2024-07-15 21:52:29.899846] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:38:56.654 [2024-07-15 21:52:29.899914] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:56.654 [2024-07-15 21:52:29.902152] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:56.654 [2024-07-15 21:52:29.902240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:56.654 pt2 00:38:56.654 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:38:56.654 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:56.654 21:52:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:38:56.914 [2024-07-15 21:52:30.099267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:56.914 [2024-07-15 21:52:30.101493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:56.914 [2024-07-15 21:52:30.101771] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:38:56.914 [2024-07-15 21:52:30.101813] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:56.914 [2024-07-15 21:52:30.101950] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:38:56.914 [2024-07-15 21:52:30.102065] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:38:56.914 [2024-07-15 21:52:30.102100] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:38:56.914 [2024-07-15 21:52:30.102189] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:56.914 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.173 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:57.173 "name": "raid_bdev1", 00:38:57.173 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:38:57.173 "strip_size_kb": 0, 00:38:57.173 "state": "online", 00:38:57.173 "raid_level": "raid1", 00:38:57.173 "superblock": true, 00:38:57.173 "num_base_bdevs": 2, 00:38:57.173 "num_base_bdevs_discovered": 2, 00:38:57.173 "num_base_bdevs_operational": 2, 00:38:57.173 "base_bdevs_list": [ 00:38:57.173 { 00:38:57.173 "name": "pt1", 00:38:57.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:57.173 "is_configured": true, 00:38:57.173 "data_offset": 256, 00:38:57.173 "data_size": 7936 00:38:57.173 }, 00:38:57.173 { 00:38:57.173 "name": "pt2", 00:38:57.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:57.173 "is_configured": true, 00:38:57.173 "data_offset": 256, 00:38:57.173 "data_size": 7936 00:38:57.173 } 00:38:57.173 ] 00:38:57.173 }' 00:38:57.173 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:57.173 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:57.760 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:38:57.760 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:38:57.760 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:57.760 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:57.760 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:57.760 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:38:57.760 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:57.760 21:52:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:57.760 [2024-07-15 21:52:31.117683] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:57.760 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:57.760 "name": "raid_bdev1", 00:38:57.760 "aliases": [ 00:38:57.760 "bb65c732-f929-4047-bf56-400ff0a1c26f" 00:38:57.760 ], 00:38:57.760 "product_name": "Raid Volume", 00:38:57.760 "block_size": 4128, 00:38:57.760 "num_blocks": 7936, 00:38:57.760 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:38:57.760 "md_size": 32, 00:38:57.760 "md_interleave": true, 00:38:57.760 "dif_type": 0, 00:38:57.760 "assigned_rate_limits": { 00:38:57.760 "rw_ios_per_sec": 0, 00:38:57.760 "rw_mbytes_per_sec": 0, 00:38:57.760 "r_mbytes_per_sec": 0, 00:38:57.760 "w_mbytes_per_sec": 0 00:38:57.760 }, 00:38:57.760 "claimed": false, 00:38:57.760 "zoned": false, 00:38:57.760 "supported_io_types": { 00:38:57.760 "read": true, 00:38:57.760 "write": true, 00:38:57.760 "unmap": false, 00:38:57.760 "flush": false, 00:38:57.760 "reset": true, 00:38:57.760 "nvme_admin": false, 00:38:57.760 "nvme_io": false, 00:38:57.760 "nvme_io_md": false, 00:38:57.760 "write_zeroes": true, 00:38:57.760 "zcopy": false, 00:38:57.760 "get_zone_info": false, 00:38:57.760 "zone_management": false, 00:38:57.760 "zone_append": false, 00:38:57.760 "compare": false, 00:38:57.760 "compare_and_write": false, 00:38:57.760 "abort": false, 00:38:57.760 "seek_hole": false, 00:38:57.760 "seek_data": false, 00:38:57.760 "copy": false, 00:38:57.760 "nvme_iov_md": false 00:38:57.760 }, 00:38:57.760 "memory_domains": [ 00:38:57.760 { 00:38:57.760 "dma_device_id": "system", 00:38:57.760 "dma_device_type": 1 00:38:57.760 }, 00:38:57.760 { 00:38:57.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:57.760 "dma_device_type": 2 00:38:57.760 }, 00:38:57.760 { 00:38:57.760 "dma_device_id": "system", 00:38:57.760 "dma_device_type": 1 00:38:57.760 }, 00:38:57.760 { 00:38:57.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:57.760 "dma_device_type": 2 00:38:57.760 } 00:38:57.760 ], 00:38:57.760 "driver_specific": { 00:38:57.760 "raid": { 00:38:57.760 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:38:57.760 "strip_size_kb": 0, 00:38:57.760 "state": "online", 00:38:57.760 "raid_level": "raid1", 00:38:57.760 "superblock": true, 00:38:57.760 "num_base_bdevs": 2, 00:38:57.760 "num_base_bdevs_discovered": 2, 00:38:57.760 "num_base_bdevs_operational": 2, 00:38:57.760 "base_bdevs_list": [ 00:38:57.760 { 00:38:57.760 "name": "pt1", 00:38:57.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:57.760 "is_configured": true, 00:38:57.760 "data_offset": 256, 00:38:57.760 "data_size": 7936 00:38:57.760 }, 00:38:57.760 { 00:38:57.760 "name": "pt2", 00:38:57.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:57.760 "is_configured": true, 00:38:57.760 "data_offset": 256, 00:38:57.760 "data_size": 7936 00:38:57.760 } 00:38:57.760 ] 00:38:57.760 } 00:38:57.760 } 00:38:57.760 }' 00:38:57.760 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:58.019 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:38:58.019 pt2' 00:38:58.019 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:58.019 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:38:58.019 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:58.019 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:58.019 "name": "pt1", 00:38:58.019 "aliases": [ 00:38:58.019 "00000000-0000-0000-0000-000000000001" 00:38:58.019 ], 00:38:58.019 "product_name": "passthru", 00:38:58.019 "block_size": 4128, 00:38:58.019 "num_blocks": 8192, 00:38:58.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:58.019 "md_size": 32, 00:38:58.019 "md_interleave": true, 00:38:58.019 "dif_type": 0, 00:38:58.019 "assigned_rate_limits": { 00:38:58.019 "rw_ios_per_sec": 0, 00:38:58.019 "rw_mbytes_per_sec": 0, 00:38:58.019 "r_mbytes_per_sec": 0, 00:38:58.019 "w_mbytes_per_sec": 0 00:38:58.019 }, 00:38:58.019 "claimed": true, 00:38:58.019 "claim_type": "exclusive_write", 00:38:58.019 "zoned": false, 00:38:58.019 "supported_io_types": { 00:38:58.019 "read": true, 00:38:58.019 "write": true, 00:38:58.019 "unmap": true, 00:38:58.019 "flush": true, 00:38:58.019 "reset": true, 00:38:58.019 "nvme_admin": false, 00:38:58.019 "nvme_io": false, 00:38:58.019 "nvme_io_md": false, 00:38:58.019 "write_zeroes": true, 00:38:58.019 "zcopy": true, 00:38:58.019 "get_zone_info": false, 00:38:58.019 "zone_management": false, 00:38:58.019 "zone_append": false, 00:38:58.019 "compare": false, 00:38:58.019 "compare_and_write": false, 00:38:58.019 "abort": true, 00:38:58.019 "seek_hole": false, 00:38:58.019 "seek_data": false, 00:38:58.019 "copy": true, 00:38:58.019 "nvme_iov_md": false 00:38:58.019 }, 00:38:58.019 "memory_domains": [ 00:38:58.019 { 00:38:58.019 "dma_device_id": "system", 00:38:58.019 "dma_device_type": 1 00:38:58.019 }, 00:38:58.019 { 00:38:58.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.019 "dma_device_type": 2 00:38:58.019 } 00:38:58.019 ], 00:38:58.019 "driver_specific": { 00:38:58.019 "passthru": { 00:38:58.019 "name": "pt1", 00:38:58.019 "base_bdev_name": "malloc1" 00:38:58.019 } 00:38:58.019 } 00:38:58.019 }' 00:38:58.019 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:58.279 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:58.279 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:58.279 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:58.279 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:58.279 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:58.279 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:58.537 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:58.537 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:58.537 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:58.537 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:58.537 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:58.537 21:52:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:58.537 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:38:58.537 21:52:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:58.795 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:58.795 "name": "pt2", 00:38:58.795 "aliases": [ 00:38:58.795 "00000000-0000-0000-0000-000000000002" 00:38:58.795 ], 00:38:58.795 "product_name": "passthru", 00:38:58.795 "block_size": 4128, 00:38:58.795 "num_blocks": 8192, 00:38:58.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:58.795 "md_size": 32, 00:38:58.795 "md_interleave": true, 00:38:58.795 "dif_type": 0, 00:38:58.795 "assigned_rate_limits": { 00:38:58.795 "rw_ios_per_sec": 0, 00:38:58.795 "rw_mbytes_per_sec": 0, 00:38:58.795 "r_mbytes_per_sec": 0, 00:38:58.795 "w_mbytes_per_sec": 0 00:38:58.795 }, 00:38:58.795 "claimed": true, 00:38:58.795 "claim_type": "exclusive_write", 00:38:58.795 "zoned": false, 00:38:58.795 "supported_io_types": { 00:38:58.796 "read": true, 00:38:58.796 "write": true, 00:38:58.796 "unmap": true, 00:38:58.796 "flush": true, 00:38:58.796 "reset": true, 00:38:58.796 "nvme_admin": false, 00:38:58.796 "nvme_io": false, 00:38:58.796 "nvme_io_md": false, 00:38:58.796 "write_zeroes": true, 00:38:58.796 "zcopy": true, 00:38:58.796 "get_zone_info": false, 00:38:58.796 "zone_management": false, 00:38:58.796 "zone_append": false, 00:38:58.796 "compare": false, 00:38:58.796 "compare_and_write": false, 00:38:58.796 "abort": true, 00:38:58.796 "seek_hole": false, 00:38:58.796 "seek_data": false, 00:38:58.796 "copy": true, 00:38:58.796 "nvme_iov_md": false 00:38:58.796 }, 00:38:58.796 "memory_domains": [ 00:38:58.796 { 00:38:58.796 "dma_device_id": "system", 00:38:58.796 "dma_device_type": 1 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.796 "dma_device_type": 2 00:38:58.796 } 00:38:58.796 ], 00:38:58.796 "driver_specific": { 00:38:58.796 "passthru": { 00:38:58.796 "name": "pt2", 00:38:58.796 "base_bdev_name": "malloc2" 00:38:58.796 } 00:38:58.796 } 00:38:58.796 }' 00:38:58.796 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:58.796 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:58.796 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:58.796 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:59.054 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:59.054 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:59.054 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:59.054 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:59.054 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:59.054 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:59.312 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:59.312 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:59.313 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:59.313 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:38:59.313 [2024-07-15 21:52:32.687027] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:59.571 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=bb65c732-f929-4047-bf56-400ff0a1c26f 00:38:59.571 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z bb65c732-f929-4047-bf56-400ff0a1c26f ']' 00:38:59.571 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:59.571 [2024-07-15 21:52:32.882385] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:59.571 [2024-07-15 21:52:32.882492] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:59.571 [2024-07-15 21:52:32.882601] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:59.571 [2024-07-15 21:52:32.882686] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:59.571 [2024-07-15 21:52:32.882725] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:38:59.571 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:59.571 21:52:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:38:59.830 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:38:59.830 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:38:59.830 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:59.830 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:00.101 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:39:00.101 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:00.359 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:00.627 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:00.627 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:00.627 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:00.627 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:00.628 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:00.628 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:00.628 [2024-07-15 21:52:33.916659] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:00.628 [2024-07-15 21:52:33.919006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:00.628 [2024-07-15 21:52:33.919121] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:39:00.628 [2024-07-15 21:52:33.919240] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:39:00.628 [2024-07-15 21:52:33.919296] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:00.628 [2024-07-15 21:52:33.919318] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:39:00.628 request: 00:39:00.628 { 00:39:00.628 "name": "raid_bdev1", 00:39:00.628 "raid_level": "raid1", 00:39:00.628 "base_bdevs": [ 00:39:00.628 "malloc1", 00:39:00.628 "malloc2" 00:39:00.628 ], 00:39:00.628 "superblock": false, 00:39:00.628 "method": "bdev_raid_create", 00:39:00.628 "req_id": 1 00:39:00.628 } 00:39:00.628 Got JSON-RPC error response 00:39:00.628 response: 00:39:00.628 { 00:39:00.628 "code": -17, 00:39:00.628 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:00.628 } 00:39:00.628 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:39:00.628 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:00.628 21:52:33 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:00.628 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:00.628 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:00.628 21:52:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:39:00.886 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:39:00.886 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:39:00.886 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:01.146 [2024-07-15 21:52:34.359835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:01.146 [2024-07-15 21:52:34.360029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:01.146 [2024-07-15 21:52:34.360081] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:39:01.146 [2024-07-15 21:52:34.360128] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:01.146 [2024-07-15 21:52:34.362425] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:01.146 [2024-07-15 21:52:34.362546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:01.146 [2024-07-15 21:52:34.362644] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:01.146 [2024-07-15 21:52:34.362728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:01.146 pt1 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:01.146 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.405 21:52:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:01.405 "name": "raid_bdev1", 00:39:01.405 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:39:01.405 "strip_size_kb": 0, 00:39:01.405 "state": "configuring", 00:39:01.405 "raid_level": "raid1", 00:39:01.405 "superblock": true, 00:39:01.405 "num_base_bdevs": 2, 00:39:01.405 "num_base_bdevs_discovered": 1, 00:39:01.405 "num_base_bdevs_operational": 2, 00:39:01.405 "base_bdevs_list": [ 00:39:01.405 { 00:39:01.405 "name": "pt1", 00:39:01.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:01.405 "is_configured": true, 00:39:01.405 "data_offset": 256, 00:39:01.405 "data_size": 7936 00:39:01.405 }, 00:39:01.405 { 00:39:01.405 "name": null, 00:39:01.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:01.405 "is_configured": false, 00:39:01.405 "data_offset": 256, 00:39:01.405 "data_size": 7936 00:39:01.405 } 00:39:01.405 ] 00:39:01.405 }' 00:39:01.405 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:01.405 21:52:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:01.973 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:39:01.973 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:39:01.973 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:39:01.973 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:02.233 [2024-07-15 21:52:35.425993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:02.233 [2024-07-15 21:52:35.426178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:02.233 [2024-07-15 21:52:35.426235] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:02.233 [2024-07-15 21:52:35.426300] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:02.233 [2024-07-15 21:52:35.426550] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:02.233 [2024-07-15 21:52:35.426619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:02.233 [2024-07-15 21:52:35.426713] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:02.233 [2024-07-15 21:52:35.426757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:02.233 [2024-07-15 21:52:35.426876] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:39:02.233 [2024-07-15 21:52:35.426907] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:02.233 [2024-07-15 21:52:35.426997] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:39:02.233 [2024-07-15 21:52:35.427088] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:39:02.233 [2024-07-15 21:52:35.427116] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:39:02.233 [2024-07-15 21:52:35.427187] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:02.233 pt2 00:39:02.233 
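The span above re-creates the passthru bdevs over the original malloc bdevs with fixed UUIDs; because the raid superblock written earlier records those UUIDs, examine finds the superblock on each pt bdev as it appears ("raid superblock found on bdev pt2") and raid_bdev1 reassembles to the online state. A minimal sketch of the pt2 step driven over JSON-RPC, with the socket path, bdev names and UUID taken from the trace and everything else illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Recreate pt2 on top of malloc2 with the UUID the superblock expects,
    # so examine can claim it for the existing raid_bdev1.
    $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # Read the array state back the same way verify_raid_bdev_state does.
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'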
21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:02.233 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:02.493 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:02.493 "name": "raid_bdev1", 00:39:02.493 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:39:02.493 "strip_size_kb": 0, 00:39:02.493 "state": "online", 00:39:02.493 "raid_level": "raid1", 00:39:02.493 "superblock": true, 00:39:02.493 "num_base_bdevs": 2, 00:39:02.493 "num_base_bdevs_discovered": 2, 00:39:02.493 "num_base_bdevs_operational": 2, 00:39:02.493 "base_bdevs_list": [ 00:39:02.493 { 00:39:02.493 "name": "pt1", 00:39:02.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:02.493 "is_configured": true, 00:39:02.493 "data_offset": 256, 00:39:02.493 "data_size": 7936 00:39:02.493 }, 00:39:02.493 { 00:39:02.493 "name": "pt2", 00:39:02.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:02.493 "is_configured": true, 00:39:02.493 "data_offset": 256, 00:39:02.493 "data_size": 7936 00:39:02.493 } 00:39:02.493 ] 00:39:02.493 }' 00:39:02.493 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:02.493 21:52:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:03.062 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:39:03.062 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:39:03.062 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:39:03.062 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:39:03.062 21:52:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:39:03.062 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:39:03.062 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:03.062 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:39:03.321 [2024-07-15 21:52:36.500408] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:03.321 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:39:03.321 "name": "raid_bdev1", 00:39:03.321 "aliases": [ 00:39:03.321 "bb65c732-f929-4047-bf56-400ff0a1c26f" 00:39:03.321 ], 00:39:03.321 "product_name": "Raid Volume", 00:39:03.321 "block_size": 4128, 00:39:03.321 "num_blocks": 7936, 00:39:03.321 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:39:03.321 "md_size": 32, 00:39:03.321 "md_interleave": true, 00:39:03.321 "dif_type": 0, 00:39:03.321 "assigned_rate_limits": { 00:39:03.321 "rw_ios_per_sec": 0, 00:39:03.321 "rw_mbytes_per_sec": 0, 00:39:03.321 "r_mbytes_per_sec": 0, 00:39:03.321 "w_mbytes_per_sec": 0 00:39:03.321 }, 00:39:03.321 "claimed": false, 00:39:03.321 "zoned": false, 00:39:03.321 "supported_io_types": { 00:39:03.321 "read": true, 00:39:03.321 "write": true, 00:39:03.321 "unmap": false, 00:39:03.321 "flush": false, 00:39:03.321 "reset": true, 00:39:03.321 "nvme_admin": false, 00:39:03.321 "nvme_io": false, 00:39:03.321 "nvme_io_md": false, 00:39:03.321 "write_zeroes": true, 00:39:03.321 "zcopy": false, 00:39:03.321 "get_zone_info": false, 00:39:03.321 "zone_management": false, 00:39:03.321 "zone_append": false, 00:39:03.321 "compare": false, 00:39:03.321 "compare_and_write": false, 00:39:03.321 "abort": false, 00:39:03.321 "seek_hole": false, 00:39:03.321 "seek_data": false, 00:39:03.321 "copy": false, 00:39:03.321 "nvme_iov_md": false 00:39:03.321 }, 00:39:03.321 "memory_domains": [ 00:39:03.321 { 00:39:03.321 "dma_device_id": "system", 00:39:03.321 "dma_device_type": 1 00:39:03.321 }, 00:39:03.321 { 00:39:03.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:03.321 "dma_device_type": 2 00:39:03.321 }, 00:39:03.321 { 00:39:03.321 "dma_device_id": "system", 00:39:03.321 "dma_device_type": 1 00:39:03.321 }, 00:39:03.321 { 00:39:03.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:03.321 "dma_device_type": 2 00:39:03.321 } 00:39:03.321 ], 00:39:03.321 "driver_specific": { 00:39:03.321 "raid": { 00:39:03.321 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:39:03.321 "strip_size_kb": 0, 00:39:03.321 "state": "online", 00:39:03.321 "raid_level": "raid1", 00:39:03.321 "superblock": true, 00:39:03.321 "num_base_bdevs": 2, 00:39:03.321 "num_base_bdevs_discovered": 2, 00:39:03.321 "num_base_bdevs_operational": 2, 00:39:03.321 "base_bdevs_list": [ 00:39:03.321 { 00:39:03.321 "name": "pt1", 00:39:03.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:03.321 "is_configured": true, 00:39:03.321 "data_offset": 256, 00:39:03.321 "data_size": 7936 00:39:03.321 }, 00:39:03.321 { 00:39:03.321 "name": "pt2", 00:39:03.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:03.321 "is_configured": true, 00:39:03.321 "data_offset": 256, 00:39:03.321 "data_size": 7936 00:39:03.321 } 00:39:03.321 ] 00:39:03.321 } 00:39:03.321 } 00:39:03.321 }' 00:39:03.321 21:52:36 
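The raid_bdev1 dump that closes above makes the interleaved-metadata geometry explicit: block_size 4128 is a 4096-byte data block plus its 32-byte metadata stored inline (md_interleave true, md_size 32), and the volume exposes 7936 blocks while each base passthru reports 8192, the 256-block difference being the data_offset reserved for the superblock. The checks that follow in the trace (bdev_raid.sh@201-208) read these fields back with jq; a standalone sketch of the same queries, with paths and names from the trace and the jq phrasing merely illustrative:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Geometry fields the md_interleaved checks compare against fixed values.
    $rpc bdev_get_bdevs -b raid_bdev1 | jq '.[0] | {block_size, md_size, md_interleave, num_blocks}'
    # Configured base bdevs of the raid volume, as extracted at bdev_raid.sh@201.
    $rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[0].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'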
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:03.321 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:39:03.321 pt2' 00:39:03.321 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:03.321 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:39:03.321 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:03.581 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:03.581 "name": "pt1", 00:39:03.581 "aliases": [ 00:39:03.581 "00000000-0000-0000-0000-000000000001" 00:39:03.581 ], 00:39:03.581 "product_name": "passthru", 00:39:03.581 "block_size": 4128, 00:39:03.581 "num_blocks": 8192, 00:39:03.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:03.581 "md_size": 32, 00:39:03.581 "md_interleave": true, 00:39:03.581 "dif_type": 0, 00:39:03.581 "assigned_rate_limits": { 00:39:03.581 "rw_ios_per_sec": 0, 00:39:03.581 "rw_mbytes_per_sec": 0, 00:39:03.581 "r_mbytes_per_sec": 0, 00:39:03.581 "w_mbytes_per_sec": 0 00:39:03.581 }, 00:39:03.581 "claimed": true, 00:39:03.582 "claim_type": "exclusive_write", 00:39:03.582 "zoned": false, 00:39:03.582 "supported_io_types": { 00:39:03.582 "read": true, 00:39:03.582 "write": true, 00:39:03.582 "unmap": true, 00:39:03.582 "flush": true, 00:39:03.582 "reset": true, 00:39:03.582 "nvme_admin": false, 00:39:03.582 "nvme_io": false, 00:39:03.582 "nvme_io_md": false, 00:39:03.582 "write_zeroes": true, 00:39:03.582 "zcopy": true, 00:39:03.582 "get_zone_info": false, 00:39:03.582 "zone_management": false, 00:39:03.582 "zone_append": false, 00:39:03.582 "compare": false, 00:39:03.582 "compare_and_write": false, 00:39:03.582 "abort": true, 00:39:03.582 "seek_hole": false, 00:39:03.582 "seek_data": false, 00:39:03.582 "copy": true, 00:39:03.582 "nvme_iov_md": false 00:39:03.582 }, 00:39:03.582 "memory_domains": [ 00:39:03.582 { 00:39:03.582 "dma_device_id": "system", 00:39:03.582 "dma_device_type": 1 00:39:03.582 }, 00:39:03.582 { 00:39:03.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:03.582 "dma_device_type": 2 00:39:03.582 } 00:39:03.582 ], 00:39:03.582 "driver_specific": { 00:39:03.582 "passthru": { 00:39:03.582 "name": "pt1", 00:39:03.582 "base_bdev_name": "malloc1" 00:39:03.582 } 00:39:03.582 } 00:39:03.582 }' 00:39:03.582 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:03.582 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:03.582 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:39:03.582 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:03.582 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:03.841 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:39:03.841 21:52:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:03.841 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:39:03.841 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:39:03.841 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:03.841 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:03.841 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:39:03.841 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:03.841 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:39:03.841 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:04.101 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:04.101 "name": "pt2", 00:39:04.101 "aliases": [ 00:39:04.101 "00000000-0000-0000-0000-000000000002" 00:39:04.101 ], 00:39:04.101 "product_name": "passthru", 00:39:04.101 "block_size": 4128, 00:39:04.101 "num_blocks": 8192, 00:39:04.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:04.101 "md_size": 32, 00:39:04.101 "md_interleave": true, 00:39:04.101 "dif_type": 0, 00:39:04.101 "assigned_rate_limits": { 00:39:04.101 "rw_ios_per_sec": 0, 00:39:04.101 "rw_mbytes_per_sec": 0, 00:39:04.101 "r_mbytes_per_sec": 0, 00:39:04.101 "w_mbytes_per_sec": 0 00:39:04.101 }, 00:39:04.101 "claimed": true, 00:39:04.101 "claim_type": "exclusive_write", 00:39:04.101 "zoned": false, 00:39:04.101 "supported_io_types": { 00:39:04.101 "read": true, 00:39:04.101 "write": true, 00:39:04.101 "unmap": true, 00:39:04.101 "flush": true, 00:39:04.101 "reset": true, 00:39:04.101 "nvme_admin": false, 00:39:04.101 "nvme_io": false, 00:39:04.101 "nvme_io_md": false, 00:39:04.101 "write_zeroes": true, 00:39:04.101 "zcopy": true, 00:39:04.101 "get_zone_info": false, 00:39:04.101 "zone_management": false, 00:39:04.101 "zone_append": false, 00:39:04.101 "compare": false, 00:39:04.101 "compare_and_write": false, 00:39:04.101 "abort": true, 00:39:04.101 "seek_hole": false, 00:39:04.101 "seek_data": false, 00:39:04.101 "copy": true, 00:39:04.101 "nvme_iov_md": false 00:39:04.101 }, 00:39:04.101 "memory_domains": [ 00:39:04.101 { 00:39:04.101 "dma_device_id": "system", 00:39:04.101 "dma_device_type": 1 00:39:04.101 }, 00:39:04.101 { 00:39:04.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:04.101 "dma_device_type": 2 00:39:04.101 } 00:39:04.101 ], 00:39:04.101 "driver_specific": { 00:39:04.101 "passthru": { 00:39:04.101 "name": "pt2", 00:39:04.101 "base_bdev_name": "malloc2" 00:39:04.101 } 00:39:04.101 } 00:39:04.101 }' 00:39:04.101 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:04.101 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:04.360 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:39:04.360 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:04.361 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:04.361 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:39:04.361 21:52:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:04.361 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:04.361 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:39:04.361 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:04.620 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:04.620 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:39:04.620 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:04.620 21:52:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:39:04.880 [2024-07-15 21:52:38.005831] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' bb65c732-f929-4047-bf56-400ff0a1c26f '!=' bb65c732-f929-4047-bf56-400ff0a1c26f ']' 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:04.880 [2024-07-15 21:52:38.201352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:04.880 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:05.140 21:52:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:05.140 "name": "raid_bdev1", 00:39:05.140 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:39:05.140 "strip_size_kb": 0, 00:39:05.140 "state": "online", 00:39:05.140 "raid_level": "raid1", 00:39:05.140 "superblock": true, 00:39:05.140 "num_base_bdevs": 2, 00:39:05.140 "num_base_bdevs_discovered": 1, 00:39:05.140 "num_base_bdevs_operational": 1, 00:39:05.140 "base_bdevs_list": [ 00:39:05.140 { 00:39:05.140 "name": null, 00:39:05.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:05.140 "is_configured": false, 00:39:05.140 "data_offset": 256, 00:39:05.140 "data_size": 7936 00:39:05.140 }, 00:39:05.140 { 00:39:05.140 "name": "pt2", 00:39:05.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:05.140 "is_configured": true, 00:39:05.140 "data_offset": 256, 00:39:05.140 "data_size": 7936 00:39:05.140 } 00:39:05.140 ] 00:39:05.140 }' 00:39:05.140 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:05.140 21:52:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.710 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:05.970 [2024-07-15 21:52:39.187585] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:05.970 [2024-07-15 21:52:39.187713] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:05.970 [2024-07-15 21:52:39.187815] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:05.970 [2024-07-15 21:52:39.187881] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:05.970 [2024-07-15 21:52:39.187898] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:39:05.970 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:05.970 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:39:06.255 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:39:06.255 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:39:06.255 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:39:06.255 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:39:06.256 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:06.256 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:39:06.256 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:39:06.256 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:39:06.256 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:39:06.256 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@518 -- # i=1 00:39:06.256 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:06.515 [2024-07-15 21:52:39.790531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:06.515 [2024-07-15 21:52:39.790752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:06.515 [2024-07-15 21:52:39.790801] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:39:06.515 [2024-07-15 21:52:39.790851] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:06.515 [2024-07-15 21:52:39.792992] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:06.515 [2024-07-15 21:52:39.793093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:06.515 [2024-07-15 21:52:39.793216] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:06.515 [2024-07-15 21:52:39.793321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:06.515 [2024-07-15 21:52:39.793448] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:39:06.515 [2024-07-15 21:52:39.793479] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:06.515 [2024-07-15 21:52:39.793569] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:06.515 [2024-07-15 21:52:39.793679] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:39:06.515 [2024-07-15 21:52:39.793714] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:39:06.515 [2024-07-15 21:52:39.793798] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:06.515 pt2 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:06.515 21:52:39 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:06.774 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:06.774 "name": "raid_bdev1", 00:39:06.774 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:39:06.774 "strip_size_kb": 0, 00:39:06.774 "state": "online", 00:39:06.774 "raid_level": "raid1", 00:39:06.774 "superblock": true, 00:39:06.774 "num_base_bdevs": 2, 00:39:06.774 "num_base_bdevs_discovered": 1, 00:39:06.774 "num_base_bdevs_operational": 1, 00:39:06.774 "base_bdevs_list": [ 00:39:06.774 { 00:39:06.774 "name": null, 00:39:06.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:06.774 "is_configured": false, 00:39:06.774 "data_offset": 256, 00:39:06.774 "data_size": 7936 00:39:06.774 }, 00:39:06.774 { 00:39:06.774 "name": "pt2", 00:39:06.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:06.774 "is_configured": true, 00:39:06.774 "data_offset": 256, 00:39:06.774 "data_size": 7936 00:39:06.774 } 00:39:06.774 ] 00:39:06.774 }' 00:39:06.774 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:06.774 21:52:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:07.339 21:52:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:07.598 [2024-07-15 21:52:40.824698] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:07.598 [2024-07-15 21:52:40.824831] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:07.598 [2024-07-15 21:52:40.824936] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:07.598 [2024-07-15 21:52:40.824997] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:07.598 [2024-07-15 21:52:40.825019] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:39:07.598 21:52:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:39:07.598 21:52:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:07.855 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:39:07.855 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:39:07.855 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:39:07.855 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:07.855 [2024-07-15 21:52:41.227977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:07.855 [2024-07-15 21:52:41.228148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:07.855 [2024-07-15 21:52:41.228204] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:07.855 [2024-07-15 21:52:41.228245] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:07.855 [2024-07-15 21:52:41.230494] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:07.855 [2024-07-15 21:52:41.230596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:07.855 [2024-07-15 21:52:41.230687] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:07.855 [2024-07-15 21:52:41.230784] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:07.856 [2024-07-15 21:52:41.230980] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:07.856 [2024-07-15 21:52:41.231018] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:07.856 [2024-07-15 21:52:41.231048] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:39:07.856 [2024-07-15 21:52:41.231159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:07.856 [2024-07-15 21:52:41.231261] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:39:07.856 [2024-07-15 21:52:41.231309] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:07.856 [2024-07-15 21:52:41.231393] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:07.856 [2024-07-15 21:52:41.231490] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:39:07.856 [2024-07-15 21:52:41.231534] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:39:07.856 [2024-07-15 21:52:41.231605] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:07.856 pt1 00:39:08.162 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:39:08.162 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:08.163 21:52:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:08.163 "name": "raid_bdev1", 00:39:08.163 "uuid": "bb65c732-f929-4047-bf56-400ff0a1c26f", 00:39:08.163 "strip_size_kb": 0, 00:39:08.163 "state": "online", 00:39:08.163 "raid_level": "raid1", 00:39:08.163 "superblock": true, 00:39:08.163 "num_base_bdevs": 2, 00:39:08.163 "num_base_bdevs_discovered": 1, 00:39:08.163 "num_base_bdevs_operational": 1, 00:39:08.163 "base_bdevs_list": [ 00:39:08.163 { 00:39:08.163 "name": null, 00:39:08.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:08.163 "is_configured": false, 00:39:08.163 "data_offset": 256, 00:39:08.163 "data_size": 7936 00:39:08.163 }, 00:39:08.163 { 00:39:08.163 "name": "pt2", 00:39:08.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:08.163 "is_configured": true, 00:39:08.163 "data_offset": 256, 00:39:08.163 "data_size": 7936 00:39:08.163 } 00:39:08.163 ] 00:39:08.163 }' 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:08.163 21:52:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:39:09.097 [2024-07-15 21:52:42.454246] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' bb65c732-f929-4047-bf56-400ff0a1c26f '!=' bb65c732-f929-4047-bf56-400ff0a1c26f ']' 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 164785 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 164785 ']' 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 164785 00:39:09.097 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:39:09.401 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:09.401 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164785 00:39:09.401 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:09.401 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:09.401 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164785' 00:39:09.401 killing process with pid 164785 00:39:09.401 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 
-- # kill 164785 00:39:09.401 21:52:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 164785 00:39:09.401 [2024-07-15 21:52:42.499663] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:09.401 [2024-07-15 21:52:42.499761] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:09.401 [2024-07-15 21:52:42.499852] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:09.401 [2024-07-15 21:52:42.499880] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:39:09.401 [2024-07-15 21:52:42.696293] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:10.779 ************************************ 00:39:10.779 END TEST raid_superblock_test_md_interleaved 00:39:10.779 ************************************ 00:39:10.779 21:52:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:39:10.779 00:39:10.779 real 0m16.057s 00:39:10.779 user 0m28.879s 00:39:10.779 sys 0m2.239s 00:39:10.779 21:52:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:10.779 21:52:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.779 21:52:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:39:10.779 21:52:44 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:39:10.779 21:52:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:39:10.779 21:52:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:10.779 21:52:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:10.779 ************************************ 00:39:10.779 START TEST raid_rebuild_test_sb_md_interleaved 00:39:10.779 ************************************ 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:10.779 21:52:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=165320 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 165320 /var/tmp/spdk-raid.sock 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 165320 ']' 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:10.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:10.779 21:52:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.038 [2024-07-15 21:52:44.169959] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:39:11.038 [2024-07-15 21:52:44.170179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165320 ] 00:39:11.038 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:11.038 Zero copy mechanism will not be used. 
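The rebuild test starting above runs against a dedicated bdevperf application rather than the unittest binary. It is launched with the option string recorded in the trace; reading those flags the usual bdevperf way (not stated in the log itself), that is a 60-second 50/50 random read/write workload at queue depth 2 with 3 MiB I/Os against raid_bdev1, started idle (-z) so the bdev stack can be built over RPC first, with -L bdev_raid enabling the DEBUG lines and the 3 MiB I/O size triggering the zero-copy notice. A rough equivalent of that startup, with a polling loop standing in for the suite's waitforlisten helper:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Start bdevperf on its own RPC socket; no I/O runs until the raid bdev exists.
    "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    bdevperf_pid=$!
    # Wait until the application answers on the socket before issuing bdev RPCs.
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done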
00:39:11.038 [2024-07-15 21:52:44.314938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.297 [2024-07-15 21:52:44.569479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.556 [2024-07-15 21:52:44.811874] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:11.815 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:11.815 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:39:11.815 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:39:11.815 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:39:12.075 BaseBdev1_malloc 00:39:12.075 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:12.075 [2024-07-15 21:52:45.435484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:12.075 [2024-07-15 21:52:45.435725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:12.075 [2024-07-15 21:52:45.435782] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:39:12.075 [2024-07-15 21:52:45.435823] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:12.075 [2024-07-15 21:52:45.437941] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:12.075 [2024-07-15 21:52:45.438017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:12.075 BaseBdev1 00:39:12.075 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:39:12.075 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:39:12.645 BaseBdev2_malloc 00:39:12.645 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:12.645 [2024-07-15 21:52:45.948256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:12.645 [2024-07-15 21:52:45.948521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:12.645 [2024-07-15 21:52:45.948631] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:39:12.645 [2024-07-15 21:52:45.948694] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:12.645 [2024-07-15 21:52:45.951219] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:12.645 [2024-07-15 21:52:45.951317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:12.645 BaseBdev2 00:39:12.645 21:52:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:39:12.904 spare_malloc 
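Each base device in this rebuild test follows the same construction as the superblock test earlier: a malloc bdev with 32 bytes of interleaved metadata per 4096-byte block, wrapped in a passthru bdev so it can later be removed and re-added by name; the spare created next is additionally routed through a delay bdev before the superblock-enabled raid1 array is assembled, as the trace below shows. A sketch of the pattern using the RPC calls recorded above (treating the positional malloc arguments as capacity in MiB and block size in bytes is an assumption of this sketch):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Two interleaved-metadata malloc bdevs, each hidden behind a passthru bdev.
    for i in 1 2; do
        $rpc bdev_malloc_create 32 4096 -m 32 -i -b "BaseBdev${i}_malloc"
        $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # Backing store for the spare; the delay and passthru layers follow in the trace.
    $rpc bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc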
00:39:12.904 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:13.163 spare_delay 00:39:13.163 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:13.423 [2024-07-15 21:52:46.627619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:13.423 [2024-07-15 21:52:46.627859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:13.423 [2024-07-15 21:52:46.627924] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:39:13.423 [2024-07-15 21:52:46.627974] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:13.423 [2024-07-15 21:52:46.630239] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:13.423 [2024-07-15 21:52:46.630359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:13.423 spare 00:39:13.423 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:39:13.682 [2024-07-15 21:52:46.839373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:13.682 [2024-07-15 21:52:46.841789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:13.682 [2024-07-15 21:52:46.842118] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:39:13.682 [2024-07-15 21:52:46.842166] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:13.682 [2024-07-15 21:52:46.842360] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:39:13.682 [2024-07-15 21:52:46.842490] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:39:13.682 [2024-07-15 21:52:46.842521] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:39:13.682 [2024-07-15 21:52:46.842612] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:13.682 21:52:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:13.682 21:52:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:13.941 21:52:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:13.941 "name": "raid_bdev1", 00:39:13.941 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:13.941 "strip_size_kb": 0, 00:39:13.941 "state": "online", 00:39:13.941 "raid_level": "raid1", 00:39:13.941 "superblock": true, 00:39:13.941 "num_base_bdevs": 2, 00:39:13.941 "num_base_bdevs_discovered": 2, 00:39:13.941 "num_base_bdevs_operational": 2, 00:39:13.941 "base_bdevs_list": [ 00:39:13.941 { 00:39:13.941 "name": "BaseBdev1", 00:39:13.941 "uuid": "651777a3-30a5-5cb2-a094-d30e9f4f537b", 00:39:13.941 "is_configured": true, 00:39:13.941 "data_offset": 256, 00:39:13.941 "data_size": 7936 00:39:13.941 }, 00:39:13.941 { 00:39:13.941 "name": "BaseBdev2", 00:39:13.941 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:13.941 "is_configured": true, 00:39:13.941 "data_offset": 256, 00:39:13.941 "data_size": 7936 00:39:13.941 } 00:39:13.941 ] 00:39:13.941 }' 00:39:13.941 21:52:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:13.941 21:52:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:14.508 21:52:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:14.508 21:52:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:39:14.508 [2024-07-15 21:52:47.881866] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:14.769 21:52:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:39:14.769 21:52:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:14.769 21:52:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:14.769 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:39:14.769 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:39:14.769 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:39:14.769 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:39:15.029 [2024-07-15 21:52:48.292811] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:15.029 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.288 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:15.288 "name": "raid_bdev1", 00:39:15.288 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:15.288 "strip_size_kb": 0, 00:39:15.288 "state": "online", 00:39:15.288 "raid_level": "raid1", 00:39:15.288 "superblock": true, 00:39:15.288 "num_base_bdevs": 2, 00:39:15.288 "num_base_bdevs_discovered": 1, 00:39:15.288 "num_base_bdevs_operational": 1, 00:39:15.288 "base_bdevs_list": [ 00:39:15.288 { 00:39:15.288 "name": null, 00:39:15.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.288 "is_configured": false, 00:39:15.288 "data_offset": 256, 00:39:15.288 "data_size": 7936 00:39:15.288 }, 00:39:15.288 { 00:39:15.288 "name": "BaseBdev2", 00:39:15.288 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:15.288 "is_configured": true, 00:39:15.288 "data_offset": 256, 00:39:15.288 "data_size": 7936 00:39:15.288 } 00:39:15.288 ] 00:39:15.288 }' 00:39:15.288 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:15.288 21:52:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:15.859 21:52:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:16.117 [2024-07-15 21:52:49.382953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:16.117 [2024-07-15 21:52:49.402459] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:39:16.117 [2024-07-15 21:52:49.404602] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:16.117 21:52:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:39:17.053 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:17.053 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
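Every verify_raid_bdev_state call traced here reduces to the same query: dump all raid bdevs over RPC, select the raid_bdev1 entry with jq, and compare the fields of interest. A hedged sketch of that pattern follows; the RPC command and jq filter are the ones shown in the trace, the standalone variable usage is illustrative, and the expected values in the comments match the JSON dump above, taken right after BaseBdev1 was removed.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Fetch the descriptor for raid_bdev1, exactly as bdev_raid.sh@126 does.
raid_bdev_info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# After bdev_raid_remove_base_bdev BaseBdev1 the array stays online, but only one
# base bdev is discovered/operational and the removed slot is reported with a null name.
jq -r '.state'                      <<< "$raid_bdev_info"   # online
jq -r '.num_base_bdevs_discovered'  <<< "$raid_bdev_info"   # 1
jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info"   # 1
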
00:39:17.053 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:17.053 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:17.053 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:17.053 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:17.053 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.310 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:17.310 "name": "raid_bdev1", 00:39:17.310 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:17.310 "strip_size_kb": 0, 00:39:17.310 "state": "online", 00:39:17.310 "raid_level": "raid1", 00:39:17.310 "superblock": true, 00:39:17.310 "num_base_bdevs": 2, 00:39:17.310 "num_base_bdevs_discovered": 2, 00:39:17.310 "num_base_bdevs_operational": 2, 00:39:17.310 "process": { 00:39:17.310 "type": "rebuild", 00:39:17.310 "target": "spare", 00:39:17.310 "progress": { 00:39:17.310 "blocks": 3072, 00:39:17.310 "percent": 38 00:39:17.310 } 00:39:17.310 }, 00:39:17.310 "base_bdevs_list": [ 00:39:17.310 { 00:39:17.310 "name": "spare", 00:39:17.310 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:17.310 "is_configured": true, 00:39:17.310 "data_offset": 256, 00:39:17.310 "data_size": 7936 00:39:17.310 }, 00:39:17.310 { 00:39:17.310 "name": "BaseBdev2", 00:39:17.310 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:17.310 "is_configured": true, 00:39:17.310 "data_offset": 256, 00:39:17.310 "data_size": 7936 00:39:17.310 } 00:39:17.310 ] 00:39:17.310 }' 00:39:17.310 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:17.568 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:17.568 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:17.568 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:17.568 21:52:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:17.826 [2024-07-15 21:52:51.007285] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:17.826 [2024-07-15 21:52:51.015160] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:17.826 [2024-07-15 21:52:51.015272] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:17.826 [2024-07-15 21:52:51.015303] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:17.826 [2024-07-15 21:52:51.015331] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:17.826 21:52:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:17.826 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:17.827 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.084 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:18.084 "name": "raid_bdev1", 00:39:18.084 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:18.084 "strip_size_kb": 0, 00:39:18.084 "state": "online", 00:39:18.084 "raid_level": "raid1", 00:39:18.084 "superblock": true, 00:39:18.084 "num_base_bdevs": 2, 00:39:18.084 "num_base_bdevs_discovered": 1, 00:39:18.084 "num_base_bdevs_operational": 1, 00:39:18.084 "base_bdevs_list": [ 00:39:18.084 { 00:39:18.084 "name": null, 00:39:18.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.084 "is_configured": false, 00:39:18.084 "data_offset": 256, 00:39:18.084 "data_size": 7936 00:39:18.084 }, 00:39:18.084 { 00:39:18.084 "name": "BaseBdev2", 00:39:18.084 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:18.084 "is_configured": true, 00:39:18.084 "data_offset": 256, 00:39:18.084 "data_size": 7936 00:39:18.084 } 00:39:18.084 ] 00:39:18.084 }' 00:39:18.084 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:18.084 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:18.652 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:18.652 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:18.652 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:18.652 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:18.652 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:18.652 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:18.652 21:52:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.912 21:52:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- 
# raid_bdev_info='{ 00:39:18.912 "name": "raid_bdev1", 00:39:18.912 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:18.912 "strip_size_kb": 0, 00:39:18.912 "state": "online", 00:39:18.912 "raid_level": "raid1", 00:39:18.912 "superblock": true, 00:39:18.912 "num_base_bdevs": 2, 00:39:18.912 "num_base_bdevs_discovered": 1, 00:39:18.912 "num_base_bdevs_operational": 1, 00:39:18.912 "base_bdevs_list": [ 00:39:18.912 { 00:39:18.912 "name": null, 00:39:18.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.912 "is_configured": false, 00:39:18.912 "data_offset": 256, 00:39:18.912 "data_size": 7936 00:39:18.912 }, 00:39:18.912 { 00:39:18.912 "name": "BaseBdev2", 00:39:18.912 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:18.912 "is_configured": true, 00:39:18.912 "data_offset": 256, 00:39:18.912 "data_size": 7936 00:39:18.912 } 00:39:18.912 ] 00:39:18.912 }' 00:39:18.912 21:52:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:18.912 21:52:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:18.912 21:52:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:19.171 21:52:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:19.171 21:52:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:19.171 [2024-07-15 21:52:52.483004] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:19.171 [2024-07-15 21:52:52.502785] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:19.171 [2024-07-15 21:52:52.505052] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:19.171 21:52:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:20.565 "name": "raid_bdev1", 00:39:20.565 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:20.565 "strip_size_kb": 0, 00:39:20.565 "state": "online", 00:39:20.565 "raid_level": "raid1", 00:39:20.565 "superblock": true, 00:39:20.565 "num_base_bdevs": 2, 00:39:20.565 "num_base_bdevs_discovered": 2, 00:39:20.565 "num_base_bdevs_operational": 2, 00:39:20.565 
"process": { 00:39:20.565 "type": "rebuild", 00:39:20.565 "target": "spare", 00:39:20.565 "progress": { 00:39:20.565 "blocks": 2816, 00:39:20.565 "percent": 35 00:39:20.565 } 00:39:20.565 }, 00:39:20.565 "base_bdevs_list": [ 00:39:20.565 { 00:39:20.565 "name": "spare", 00:39:20.565 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:20.565 "is_configured": true, 00:39:20.565 "data_offset": 256, 00:39:20.565 "data_size": 7936 00:39:20.565 }, 00:39:20.565 { 00:39:20.565 "name": "BaseBdev2", 00:39:20.565 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:20.565 "is_configured": true, 00:39:20.565 "data_offset": 256, 00:39:20.565 "data_size": 7936 00:39:20.565 } 00:39:20.565 ] 00:39:20.565 }' 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:39:20.565 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1426 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:20.565 21:52:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:20.823 21:52:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:20.823 "name": "raid_bdev1", 00:39:20.823 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:20.823 "strip_size_kb": 0, 00:39:20.823 "state": "online", 00:39:20.824 "raid_level": "raid1", 00:39:20.824 "superblock": true, 00:39:20.824 "num_base_bdevs": 2, 00:39:20.824 
"num_base_bdevs_discovered": 2, 00:39:20.824 "num_base_bdevs_operational": 2, 00:39:20.824 "process": { 00:39:20.824 "type": "rebuild", 00:39:20.824 "target": "spare", 00:39:20.824 "progress": { 00:39:20.824 "blocks": 3584, 00:39:20.824 "percent": 45 00:39:20.824 } 00:39:20.824 }, 00:39:20.824 "base_bdevs_list": [ 00:39:20.824 { 00:39:20.824 "name": "spare", 00:39:20.824 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:20.824 "is_configured": true, 00:39:20.824 "data_offset": 256, 00:39:20.824 "data_size": 7936 00:39:20.824 }, 00:39:20.824 { 00:39:20.824 "name": "BaseBdev2", 00:39:20.824 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:20.824 "is_configured": true, 00:39:20.824 "data_offset": 256, 00:39:20.824 "data_size": 7936 00:39:20.824 } 00:39:20.824 ] 00:39:20.824 }' 00:39:20.824 21:52:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:20.824 21:52:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:20.824 21:52:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:20.824 21:52:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:20.824 21:52:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:22.203 "name": "raid_bdev1", 00:39:22.203 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:22.203 "strip_size_kb": 0, 00:39:22.203 "state": "online", 00:39:22.203 "raid_level": "raid1", 00:39:22.203 "superblock": true, 00:39:22.203 "num_base_bdevs": 2, 00:39:22.203 "num_base_bdevs_discovered": 2, 00:39:22.203 "num_base_bdevs_operational": 2, 00:39:22.203 "process": { 00:39:22.203 "type": "rebuild", 00:39:22.203 "target": "spare", 00:39:22.203 "progress": { 00:39:22.203 "blocks": 7168, 00:39:22.203 "percent": 90 00:39:22.203 } 00:39:22.203 }, 00:39:22.203 "base_bdevs_list": [ 00:39:22.203 { 00:39:22.203 "name": "spare", 00:39:22.203 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:22.203 "is_configured": true, 00:39:22.203 "data_offset": 256, 00:39:22.203 "data_size": 7936 00:39:22.203 }, 00:39:22.203 { 00:39:22.203 "name": "BaseBdev2", 00:39:22.203 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 
00:39:22.203 "is_configured": true, 00:39:22.203 "data_offset": 256, 00:39:22.203 "data_size": 7936 00:39:22.203 } 00:39:22.203 ] 00:39:22.203 }' 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:22.203 21:52:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:22.462 [2024-07-15 21:52:55.625227] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:22.462 [2024-07-15 21:52:55.625443] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:22.462 [2024-07-15 21:52:55.625675] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:23.408 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:23.408 "name": "raid_bdev1", 00:39:23.408 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:23.408 "strip_size_kb": 0, 00:39:23.408 "state": "online", 00:39:23.408 "raid_level": "raid1", 00:39:23.408 "superblock": true, 00:39:23.408 "num_base_bdevs": 2, 00:39:23.408 "num_base_bdevs_discovered": 2, 00:39:23.408 "num_base_bdevs_operational": 2, 00:39:23.409 "base_bdevs_list": [ 00:39:23.409 { 00:39:23.409 "name": "spare", 00:39:23.409 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:23.409 "is_configured": true, 00:39:23.409 "data_offset": 256, 00:39:23.409 "data_size": 7936 00:39:23.409 }, 00:39:23.409 { 00:39:23.409 "name": "BaseBdev2", 00:39:23.409 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:23.409 "is_configured": true, 00:39:23.409 "data_offset": 256, 00:39:23.409 "data_size": 7936 00:39:23.409 } 00:39:23.409 ] 00:39:23.409 }' 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:23.409 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:23.669 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:23.669 "name": "raid_bdev1", 00:39:23.669 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:23.669 "strip_size_kb": 0, 00:39:23.669 "state": "online", 00:39:23.669 "raid_level": "raid1", 00:39:23.669 "superblock": true, 00:39:23.669 "num_base_bdevs": 2, 00:39:23.669 "num_base_bdevs_discovered": 2, 00:39:23.669 "num_base_bdevs_operational": 2, 00:39:23.669 "base_bdevs_list": [ 00:39:23.669 { 00:39:23.669 "name": "spare", 00:39:23.669 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:23.669 "is_configured": true, 00:39:23.669 "data_offset": 256, 00:39:23.669 "data_size": 7936 00:39:23.669 }, 00:39:23.669 { 00:39:23.669 "name": "BaseBdev2", 00:39:23.669 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:23.669 "is_configured": true, 00:39:23.669 "data_offset": 256, 00:39:23.669 "data_size": 7936 00:39:23.669 } 00:39:23.669 ] 00:39:23.669 }' 00:39:23.669 21:52:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:23.669 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:23.929 21:52:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:23.929 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.188 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:24.188 "name": "raid_bdev1", 00:39:24.188 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:24.188 "strip_size_kb": 0, 00:39:24.188 "state": "online", 00:39:24.188 "raid_level": "raid1", 00:39:24.188 "superblock": true, 00:39:24.188 "num_base_bdevs": 2, 00:39:24.188 "num_base_bdevs_discovered": 2, 00:39:24.188 "num_base_bdevs_operational": 2, 00:39:24.188 "base_bdevs_list": [ 00:39:24.188 { 00:39:24.188 "name": "spare", 00:39:24.188 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:24.188 "is_configured": true, 00:39:24.188 "data_offset": 256, 00:39:24.188 "data_size": 7936 00:39:24.188 }, 00:39:24.188 { 00:39:24.188 "name": "BaseBdev2", 00:39:24.188 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:24.188 "is_configured": true, 00:39:24.188 "data_offset": 256, 00:39:24.188 "data_size": 7936 00:39:24.188 } 00:39:24.188 ] 00:39:24.188 }' 00:39:24.188 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:24.188 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:24.756 21:52:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:24.756 [2024-07-15 21:52:58.120559] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:24.756 [2024-07-15 21:52:58.120666] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:24.756 [2024-07-15 21:52:58.120779] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:24.756 [2024-07-15 21:52:58.120888] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:24.756 [2024-07-15 21:52:58.120926] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:39:25.016 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:25.016 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:39:25.016 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:39:25.016 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:39:25.016 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:39:25.016 21:52:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:25.276 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:25.535 [2024-07-15 21:52:58.739475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:25.535 [2024-07-15 21:52:58.739636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:25.535 [2024-07-15 21:52:58.739696] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:39:25.535 [2024-07-15 21:52:58.739733] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:25.535 [2024-07-15 21:52:58.741785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:25.535 [2024-07-15 21:52:58.741889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:25.535 [2024-07-15 21:52:58.741993] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:25.535 [2024-07-15 21:52:58.742076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:25.535 [2024-07-15 21:52:58.742234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:25.535 spare 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:25.535 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:25.535 [2024-07-15 21:52:58.842174] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:39:25.535 [2024-07-15 21:52:58.842278] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:25.535 [2024-07-15 21:52:58.842452] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:39:25.535 [2024-07-15 21:52:58.842573] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:39:25.535 [2024-07-15 21:52:58.842605] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:39:25.535 [2024-07-15 21:52:58.842697] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:25.793 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:25.793 "name": "raid_bdev1", 00:39:25.793 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:25.793 "strip_size_kb": 0, 00:39:25.793 "state": "online", 00:39:25.793 "raid_level": "raid1", 00:39:25.793 "superblock": true, 00:39:25.793 "num_base_bdevs": 2, 00:39:25.793 "num_base_bdevs_discovered": 2, 00:39:25.793 "num_base_bdevs_operational": 2, 00:39:25.793 "base_bdevs_list": [ 00:39:25.793 { 00:39:25.793 "name": "spare", 00:39:25.793 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:25.793 "is_configured": true, 00:39:25.793 "data_offset": 256, 00:39:25.793 "data_size": 7936 00:39:25.793 }, 00:39:25.793 { 00:39:25.793 "name": "BaseBdev2", 00:39:25.793 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:25.793 "is_configured": true, 00:39:25.793 "data_offset": 256, 00:39:25.793 "data_size": 7936 00:39:25.793 } 00:39:25.793 ] 00:39:25.793 }' 00:39:25.793 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:25.793 21:52:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:26.357 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:26.357 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:26.357 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:26.357 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:26.357 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:26.357 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.357 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:26.615 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:26.615 "name": "raid_bdev1", 00:39:26.615 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:26.615 "strip_size_kb": 0, 00:39:26.615 "state": "online", 00:39:26.615 "raid_level": "raid1", 00:39:26.615 "superblock": true, 00:39:26.615 "num_base_bdevs": 2, 00:39:26.615 "num_base_bdevs_discovered": 2, 00:39:26.615 "num_base_bdevs_operational": 2, 00:39:26.615 "base_bdevs_list": [ 00:39:26.615 { 00:39:26.615 "name": "spare", 00:39:26.615 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:26.615 "is_configured": true, 00:39:26.615 "data_offset": 256, 00:39:26.615 "data_size": 7936 00:39:26.615 }, 00:39:26.615 { 00:39:26.615 "name": "BaseBdev2", 00:39:26.615 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:26.615 "is_configured": true, 00:39:26.615 "data_offset": 256, 00:39:26.615 "data_size": 7936 00:39:26.615 } 00:39:26.615 ] 00:39:26.615 }' 00:39:26.615 21:52:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:26.615 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:26.615 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:26.615 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:26.615 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.615 21:52:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:26.872 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:39:26.872 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:27.130 [2024-07-15 21:53:00.264934] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:27.130 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:27.130 "name": "raid_bdev1", 00:39:27.130 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:27.130 "strip_size_kb": 0, 00:39:27.130 "state": "online", 00:39:27.130 "raid_level": "raid1", 00:39:27.130 "superblock": true, 00:39:27.130 "num_base_bdevs": 2, 00:39:27.130 "num_base_bdevs_discovered": 1, 00:39:27.130 "num_base_bdevs_operational": 1, 00:39:27.130 "base_bdevs_list": [ 00:39:27.130 { 00:39:27.130 "name": null, 00:39:27.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.130 "is_configured": false, 00:39:27.130 "data_offset": 256, 00:39:27.130 "data_size": 7936 00:39:27.130 }, 
00:39:27.130 { 00:39:27.130 "name": "BaseBdev2", 00:39:27.131 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:27.131 "is_configured": true, 00:39:27.131 "data_offset": 256, 00:39:27.131 "data_size": 7936 00:39:27.131 } 00:39:27.131 ] 00:39:27.131 }' 00:39:27.131 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:27.131 21:53:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:28.066 21:53:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:28.066 [2024-07-15 21:53:01.271348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:28.066 [2024-07-15 21:53:01.271711] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:28.066 [2024-07-15 21:53:01.271759] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:39:28.066 [2024-07-15 21:53:01.271870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:28.066 [2024-07-15 21:53:01.290113] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:39:28.066 [2024-07-15 21:53:01.292414] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:28.066 21:53:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:39:29.006 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:29.006 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:29.006 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:29.006 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:29.006 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:29.006 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:29.006 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:29.270 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:29.270 "name": "raid_bdev1", 00:39:29.270 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:29.270 "strip_size_kb": 0, 00:39:29.270 "state": "online", 00:39:29.270 "raid_level": "raid1", 00:39:29.270 "superblock": true, 00:39:29.270 "num_base_bdevs": 2, 00:39:29.270 "num_base_bdevs_discovered": 2, 00:39:29.270 "num_base_bdevs_operational": 2, 00:39:29.270 "process": { 00:39:29.270 "type": "rebuild", 00:39:29.270 "target": "spare", 00:39:29.270 "progress": { 00:39:29.270 "blocks": 2816, 00:39:29.270 "percent": 35 00:39:29.270 } 00:39:29.270 }, 00:39:29.270 "base_bdevs_list": [ 00:39:29.270 { 00:39:29.270 "name": "spare", 00:39:29.270 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:29.270 "is_configured": true, 00:39:29.270 "data_offset": 256, 00:39:29.270 "data_size": 7936 00:39:29.270 }, 00:39:29.270 { 
00:39:29.270 "name": "BaseBdev2", 00:39:29.270 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:29.270 "is_configured": true, 00:39:29.270 "data_offset": 256, 00:39:29.270 "data_size": 7936 00:39:29.270 } 00:39:29.270 ] 00:39:29.270 }' 00:39:29.270 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:29.270 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:29.270 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:29.270 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:29.270 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:29.528 [2024-07-15 21:53:02.821573] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:29.786 [2024-07-15 21:53:02.908644] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:29.786 [2024-07-15 21:53:02.908876] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:29.786 [2024-07-15 21:53:02.908937] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:29.786 [2024-07-15 21:53:02.908974] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:29.786 21:53:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:29.786 21:53:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:29.786 "name": "raid_bdev1", 00:39:29.786 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:29.786 "strip_size_kb": 0, 00:39:29.786 "state": "online", 00:39:29.786 "raid_level": "raid1", 00:39:29.786 "superblock": true, 00:39:29.786 "num_base_bdevs": 2, 
00:39:29.786 "num_base_bdevs_discovered": 1, 00:39:29.786 "num_base_bdevs_operational": 1, 00:39:29.786 "base_bdevs_list": [ 00:39:29.786 { 00:39:29.786 "name": null, 00:39:29.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.786 "is_configured": false, 00:39:29.786 "data_offset": 256, 00:39:29.786 "data_size": 7936 00:39:29.786 }, 00:39:29.786 { 00:39:29.786 "name": "BaseBdev2", 00:39:29.786 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:29.786 "is_configured": true, 00:39:29.786 "data_offset": 256, 00:39:29.786 "data_size": 7936 00:39:29.786 } 00:39:29.786 ] 00:39:29.786 }' 00:39:29.786 21:53:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:29.786 21:53:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:30.421 21:53:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:30.680 [2024-07-15 21:53:03.988558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:30.680 [2024-07-15 21:53:03.988775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:30.680 [2024-07-15 21:53:03.988826] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:39:30.680 [2024-07-15 21:53:03.988877] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:30.680 [2024-07-15 21:53:03.989190] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:30.680 [2024-07-15 21:53:03.989262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:30.680 [2024-07-15 21:53:03.989420] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:30.680 [2024-07-15 21:53:03.989459] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:30.680 [2024-07-15 21:53:03.989485] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:39:30.680 [2024-07-15 21:53:03.989554] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:30.680 [2024-07-15 21:53:04.007607] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:39:30.680 spare 00:39:30.680 [2024-07-15 21:53:04.009920] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:30.680 21:53:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:32.052 "name": "raid_bdev1", 00:39:32.052 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:32.052 "strip_size_kb": 0, 00:39:32.052 "state": "online", 00:39:32.052 "raid_level": "raid1", 00:39:32.052 "superblock": true, 00:39:32.052 "num_base_bdevs": 2, 00:39:32.052 "num_base_bdevs_discovered": 2, 00:39:32.052 "num_base_bdevs_operational": 2, 00:39:32.052 "process": { 00:39:32.052 "type": "rebuild", 00:39:32.052 "target": "spare", 00:39:32.052 "progress": { 00:39:32.052 "blocks": 2816, 00:39:32.052 "percent": 35 00:39:32.052 } 00:39:32.052 }, 00:39:32.052 "base_bdevs_list": [ 00:39:32.052 { 00:39:32.052 "name": "spare", 00:39:32.052 "uuid": "48e4fbe1-f964-5fa3-9c49-87655eebb6fb", 00:39:32.052 "is_configured": true, 00:39:32.052 "data_offset": 256, 00:39:32.052 "data_size": 7936 00:39:32.052 }, 00:39:32.052 { 00:39:32.052 "name": "BaseBdev2", 00:39:32.052 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:32.052 "is_configured": true, 00:39:32.052 "data_offset": 256, 00:39:32.052 "data_size": 7936 00:39:32.052 } 00:39:32.052 ] 00:39:32.052 }' 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:32.052 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:32.311 [2024-07-15 21:53:05.517601] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:32.311 [2024-07-15 21:53:05.519712] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:32.311 [2024-07-15 21:53:05.519817] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:32.311 [2024-07-15 21:53:05.519845] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:32.311 [2024-07-15 21:53:05.519868] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:32.311 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:32.571 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:32.571 "name": "raid_bdev1", 00:39:32.571 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:32.571 "strip_size_kb": 0, 00:39:32.571 "state": "online", 00:39:32.571 "raid_level": "raid1", 00:39:32.571 "superblock": true, 00:39:32.571 "num_base_bdevs": 2, 00:39:32.571 "num_base_bdevs_discovered": 1, 00:39:32.571 "num_base_bdevs_operational": 1, 00:39:32.571 "base_bdevs_list": [ 00:39:32.571 { 00:39:32.571 "name": null, 00:39:32.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:32.571 "is_configured": false, 00:39:32.571 "data_offset": 256, 00:39:32.571 "data_size": 7936 00:39:32.571 }, 00:39:32.571 { 00:39:32.571 "name": "BaseBdev2", 00:39:32.571 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:32.571 "is_configured": true, 00:39:32.571 "data_offset": 256, 00:39:32.571 "data_size": 7936 00:39:32.571 } 00:39:32.571 ] 00:39:32.571 }' 00:39:32.571 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:32.571 21:53:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:33.139 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:33.139 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
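The verify_raid_bdev_state / verify_raid_bdev_process helpers being traced here follow one pattern: dump all RAID bdevs once, select the bdev under test by name, then compare individual fields. A rough equivalent, assuming the same rpc.py path and socket and using only the jq filters and JSON fields that appear in the trace:
info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# State check: the array stays online as raid1 with a single operational base bdev.
[[ $(jq -r '.state' <<< "$info") == online ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 1 ]]
# Process check: after the spare passthru is deleted, no rebuild should be reported.
[[ $(jq -r '.process.type // "none"' <<< "$info") == none ]]
[[ $(jq -r '.process.target // "none"' <<< "$info") == none ]]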
00:39:33.139 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:33.139 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:33.139 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:33.139 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:33.139 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:33.400 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:33.400 "name": "raid_bdev1", 00:39:33.400 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:33.400 "strip_size_kb": 0, 00:39:33.400 "state": "online", 00:39:33.400 "raid_level": "raid1", 00:39:33.400 "superblock": true, 00:39:33.400 "num_base_bdevs": 2, 00:39:33.400 "num_base_bdevs_discovered": 1, 00:39:33.400 "num_base_bdevs_operational": 1, 00:39:33.400 "base_bdevs_list": [ 00:39:33.400 { 00:39:33.400 "name": null, 00:39:33.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:33.400 "is_configured": false, 00:39:33.400 "data_offset": 256, 00:39:33.400 "data_size": 7936 00:39:33.400 }, 00:39:33.400 { 00:39:33.400 "name": "BaseBdev2", 00:39:33.400 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:33.400 "is_configured": true, 00:39:33.400 "data_offset": 256, 00:39:33.400 "data_size": 7936 00:39:33.400 } 00:39:33.400 ] 00:39:33.400 }' 00:39:33.400 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:33.400 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:33.400 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:33.400 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:33.400 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:39:33.659 21:53:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:33.918 [2024-07-15 21:53:07.049957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:33.918 [2024-07-15 21:53:07.050164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:33.918 [2024-07-15 21:53:07.050223] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:39:33.918 [2024-07-15 21:53:07.050282] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:33.918 [2024-07-15 21:53:07.050515] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:33.918 [2024-07-15 21:53:07.050565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:33.918 [2024-07-15 21:53:07.050691] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:33.918 [2024-07-15 21:53:07.050728] bdev_raid.c:3562:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:33.918 [2024-07-15 21:53:07.050752] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:33.918 BaseBdev1 00:39:33.918 21:53:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:34.854 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:35.114 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:35.114 "name": "raid_bdev1", 00:39:35.114 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:35.114 "strip_size_kb": 0, 00:39:35.114 "state": "online", 00:39:35.114 "raid_level": "raid1", 00:39:35.114 "superblock": true, 00:39:35.114 "num_base_bdevs": 2, 00:39:35.114 "num_base_bdevs_discovered": 1, 00:39:35.114 "num_base_bdevs_operational": 1, 00:39:35.114 "base_bdevs_list": [ 00:39:35.114 { 00:39:35.114 "name": null, 00:39:35.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:35.114 "is_configured": false, 00:39:35.114 "data_offset": 256, 00:39:35.114 "data_size": 7936 00:39:35.114 }, 00:39:35.114 { 00:39:35.114 "name": "BaseBdev2", 00:39:35.114 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:35.114 "is_configured": true, 00:39:35.114 "data_offset": 256, 00:39:35.114 "data_size": 7936 00:39:35.114 } 00:39:35.114 ] 00:39:35.114 }' 00:39:35.114 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:35.114 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:35.707 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:35.707 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:35.707 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:39:35.707 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:35.707 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:35.707 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:35.707 21:53:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:35.966 "name": "raid_bdev1", 00:39:35.966 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:35.966 "strip_size_kb": 0, 00:39:35.966 "state": "online", 00:39:35.966 "raid_level": "raid1", 00:39:35.966 "superblock": true, 00:39:35.966 "num_base_bdevs": 2, 00:39:35.966 "num_base_bdevs_discovered": 1, 00:39:35.966 "num_base_bdevs_operational": 1, 00:39:35.966 "base_bdevs_list": [ 00:39:35.966 { 00:39:35.966 "name": null, 00:39:35.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:35.966 "is_configured": false, 00:39:35.966 "data_offset": 256, 00:39:35.966 "data_size": 7936 00:39:35.966 }, 00:39:35.966 { 00:39:35.966 "name": "BaseBdev2", 00:39:35.966 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:35.966 "is_configured": true, 00:39:35.966 "data_offset": 256, 00:39:35.966 "data_size": 7936 00:39:35.966 } 00:39:35.966 ] 00:39:35.966 }' 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.966 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:35.967 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.967 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:35.967 21:53:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.967 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:35.967 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:36.226 [2024-07-15 21:53:09.413953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:36.226 [2024-07-15 21:53:09.414265] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:36.226 [2024-07-15 21:53:09.414309] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:36.226 request: 00:39:36.226 { 00:39:36.226 "base_bdev": "BaseBdev1", 00:39:36.226 "raid_bdev": "raid_bdev1", 00:39:36.226 "method": "bdev_raid_add_base_bdev", 00:39:36.226 "req_id": 1 00:39:36.226 } 00:39:36.226 Got JSON-RPC error response 00:39:36.226 response: 00:39:36.226 { 00:39:36.226 "code": -22, 00:39:36.226 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:36.226 } 00:39:36.226 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:39:36.226 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:36.226 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:36.226 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:36.226 21:53:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:37.161 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:37.418 
21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:37.418 "name": "raid_bdev1", 00:39:37.418 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:37.418 "strip_size_kb": 0, 00:39:37.418 "state": "online", 00:39:37.418 "raid_level": "raid1", 00:39:37.418 "superblock": true, 00:39:37.418 "num_base_bdevs": 2, 00:39:37.418 "num_base_bdevs_discovered": 1, 00:39:37.418 "num_base_bdevs_operational": 1, 00:39:37.418 "base_bdevs_list": [ 00:39:37.418 { 00:39:37.418 "name": null, 00:39:37.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:37.418 "is_configured": false, 00:39:37.418 "data_offset": 256, 00:39:37.418 "data_size": 7936 00:39:37.418 }, 00:39:37.418 { 00:39:37.418 "name": "BaseBdev2", 00:39:37.418 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:37.418 "is_configured": true, 00:39:37.418 "data_offset": 256, 00:39:37.418 "data_size": 7936 00:39:37.418 } 00:39:37.418 ] 00:39:37.418 }' 00:39:37.418 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:37.418 21:53:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:37.984 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:37.984 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:37.984 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:37.984 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:37.984 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:37.984 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:37.984 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:38.242 "name": "raid_bdev1", 00:39:38.242 "uuid": "323db3f2-a3d0-46a8-bab1-f2e74d224b0d", 00:39:38.242 "strip_size_kb": 0, 00:39:38.242 "state": "online", 00:39:38.242 "raid_level": "raid1", 00:39:38.242 "superblock": true, 00:39:38.242 "num_base_bdevs": 2, 00:39:38.242 "num_base_bdevs_discovered": 1, 00:39:38.242 "num_base_bdevs_operational": 1, 00:39:38.242 "base_bdevs_list": [ 00:39:38.242 { 00:39:38.242 "name": null, 00:39:38.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.242 "is_configured": false, 00:39:38.242 "data_offset": 256, 00:39:38.242 "data_size": 7936 00:39:38.242 }, 00:39:38.242 { 00:39:38.242 "name": "BaseBdev2", 00:39:38.242 "uuid": "ec41baa9-cb82-5c41-8460-d5ce262a80a3", 00:39:38.242 "is_configured": true, 00:39:38.242 "data_offset": 256, 00:39:38.242 "data_size": 7936 00:39:38.242 } 00:39:38.242 ] 00:39:38.242 }' 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:38.242 21:53:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 165320 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 165320 ']' 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 165320 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:38.242 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 165320 00:39:38.500 killing process with pid 165320 00:39:38.500 Received shutdown signal, test time was about 60.000000 seconds 00:39:38.500 00:39:38.500 Latency(us) 00:39:38.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.500 =================================================================================================================== 00:39:38.500 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:38.500 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:38.500 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:38.500 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 165320' 00:39:38.500 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 165320 00:39:38.500 21:53:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 165320 00:39:38.500 [2024-07-15 21:53:11.627411] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:38.500 [2024-07-15 21:53:11.627617] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:38.500 [2024-07-15 21:53:11.627699] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:38.500 [2024-07-15 21:53:11.627727] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:39:38.758 [2024-07-15 21:53:11.956918] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:40.133 ************************************ 00:39:40.133 END TEST raid_rebuild_test_sb_md_interleaved 00:39:40.133 ************************************ 00:39:40.133 21:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:39:40.133 00:39:40.133 real 0m29.221s 00:39:40.133 user 0m46.470s 00:39:40.133 sys 0m2.886s 00:39:40.133 21:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:40.133 21:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:40.133 21:53:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:39:40.133 21:53:13 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:39:40.133 21:53:13 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:39:40.133 21:53:13 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 165320 ']' 00:39:40.133 21:53:13 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 165320 00:39:40.133 21:53:13 
bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:39:40.133 00:39:40.133 real 23m36.188s 00:39:40.133 user 39m49.394s 00:39:40.133 sys 2m56.360s 00:39:40.133 21:53:13 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:40.133 21:53:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:40.133 ************************************ 00:39:40.133 END TEST bdev_raid 00:39:40.133 ************************************ 00:39:40.133 21:53:13 -- common/autotest_common.sh@1142 -- # return 0 00:39:40.133 21:53:13 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:39:40.133 21:53:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:40.133 21:53:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:40.133 21:53:13 -- common/autotest_common.sh@10 -- # set +x 00:39:40.133 ************************************ 00:39:40.133 START TEST bdevperf_config 00:39:40.133 ************************************ 00:39:40.133 21:53:13 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:39:40.391 * Looking for test storage... 00:39:40.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:40.391 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:40.391 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@8 -- # 
local job_section=job1 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:40.391 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:40.391 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:40.391 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:40.391 21:53:13 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:45.696 21:53:18 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-15 21:53:13.665680] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:39:45.696 [2024-07-15 21:53:13.665819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166213 ] 00:39:45.696 Using job config with 4 jobs 00:39:45.696 [2024-07-15 21:53:13.826609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.696 [2024-07-15 21:53:14.093350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.696 cpumask for '\''job0'\'' is too big 00:39:45.696 cpumask for '\''job1'\'' is too big 00:39:45.696 cpumask for '\''job2'\'' is too big 00:39:45.696 cpumask for '\''job3'\'' is too big 00:39:45.696 Running I/O for 2 seconds... 
00:39:45.696 00:39:45.696 Latency(us) 00:39:45.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:45.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.696 Malloc0 : 2.02 31231.08 30.50 0.00 0.00 8189.96 1452.38 12477.60 00:39:45.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.696 Malloc0 : 2.02 31207.38 30.48 0.00 0.00 8182.29 1359.37 11103.92 00:39:45.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.696 Malloc0 : 2.02 31187.37 30.46 0.00 0.00 8172.52 1387.99 10073.66 00:39:45.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.696 Malloc0 : 2.02 31167.57 30.44 0.00 0.00 8163.77 1395.14 9901.95 00:39:45.696 =================================================================================================================== 00:39:45.696 Total : 124793.40 121.87 0.00 0.00 8177.13 1359.37 12477.60' 00:39:45.696 21:53:18 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-15 21:53:13.665680] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:39:45.696 [2024-07-15 21:53:13.665819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166213 ] 00:39:45.696 Using job config with 4 jobs 00:39:45.696 [2024-07-15 21:53:13.826609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.696 [2024-07-15 21:53:14.093350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.696 cpumask for '\''job0'\'' is too big 00:39:45.696 cpumask for '\''job1'\'' is too big 00:39:45.696 cpumask for '\''job2'\'' is too big 00:39:45.696 cpumask for '\''job3'\'' is too big 00:39:45.696 Running I/O for 2 seconds... 00:39:45.696 00:39:45.696 Latency(us) 00:39:45.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:45.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.696 Malloc0 : 2.02 31231.08 30.50 0.00 0.00 8189.96 1452.38 12477.60 00:39:45.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.696 Malloc0 : 2.02 31207.38 30.48 0.00 0.00 8182.29 1359.37 11103.92 00:39:45.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.696 Malloc0 : 2.02 31187.37 30.46 0.00 0.00 8172.52 1387.99 10073.66 00:39:45.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.696 Malloc0 : 2.02 31167.57 30.44 0.00 0.00 8163.77 1395.14 9901.95 00:39:45.696 =================================================================================================================== 00:39:45.696 Total : 124793.40 121.87 0.00 0.00 8177.13 1359.37 12477.60' 00:39:45.697 21:53:18 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 21:53:13.665680] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:39:45.697 [2024-07-15 21:53:13.665819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166213 ] 00:39:45.697 Using job config with 4 jobs 00:39:45.697 [2024-07-15 21:53:13.826609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.697 [2024-07-15 21:53:14.093350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.697 cpumask for '\''job0'\'' is too big 00:39:45.697 cpumask for '\''job1'\'' is too big 00:39:45.697 cpumask for '\''job2'\'' is too big 00:39:45.697 cpumask for '\''job3'\'' is too big 00:39:45.697 Running I/O for 2 seconds... 00:39:45.697 00:39:45.697 Latency(us) 00:39:45.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:45.697 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.697 Malloc0 : 2.02 31231.08 30.50 0.00 0.00 8189.96 1452.38 12477.60 00:39:45.697 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.697 Malloc0 : 2.02 31207.38 30.48 0.00 0.00 8182.29 1359.37 11103.92 00:39:45.697 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.697 Malloc0 : 2.02 31187.37 30.46 0.00 0.00 8172.52 1387.99 10073.66 00:39:45.697 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:45.697 Malloc0 : 2.02 31167.57 30.44 0.00 0.00 8163.77 1395.14 9901.95 00:39:45.697 =================================================================================================================== 00:39:45.697 Total : 124793.40 121.87 0.00 0.00 8177.13 1359.37 12477.60' 00:39:45.697 21:53:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:39:45.697 21:53:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:39:45.697 21:53:18 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:39:45.697 21:53:18 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:45.697 [2024-07-15 21:53:18.426860] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:39:45.697 [2024-07-15 21:53:18.427072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166277 ] 00:39:45.697 [2024-07-15 21:53:18.586842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.697 [2024-07-15 21:53:18.851230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:46.264 cpumask for 'job0' is too big 00:39:46.264 cpumask for 'job1' is too big 00:39:46.264 cpumask for 'job2' is too big 00:39:46.264 cpumask for 'job3' is too big 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:39:50.440 Running I/O for 2 seconds... 
00:39:50.440 00:39:50.440 Latency(us) 00:39:50.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.440 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:50.440 Malloc0 : 2.01 31295.19 30.56 0.00 0.00 8173.23 1523.93 13393.38 00:39:50.440 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:50.440 Malloc0 : 2.01 31272.42 30.54 0.00 0.00 8165.34 1481.00 12019.70 00:39:50.440 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:50.440 Malloc0 : 2.02 31251.60 30.52 0.00 0.00 8154.18 1566.85 10417.08 00:39:50.440 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:50.440 Malloc0 : 2.02 31325.38 30.59 0.00 0.00 8120.66 736.92 9558.53 00:39:50.440 =================================================================================================================== 00:39:50.440 Total : 125144.59 122.21 0.00 0.00 8153.32 736.92 13393.38' 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:50.440 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:50.440 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:50.440 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:50.440 21:53:23 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-15 21:53:23.217868] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:39:54.648 [2024-07-15 21:53:23.218002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166343 ] 00:39:54.648 Using job config with 3 jobs 00:39:54.648 [2024-07-15 21:53:23.377273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.648 [2024-07-15 21:53:23.655873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.648 cpumask for '\''job0'\'' is too big 00:39:54.648 cpumask for '\''job1'\'' is too big 00:39:54.648 cpumask for '\''job2'\'' is too big 00:39:54.648 Running I/O for 2 seconds... 00:39:54.648 00:39:54.648 Latency(us) 00:39:54.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42298.15 41.31 0.00 0.00 6046.28 1495.31 9501.29 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42269.46 41.28 0.00 0.00 6039.72 1473.84 7784.19 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42327.31 41.34 0.00 0.00 6020.51 751.23 7097.35 00:39:54.648 =================================================================================================================== 00:39:54.648 Total : 126894.91 123.92 0.00 0.00 6035.49 751.23 9501.29' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-15 21:53:23.217868] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:39:54.648 [2024-07-15 21:53:23.218002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166343 ] 00:39:54.648 Using job config with 3 jobs 00:39:54.648 [2024-07-15 21:53:23.377273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.648 [2024-07-15 21:53:23.655873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.648 cpumask for '\''job0'\'' is too big 00:39:54.648 cpumask for '\''job1'\'' is too big 00:39:54.648 cpumask for '\''job2'\'' is too big 00:39:54.648 Running I/O for 2 seconds... 
00:39:54.648 00:39:54.648 Latency(us) 00:39:54.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42298.15 41.31 0.00 0.00 6046.28 1495.31 9501.29 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42269.46 41.28 0.00 0.00 6039.72 1473.84 7784.19 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42327.31 41.34 0.00 0.00 6020.51 751.23 7097.35 00:39:54.648 =================================================================================================================== 00:39:54.648 Total : 126894.91 123.92 0.00 0.00 6035.49 751.23 9501.29' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 21:53:23.217868] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:39:54.648 [2024-07-15 21:53:23.218002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166343 ] 00:39:54.648 Using job config with 3 jobs 00:39:54.648 [2024-07-15 21:53:23.377273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.648 [2024-07-15 21:53:23.655873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.648 cpumask for '\''job0'\'' is too big 00:39:54.648 cpumask for '\''job1'\'' is too big 00:39:54.648 cpumask for '\''job2'\'' is too big 00:39:54.648 Running I/O for 2 seconds... 00:39:54.648 00:39:54.648 Latency(us) 00:39:54.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42298.15 41.31 0.00 0.00 6046.28 1495.31 9501.29 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42269.46 41.28 0.00 0.00 6039.72 1473.84 7784.19 00:39:54.648 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:54.648 Malloc0 : 2.01 42327.31 41.34 0.00 0.00 6020.51 751.23 7097.35 00:39:54.648 =================================================================================================================== 00:39:54.648 Total : 126894.91 123.92 0.00 0.00 6035.49 751.23 9501.29' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:39:54.648 
21:53:27 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:39:54.648 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:54.648 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:54.648 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:54.648 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:54.648 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:54.648 21:53:27 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:59.942 21:53:32 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-15 21:53:28.044018] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:39:59.942 [2024-07-15 21:53:28.044148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166412 ] 00:39:59.942 Using job config with 4 jobs 00:39:59.942 [2024-07-15 21:53:28.202691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.942 [2024-07-15 21:53:28.468368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.942 cpumask for '\''job0'\'' is too big 00:39:59.942 cpumask for '\''job1'\'' is too big 00:39:59.942 cpumask for '\''job2'\'' is too big 00:39:59.942 cpumask for '\''job3'\'' is too big 00:39:59.942 Running I/O for 2 seconds... 00:39:59.942 00:39:59.942 Latency(us) 00:39:59.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.942 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:59.942 Malloc0 : 2.04 14464.37 14.13 0.00 0.00 17688.81 3233.87 25527.56 00:39:59.942 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:59.942 Malloc1 : 2.04 14453.53 14.11 0.00 0.00 17686.47 3834.86 25527.56 00:39:59.942 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:59.942 Malloc0 : 2.04 14442.85 14.10 0.00 0.00 17648.59 3033.54 22322.31 00:39:59.942 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:59.942 Malloc1 : 2.04 14432.84 14.09 0.00 0.00 17646.89 3591.60 22436.78 00:39:59.942 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:59.942 Malloc0 : 2.04 14423.31 14.09 0.00 0.00 17611.68 2976.31 20261.79 00:39:59.942 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:59.942 Malloc1 : 2.04 14412.25 14.07 0.00 0.00 17611.88 3591.60 20032.84 00:39:59.942 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:59.942 Malloc0 : 2.04 14402.97 14.07 0.00 0.00 17573.32 2976.31 20032.84 00:39:59.942 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:59.942 Malloc1 : 2.05 14392.80 14.06 0.00 0.00 17573.82 3605.91 20147.31 00:39:59.942 =================================================================================================================== 00:39:59.942 Total : 115424.92 112.72 0.00 0.00 17630.18 2976.31 25527.56' 00:39:59.942 21:53:32 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-15 21:53:28.044018] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:39:59.942 21:53:32 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:39:59.942 21:53:32 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:39:59.942 21:53:32 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
00:39:59.942 21:53:32 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup
00:39:59.942 21:53:32 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:39:59.942 21:53:32 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:39:59.942
00:39:59.942 real 0m19.359s
00:39:59.942 user 0m17.263s
00:39:59.942 sys 0m1.525s
00:39:59.942 ************************************
00:39:59.942 END TEST bdevperf_config ************************************
00:39:59.942 21:53:32 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:39:59.942 21:53:32 bdevperf_config -- common/autotest_common.sh@10 -- # set +x
00:39:59.942 21:53:32 -- common/autotest_common.sh@1142 -- # return 0
00:39:59.942 21:53:32 -- spdk/autotest.sh@192 -- # uname -s
00:39:59.942 21:53:32 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]]
00:39:59.942 21:53:32 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:39:59.942 21:53:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:39:59.942 21:53:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:39:59.942 21:53:32 -- common/autotest_common.sh@10 -- # set +x
00:39:59.942 ************************************
00:39:59.942 START TEST reactor_set_interrupt ************************************
00:39:59.942 21:53:32 reactor_set_interrupt -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:39:59.942 * Looking for test storage... 00:39:59.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:59.942 21:53:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:39:59.942 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:39:59.942 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:59.942 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:59.942 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:39:59.942 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:59.942 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:39:59.942 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:39:59.942 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:39:59.943 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:39:59.943 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:39:59.943 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:39:59.943 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:39:59.943 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:39:59.943 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:39:59.943 21:53:33 reactor_set_interrupt -- 
common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_CET=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES=128 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_HAVE_EVP_MAC=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_DPDK_UADK=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_ASAN=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_SHARED=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_VTUNE_DIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_RDMA_SET_TOS=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_VBDEV_COMPRESS=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VFIO_USER_DIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_PGO_DIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_FUZZER_LIB= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_HAVE_EXECINFO_H=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_USDT=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@51 -- # 
CONFIG_URING_ZNS=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_FC_PATH= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_COVERAGE=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_CUSTOMOCF=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_DPDK_PKG_CONFIG=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_DEBUG=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_RDMA=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_HAVE_ARC4RANDOM=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_FUZZER=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_FC=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBARCHIVE=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_DPDK_COMPRESSDEV=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_CROSS_PREFIX= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_PREFIX=/usr/local 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_LIBBSD=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_UBSAN=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_PGO_CAPTURE=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_UBLK=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_ISAL_CRYPTO=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_CRYPTO=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_RBD=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_LIBDIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_IPSEC_MB_DIR= 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_PGO_USE=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_GOLANG=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_VHOST=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_IDXD=y 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_AVAHI=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:39:59.943 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:39:59.943 21:53:33 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:39:59.943 #define SPDK_CONFIG_H 00:39:59.943 #define SPDK_CONFIG_APPS 1 00:39:59.943 #define SPDK_CONFIG_ARCH native 00:39:59.943 #define SPDK_CONFIG_ASAN 1 00:39:59.943 #undef SPDK_CONFIG_AVAHI 00:39:59.943 #undef SPDK_CONFIG_CET 00:39:59.943 #define SPDK_CONFIG_COVERAGE 1 00:39:59.943 #define SPDK_CONFIG_CROSS_PREFIX 00:39:59.943 #undef SPDK_CONFIG_CRYPTO 00:39:59.943 #undef SPDK_CONFIG_CRYPTO_MLX5 00:39:59.943 #undef SPDK_CONFIG_CUSTOMOCF 00:39:59.943 #undef SPDK_CONFIG_DAOS 00:39:59.943 #define SPDK_CONFIG_DAOS_DIR 00:39:59.943 #define SPDK_CONFIG_DEBUG 1 00:39:59.943 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:39:59.943 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:39:59.943 #define SPDK_CONFIG_DPDK_INC_DIR 00:39:59.943 #define SPDK_CONFIG_DPDK_LIB_DIR 00:39:59.943 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:39:59.943 #undef SPDK_CONFIG_DPDK_UADK 00:39:59.943 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:59.943 #define SPDK_CONFIG_EXAMPLES 1 00:39:59.943 #undef SPDK_CONFIG_FC 00:39:59.943 #define SPDK_CONFIG_FC_PATH 00:39:59.943 #define SPDK_CONFIG_FIO_PLUGIN 1 00:39:59.943 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:39:59.943 #undef SPDK_CONFIG_FUSE 00:39:59.943 #undef SPDK_CONFIG_FUZZER 00:39:59.943 #define SPDK_CONFIG_FUZZER_LIB 00:39:59.943 #undef SPDK_CONFIG_GOLANG 00:39:59.943 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:39:59.943 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:39:59.943 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:39:59.943 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:39:59.943 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:39:59.943 #undef SPDK_CONFIG_HAVE_LIBBSD 00:39:59.943 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:39:59.943 #define SPDK_CONFIG_IDXD 1 00:39:59.943 #undef SPDK_CONFIG_IDXD_KERNEL 00:39:59.943 #undef SPDK_CONFIG_IPSEC_MB 00:39:59.943 #define SPDK_CONFIG_IPSEC_MB_DIR 00:39:59.943 #define SPDK_CONFIG_ISAL 1 00:39:59.943 #define SPDK_CONFIG_ISAL_CRYPTO 1 
00:39:59.943 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:39:59.943 #define SPDK_CONFIG_LIBDIR 00:39:59.944 #undef SPDK_CONFIG_LTO 00:39:59.944 #define SPDK_CONFIG_MAX_LCORES 128 00:39:59.944 #define SPDK_CONFIG_NVME_CUSE 1 00:39:59.944 #undef SPDK_CONFIG_OCF 00:39:59.944 #define SPDK_CONFIG_OCF_PATH 00:39:59.944 #define SPDK_CONFIG_OPENSSL_PATH 00:39:59.944 #undef SPDK_CONFIG_PGO_CAPTURE 00:39:59.944 #define SPDK_CONFIG_PGO_DIR 00:39:59.944 #undef SPDK_CONFIG_PGO_USE 00:39:59.944 #define SPDK_CONFIG_PREFIX /usr/local 00:39:59.944 #define SPDK_CONFIG_RAID5F 1 00:39:59.944 #undef SPDK_CONFIG_RBD 00:39:59.944 #define SPDK_CONFIG_RDMA 1 00:39:59.944 #define SPDK_CONFIG_RDMA_PROV verbs 00:39:59.944 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:39:59.944 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:39:59.944 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:39:59.944 #undef SPDK_CONFIG_SHARED 00:39:59.944 #undef SPDK_CONFIG_SMA 00:39:59.944 #define SPDK_CONFIG_TESTS 1 00:39:59.944 #undef SPDK_CONFIG_TSAN 00:39:59.944 #undef SPDK_CONFIG_UBLK 00:39:59.944 #define SPDK_CONFIG_UBSAN 1 00:39:59.944 #define SPDK_CONFIG_UNIT_TESTS 1 00:39:59.944 #undef SPDK_CONFIG_URING 00:39:59.944 #define SPDK_CONFIG_URING_PATH 00:39:59.944 #undef SPDK_CONFIG_URING_ZNS 00:39:59.944 #undef SPDK_CONFIG_USDT 00:39:59.944 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:39:59.944 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:39:59.944 #undef SPDK_CONFIG_VFIO_USER 00:39:59.944 #define SPDK_CONFIG_VFIO_USER_DIR 00:39:59.944 #define SPDK_CONFIG_VHOST 1 00:39:59.944 #define SPDK_CONFIG_VIRTIO 1 00:39:59.944 #undef SPDK_CONFIG_VTUNE 00:39:59.944 #define SPDK_CONFIG_VTUNE_DIR 00:39:59.944 #define SPDK_CONFIG_WERROR 1 00:39:59.944 #define SPDK_CONFIG_WPDK_DIR 00:39:59.944 #undef SPDK_CONFIG_XNVME 00:39:59.944 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:39:59.944 21:53:33 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:59.944 21:53:33 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:59.944 21:53:33 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:59.944 21:53:33 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:59.944 21:53:33 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:59.944 21:53:33 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:59.944 21:53:33 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:59.944 21:53:33 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:39:59.944 21:53:33 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:39:59.944 21:53:33 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:39:59.944 21:53:33 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:39:59.944 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:39:59.945 21:53:33 
reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 
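Each "# : 0" line followed by an "# export SPDK_TEST_..." line in this dump is the trace of the same default-then-export idiom, and the flags that continue below follow the identical pattern. A hedged reconstruction of the underlying source line (only its expansion is visible in the trace, so the exact parameter-expansion form is an assumption):

    # Assign the default only when the runner has not already set the flag,
    # then export it so child test scripts see the same value.
    : "${SPDK_TEST_NVMF_MDNS:=0}"
    export SPDK_TEST_NVMF_MDNS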
00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@193 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:39:59.945 21:53:33 
reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 166524 ]] 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 166524 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.H9PcHN 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:39:59.945 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.H9PcHN/tests/interrupt /tmp/spdk.H9PcHN 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=udev 00:39:59.946 21:53:33 
reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6224461824 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6224461824 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1249763328 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254514688 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4751360 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=10302201856 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10297815040 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6267850752 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6272561152 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # 
sizes["$mount"]=6272561152 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop1 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=41025536 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=41025536 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop2 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=96337920 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=96337920 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=103089152 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=109422592 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1254510592 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254510592 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=93509095424 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6193684480 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop3 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=40763392 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=40763392 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop4 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:39:59.946 * Looking for test storage... 
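The "* Looking for test storage..." step expanded in the trace just below parses df -T into per-mount arrays, resolves which filesystem backs the interrupt test directory, and checks that the roughly 2 GiB of requested scratch space fits without pushing that filesystem past ~95% use. A condensed sketch of that logic under simplified names (the real set_test_storage in autotest_common.sh also walks a list of fallback directories and special-cases tmpfs/ramfs mounts):

    requested_size=2214592512     # bytes, as in the trace (2 GiB plus slack)
    testdir=/home/vagrant/spdk_repo/spdk/test/interrupt

    declare -A avails sizes uses
    while read -r source fs size use avail _ mount; do
        # df -T reports 1K blocks; the byte values stored in the trace are 1024x these.
        avails["$mount"]=$((avail * 1024))
        sizes["$mount"]=$((size * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    # Which mount point backs the test directory, and how much room does it have?
    mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}

    # Enough free space now, and still at or below ~95% usage once the scratch data lands?
    (( target_space >= requested_size )) &&
        (( (${uses[$mount]} + requested_size) * 100 / ${sizes[$mount]} <= 95 )) &&
        export SPDK_TEST_STORAGE=$testdir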
00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=10302201856 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:39:59.946 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=12512407552 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:59.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166567 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:59.947 21:53:33 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166567 /var/tmp/spdk.sock 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 166567 ']' 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:59.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:59.947 21:53:33 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.947 [2024-07-15 21:53:33.256414] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:39:59.947 [2024-07-15 21:53:33.256635] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166567 ] 00:40:00.206 [2024-07-15 21:53:33.426704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:00.465 [2024-07-15 21:53:33.688011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:00.465 [2024-07-15 21:53:33.688191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.465 [2024-07-15 21:53:33.688213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:00.756 [2024-07-15 21:53:34.096920] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:00.756 21:53:34 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:00.756 21:53:34 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:40:00.756 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:40:00.756 21:53:34 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:01.321 Malloc0 00:40:01.321 Malloc1 00:40:01.321 Malloc2 00:40:01.321 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:40:01.321 21:53:34 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:40:01.321 21:53:34 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:01.321 21:53:34 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:40:01.321 5000+0 records in 00:40:01.321 5000+0 records out 00:40:01.321 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0276676 s, 370 MB/s 00:40:01.321 21:53:34 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:40:01.579 AIO0 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 166567 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 166567 without_thd 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=166567 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:01.579 21:53:34 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:40:01.579 21:53:34 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:40:01.837 21:53:34 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:40:01.837 spdk_thread ids are 1 on reactor0. 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166567 0 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166567 0 idle 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166567 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:01.837 21:53:35 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:01.838 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166567 -w 256 00:40:01.838 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166567 root 20 0 20.1t 151428 31608 S 0.0 1.2 0:00.97 reactor_0' 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166567 root 20 0 20.1t 151428 31608 S 0.0 1.2 0:00.97 reactor_0 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:02.096 
21:53:35 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166567 1 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166567 1 idle 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166567 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166567 -w 256 00:40:02.096 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166571 root 20 0 20.1t 151428 31608 S 0.0 1.2 0:00.00 reactor_1' 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166571 root 20 0 20.1t 151428 31608 S 0.0 1.2 0:00.00 reactor_1 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166567 2 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166567 2 idle 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166567 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:02.355 21:53:35 reactor_set_interrupt 
-- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166567 -w 256 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166572 root 20 0 20.1t 151428 31608 S 0.0 1.2 0:00.00 reactor_2' 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166572 root 20 0 20.1t 151428 31608 S 0.0 1.2 0:00.00 reactor_2 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:40:02.355 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:40:02.616 [2024-07-15 21:53:35.885934] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:02.616 21:53:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:40:02.874 [2024-07-15 21:53:36.085644] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:40:02.874 [2024-07-15 21:53:36.086579] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:02.874 21:53:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:40:03.135 [2024-07-15 21:53:36.277470] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:40:03.135 [2024-07-15 21:53:36.278295] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166567 0 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166567 0 busy 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166567 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166567 -w 256 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166567 root 20 0 20.1t 151532 31608 R 99.9 1.2 0:01.35 reactor_0' 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166567 root 20 0 20.1t 151532 31608 R 99.9 1.2 0:01.35 reactor_0 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166567 2 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166567 2 busy 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166567 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:03.135 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166567 -w 256 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor=' 166572 root 20 0 20.1t 151532 31608 R 93.3 1.2 0:00.34 reactor_2' 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166572 root 20 0 20.1t 151532 31608 R 93.3 1.2 0:00.34 reactor_2 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.3 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:03.393 21:53:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:40:03.651 [2024-07-15 21:53:36.821487] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:40:03.651 [2024-07-15 21:53:36.822061] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 166567 2 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166567 2 idle 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166567 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166567 -w 256 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166572 root 20 0 20.1t 151596 31608 S 0.0 1.2 0:00.54 reactor_2' 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166572 root 20 0 20.1t 151596 31608 S 0.0 1.2 0:00.54 reactor_2 00:40:03.651 21:53:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:03.651 21:53:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:03.651 21:53:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:03.651 21:53:37 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:03.651 21:53:37 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:03.651 21:53:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 
00:40:03.651 21:53:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:03.651 21:53:37 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:03.651 21:53:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:40:03.909 [2024-07-15 21:53:37.189478] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:40:03.909 [2024-07-15 21:53:37.190161] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:03.909 21:53:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:40:03.909 21:53:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:40:03.910 21:53:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:40:04.169 [2024-07-15 21:53:37.378100] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 166567 0 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166567 0 idle 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166567 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166567 -w 256 00:40:04.169 21:53:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166567 root 20 0 20.1t 151684 31608 S 0.0 1.2 0:02.09 reactor_0' 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166567 root 20 0 20.1t 151684 31608 S 0.0 1.2 0:02.09 reactor_0 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:40:04.428 21:53:37 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:40:04.428 21:53:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 166567 00:40:04.428 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 166567 ']' 00:40:04.428 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 166567 00:40:04.428 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:40:04.428 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:04.428 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166567 00:40:04.428 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:04.428 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:04.429 killing process with pid 166567 00:40:04.429 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 166567' 00:40:04.429 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 166567 00:40:04.429 21:53:37 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 166567 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:40:06.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166739 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:40:06.334 21:53:39 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166739 /var/tmp/spdk.sock 00:40:06.334 21:53:39 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 166739 ']' 00:40:06.334 21:53:39 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:06.334 21:53:39 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:06.334 21:53:39 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:06.334 21:53:39 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:06.334 21:53:39 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:06.334 [2024-07-15 21:53:39.499166] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:40:06.334 [2024-07-15 21:53:39.499369] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166739 ] 00:40:06.334 [2024-07-15 21:53:39.674914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:06.593 [2024-07-15 21:53:39.940625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:06.593 [2024-07-15 21:53:39.940792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.593 [2024-07-15 21:53:39.940801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:07.161 [2024-07-15 21:53:40.356698] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:07.161 21:53:40 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:07.161 21:53:40 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:40:07.161 21:53:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:40:07.161 21:53:40 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:07.459 Malloc0 00:40:07.459 Malloc1 00:40:07.459 Malloc2 00:40:07.459 21:53:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:40:07.459 21:53:40 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:40:07.459 21:53:40 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:07.459 21:53:40 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:40:07.459 5000+0 records in 00:40:07.459 5000+0 records out 00:40:07.459 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0270376 s, 379 MB/s 00:40:07.459 21:53:40 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:40:07.719 AIO0 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 166739 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 166739 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=166739 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:40:07.719 21:53:41 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg 
reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:40:07.978 21:53:41 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:40:08.238 spdk_thread ids are 1 on reactor0. 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166739 0 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166739 0 idle 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166739 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166739 -w 256 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166739 root 20 0 20.1t 151512 31792 S 6.7 1.2 0:00.98 reactor_0' 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166739 root 20 0 20.1t 151512 31792 S 6.7 1.2 0:00.98 reactor_0 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:08.238 21:53:41 reactor_set_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166739 1 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166739 1 idle 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166739 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166739 -w 256 00:40:08.238 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:40:08.497 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166742 root 20 0 20.1t 151512 31792 S 0.0 1.2 0:00.00 reactor_1' 00:40:08.497 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166742 root 20 0 20.1t 151512 31792 S 0.0 1.2 0:00.00 reactor_1 00:40:08.497 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166739 2 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166739 2 idle 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166739 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( 
j != 0 )) 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166739 -w 256 00:40:08.498 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166743 root 20 0 20.1t 151512 31792 S 0.0 1.2 0:00.00 reactor_2' 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166743 root 20 0 20.1t 151512 31792 S 0.0 1.2 0:00.00 reactor_2 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:40:08.757 21:53:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:40:09.017 [2024-07-15 21:53:42.161988] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:40:09.017 [2024-07-15 21:53:42.162530] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:40:09.017 [2024-07-15 21:53:42.162874] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:09.017 21:53:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:40:09.017 [2024-07-15 21:53:42.381312] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:40:09.017 [2024-07-15 21:53:42.381776] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166739 0 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166739 0 busy 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166739 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166739 -w 256 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166739 root 20 0 20.1t 151632 31792 R 93.3 1.2 0:01.38 reactor_0' 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166739 root 20 0 20.1t 151632 31792 R 93.3 1.2 0:01.38 reactor_0 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.3 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166739 2 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166739 2 busy 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166739 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166739 -w 256 00:40:09.277 21:53:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor=' 166743 root 20 0 20.1t 151632 31792 R 99.9 1.2 0:00.34 reactor_2' 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166743 root 20 0 20.1t 151632 31792 R 99.9 1.2 0:00.34 reactor_2 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:09.536 21:53:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:40:09.796 [2024-07-15 21:53:42.944533] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:40:09.796 [2024-07-15 21:53:42.945011] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 166739 2 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166739 2 idle 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166739 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166739 -w 256 00:40:09.796 21:53:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166743 root 20 0 20.1t 151676 31792 S 0.0 1.2 0:00.56 reactor_2' 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166743 root 20 0 20.1t 151676 31792 S 0.0 1.2 0:00.56 reactor_2 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:09.796 
21:53:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:09.796 21:53:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:40:10.058 [2024-07-15 21:53:43.347839] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:40:10.058 [2024-07-15 21:53:43.348439] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:40:10.058 [2024-07-15 21:53:43.348502] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 166739 0 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166739 0 idle 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166739 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166739 -w 256 00:40:10.058 21:53:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166739 root 20 0 20.1t 151704 31792 S 0.0 1.2 0:02.18 reactor_0' 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166739 root 20 0 20.1t 151704 31792 S 0.0 1.2 0:02.18 reactor_0 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:40:10.358 21:53:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 166739 00:40:10.358 21:53:43 reactor_set_interrupt -- 
common/autotest_common.sh@948 -- # '[' -z 166739 ']' 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 166739 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166739 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 166739' 00:40:10.358 killing process with pid 166739 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 166739 00:40:10.358 21:53:43 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 166739 00:40:12.275 21:53:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:40:12.275 21:53:45 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:40:12.275 ************************************ 00:40:12.275 END TEST reactor_set_interrupt 00:40:12.275 ************************************ 00:40:12.275 00:40:12.275 real 0m12.346s 00:40:12.275 user 0m12.645s 00:40:12.275 sys 0m1.903s 00:40:12.275 21:53:45 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:12.275 21:53:45 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:12.275 21:53:45 -- common/autotest_common.sh@1142 -- # return 0 00:40:12.275 21:53:45 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:40:12.275 21:53:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:12.275 21:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:12.275 21:53:45 -- common/autotest_common.sh@10 -- # set +x 00:40:12.275 ************************************ 00:40:12.275 START TEST reap_unregistered_poller 00:40:12.275 ************************************ 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:40:12.275 * Looking for test storage... 00:40:12.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:12.275 21:53:45 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:40:12.275 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:40:12.275 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:12.275 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:40:12.275 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:40:12.275 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:12.275 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:40:12.275 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_CET=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- 
common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES=128 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_HAVE_EVP_MAC=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_DPDK_UADK=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_ASAN=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_SHARED=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_VTUNE_DIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_RDMA_SET_TOS=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_VBDEV_COMPRESS=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VFIO_USER_DIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_PGO_DIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_FUZZER_LIB= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_HAVE_EXECINFO_H=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_USDT=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_URING_ZNS=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_FC_PATH= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_COVERAGE=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_CUSTOMOCF=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_DPDK_PKG_CONFIG=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_DEBUG=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_RDMA=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@59 -- # 
CONFIG_HAVE_ARC4RANDOM=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_FUZZER=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_FC=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBARCHIVE=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_DPDK_COMPRESSDEV=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_CROSS_PREFIX= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_PREFIX=/usr/local 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_LIBBSD=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_UBSAN=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_PGO_CAPTURE=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_UBLK=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_ISAL_CRYPTO=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_CRYPTO=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_RBD=n 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_LIBDIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_IPSEC_MB_DIR= 00:40:12.275 21:53:45 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_PGO_USE=n 00:40:12.276 21:53:45 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:40:12.276 21:53:45 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_GOLANG=n 00:40:12.276 21:53:45 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_VHOST=y 00:40:12.276 21:53:45 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_IDXD=y 00:40:12.276 21:53:45 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_AVAHI=n 00:40:12.276 21:53:45 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
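The applications.sh trace above shows the self-location pattern the SPDK test helpers rely on: the script resolves its own path with dirname plus readlink -f, walks up to the repository root, and derives the bin, test/app and examples directories from it before defining the per-app launcher arrays that follow. A minimal sketch of that pattern; the variable steps below are illustrative and the real helper's exact reassignments may differ:

    # locate this helper's own directory, then derive the repo layout from it
    _this_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # e.g. .../spdk/test/common
    _root=$(readlink -f "$_this_dir/../..")                     # repo root, as in the trace
    _app_dir=$_root/build/bin
    _test_app_dir=$_root/test/app
    _examples_dir=$_root/build/examples
    SPDK_APP=("$_app_dir/spdk_tgt")                             # launcher array, defined just below in the trace
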
00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:40:12.276 #define SPDK_CONFIG_H 00:40:12.276 #define SPDK_CONFIG_APPS 1 00:40:12.276 #define SPDK_CONFIG_ARCH native 00:40:12.276 #define SPDK_CONFIG_ASAN 1 00:40:12.276 #undef SPDK_CONFIG_AVAHI 00:40:12.276 #undef SPDK_CONFIG_CET 00:40:12.276 #define SPDK_CONFIG_COVERAGE 1 00:40:12.276 #define SPDK_CONFIG_CROSS_PREFIX 00:40:12.276 #undef SPDK_CONFIG_CRYPTO 00:40:12.276 #undef SPDK_CONFIG_CRYPTO_MLX5 00:40:12.276 #undef SPDK_CONFIG_CUSTOMOCF 00:40:12.276 #undef SPDK_CONFIG_DAOS 00:40:12.276 #define SPDK_CONFIG_DAOS_DIR 00:40:12.276 #define SPDK_CONFIG_DEBUG 1 00:40:12.276 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:40:12.276 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:40:12.276 #define SPDK_CONFIG_DPDK_INC_DIR 00:40:12.276 #define SPDK_CONFIG_DPDK_LIB_DIR 00:40:12.276 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:40:12.276 #undef SPDK_CONFIG_DPDK_UADK 00:40:12.276 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:40:12.276 #define SPDK_CONFIG_EXAMPLES 1 00:40:12.276 #undef SPDK_CONFIG_FC 00:40:12.276 #define SPDK_CONFIG_FC_PATH 00:40:12.276 #define SPDK_CONFIG_FIO_PLUGIN 1 00:40:12.276 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:40:12.276 #undef SPDK_CONFIG_FUSE 00:40:12.276 #undef SPDK_CONFIG_FUZZER 00:40:12.276 #define SPDK_CONFIG_FUZZER_LIB 00:40:12.276 #undef SPDK_CONFIG_GOLANG 00:40:12.276 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:40:12.276 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:40:12.276 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:40:12.276 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:40:12.276 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:40:12.276 #undef SPDK_CONFIG_HAVE_LIBBSD 00:40:12.276 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:40:12.276 #define SPDK_CONFIG_IDXD 1 00:40:12.276 #undef SPDK_CONFIG_IDXD_KERNEL 00:40:12.276 #undef SPDK_CONFIG_IPSEC_MB 00:40:12.276 #define SPDK_CONFIG_IPSEC_MB_DIR 00:40:12.276 #define SPDK_CONFIG_ISAL 1 00:40:12.276 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:40:12.276 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:40:12.276 #define SPDK_CONFIG_LIBDIR 00:40:12.276 #undef SPDK_CONFIG_LTO 00:40:12.276 #define SPDK_CONFIG_MAX_LCORES 128 00:40:12.276 #define SPDK_CONFIG_NVME_CUSE 1 00:40:12.276 #undef SPDK_CONFIG_OCF 00:40:12.276 #define SPDK_CONFIG_OCF_PATH 00:40:12.276 #define SPDK_CONFIG_OPENSSL_PATH 00:40:12.276 #undef SPDK_CONFIG_PGO_CAPTURE 00:40:12.276 #define SPDK_CONFIG_PGO_DIR 00:40:12.276 #undef SPDK_CONFIG_PGO_USE 00:40:12.276 #define SPDK_CONFIG_PREFIX /usr/local 00:40:12.276 #define SPDK_CONFIG_RAID5F 1 00:40:12.276 #undef SPDK_CONFIG_RBD 00:40:12.276 #define SPDK_CONFIG_RDMA 1 00:40:12.276 
#define SPDK_CONFIG_RDMA_PROV verbs 00:40:12.276 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:40:12.276 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:40:12.276 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:40:12.276 #undef SPDK_CONFIG_SHARED 00:40:12.276 #undef SPDK_CONFIG_SMA 00:40:12.276 #define SPDK_CONFIG_TESTS 1 00:40:12.276 #undef SPDK_CONFIG_TSAN 00:40:12.276 #undef SPDK_CONFIG_UBLK 00:40:12.276 #define SPDK_CONFIG_UBSAN 1 00:40:12.276 #define SPDK_CONFIG_UNIT_TESTS 1 00:40:12.276 #undef SPDK_CONFIG_URING 00:40:12.276 #define SPDK_CONFIG_URING_PATH 00:40:12.276 #undef SPDK_CONFIG_URING_ZNS 00:40:12.276 #undef SPDK_CONFIG_USDT 00:40:12.276 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:40:12.276 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:40:12.276 #undef SPDK_CONFIG_VFIO_USER 00:40:12.276 #define SPDK_CONFIG_VFIO_USER_DIR 00:40:12.276 #define SPDK_CONFIG_VHOST 1 00:40:12.276 #define SPDK_CONFIG_VIRTIO 1 00:40:12.276 #undef SPDK_CONFIG_VTUNE 00:40:12.276 #define SPDK_CONFIG_VTUNE_DIR 00:40:12.276 #define SPDK_CONFIG_WERROR 1 00:40:12.276 #define SPDK_CONFIG_WPDK_DIR 00:40:12.276 #undef SPDK_CONFIG_XNVME 00:40:12.276 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:40:12.276 21:53:45 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:12.276 21:53:45 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.276 21:53:45 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.276 21:53:45 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.276 21:53:45 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:12.276 21:53:45 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:12.276 21:53:45 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:12.276 21:53:45 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:40:12.276 21:53:45 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:40:12.276 21:53:45 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:40:12.276 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:40:12.277 
21:53:45 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 
00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@189 -- # 
PYTHONDONTWRITEBYTECODE=1 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:40:12.277 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:40:12.278 21:53:45 reap_unregistered_poller -- 
common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 166918 ]] 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 166918 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.9GnDr6 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.9GnDr6/tests/interrupt /tmp/spdk.9GnDr6 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@358 -- # 
requested_size=2214592512 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=udev 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6224461824 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6224461824 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1249763328 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254514688 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4751360 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=10302164992 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10297851904 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6267850752 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- 
# uses["$mount"]=0 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6272561152 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop1 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=41025536 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=41025536 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop0 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop2 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=96337920 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=96337920 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=103089152 00:40:12.278 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=109422592 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.279 21:53:45 reap_unregistered_poller -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1254510592 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254510592 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=93500383232 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6202396672 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop3 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=40763392 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=40763392 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop4 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:40:12.279 * Looking for test storage... 
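At this point the harness has walked the df -T output into parallel arrays keyed by mount point (mounts, fss, sizes, avails, uses) and printed the "Looking for test storage" banner; the trace that follows picks a candidate directory whose filesystem has enough free space for the roughly 2 GiB request and exports it as SPDK_TEST_STORAGE. A minimal, self-contained sketch of that df-parsing step, assuming byte-granular sizes (df -B1) and using illustrative variable names rather than the harness's own:

    # enumerate mounts, then pick a candidate with enough free space for the test data
    requested_size=$((2 * 1024 * 1024 * 1024))   # the harness adds some slack on top of 2 GiB
    declare -A fss avails
    while read -r source fs _size _used avail _pct mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)

    for target in /home/vagrant/spdk_repo/spdk/test/interrupt /tmp; do
        mount=$(df -B1 "$target" | awk '$1 !~ /Filesystem/{print $6}')
        if (( ${avails["$mount"]:-0} >= requested_size )); then
            echo "* Found test storage at $target"
            export SPDK_TEST_STORAGE=$target
            break
        fi
    done

The new_size arithmetic visible a little further down in the trace adds one more guard: a candidate is rejected if placing the test data there would push its filesystem past roughly 95% usage.
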
00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=10302164992 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=12512444416 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:12.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166962 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:40:12.279 21:53:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166962 /var/tmp/spdk.sock 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 166962 ']' 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
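The trace above launches build/examples/interrupt_tgt with a three-core mask (-m 0x07), a private RPC socket (-r /var/tmp/spdk.sock) and interrupt mode enabled (-E), records its PID, and then blocks in waitforlisten until the target's RPC socket answers. A minimal sketch of that start-and-wait pattern; the helper name and the probe via rpc_get_methods are chosen for illustration and are not the harness's own waitforlisten implementation:

    # start the interrupt target and wait for its RPC socket to come up
    rpc_sock=/var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_sock" -E -g &
    tgt_pid=$!

    wait_for_rpc() {                      # illustrative helper, not the real waitforlisten
        local retries=100                 # the trace uses a max_retries=100 budget as well
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
        while (( retries-- > 0 )); do
            # the socket is usable once a trivial RPC succeeds
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc || { kill "$tgt_pid"; exit 1; }

Once the socket answers, the test proceeds as in the trace: it queries thread_get_pollers over the same socket and extracts the active and timed poller names with jq before and after registering the AIO bdev.
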
00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:12.279 21:53:45 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:40:12.538 [2024-07-15 21:53:45.656353] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:12.538 [2024-07-15 21:53:45.656569] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166962 ] 00:40:12.538 [2024-07-15 21:53:45.823238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:12.797 [2024-07-15 21:53:46.034737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.797 [2024-07-15 21:53:46.034863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.797 [2024-07-15 21:53:46.034873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:13.056 [2024-07-15 21:53:46.361967] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:13.315 21:53:46 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:13.315 21:53:46 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:40:13.315 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:40:13.315 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:40:13.315 21:53:46 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.315 21:53:46 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:40:13.315 21:53:46 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.315 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:40:13.315 "name": "app_thread", 00:40:13.315 "id": 1, 00:40:13.315 "active_pollers": [], 00:40:13.315 "timed_pollers": [ 00:40:13.315 { 00:40:13.315 "name": "rpc_subsystem_poll_servers", 00:40:13.315 "id": 1, 00:40:13.315 "state": "waiting", 00:40:13.315 "run_count": 0, 00:40:13.315 "busy_count": 0, 00:40:13.315 "period_ticks": 9160000 00:40:13.315 } 00:40:13.315 ], 00:40:13.315 "paused_pollers": [] 00:40:13.315 }' 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 
count=5000 00:40:13.316 5000+0 records in 00:40:13.316 5000+0 records out 00:40:13.316 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0249202 s, 411 MB/s 00:40:13.316 21:53:46 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:40:13.575 AIO0 00:40:13.575 21:53:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:13.834 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:40:13.834 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:40:13.834 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:40:13.834 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.834 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:40:13.834 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.834 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:40:13.834 "name": "app_thread", 00:40:13.834 "id": 1, 00:40:13.834 "active_pollers": [], 00:40:13.834 "timed_pollers": [ 00:40:13.834 { 00:40:13.834 "name": "rpc_subsystem_poll_servers", 00:40:13.834 "id": 1, 00:40:13.834 "state": "waiting", 00:40:13.834 "run_count": 0, 00:40:13.834 "busy_count": 0, 00:40:13.834 "period_ticks": 9160000 00:40:13.834 } 00:40:13.834 ], 00:40:13.834 "paused_pollers": [] 00:40:13.834 }' 00:40:13.834 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:40:14.093 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:40:14.093 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:40:14.093 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:40:14.093 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:40:14.093 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:40:14.093 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:40:14.093 21:53:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 166962 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 166962 ']' 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 166962 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166962 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:14.093 21:53:47 reap_unregistered_poller -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 166962' 00:40:14.093 killing process with pid 166962 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 166962 00:40:14.093 21:53:47 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 166962 00:40:15.470 21:53:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:40:15.470 21:53:48 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:40:15.470 ************************************ 00:40:15.470 END TEST reap_unregistered_poller 00:40:15.470 ************************************ 00:40:15.470 00:40:15.470 real 0m3.293s 00:40:15.470 user 0m2.855s 00:40:15.470 sys 0m0.505s 00:40:15.470 21:53:48 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:15.470 21:53:48 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:40:15.470 21:53:48 -- common/autotest_common.sh@1142 -- # return 0 00:40:15.470 21:53:48 -- spdk/autotest.sh@198 -- # uname -s 00:40:15.470 21:53:48 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:40:15.470 21:53:48 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:40:15.470 21:53:48 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:40:15.470 21:53:48 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:40:15.470 21:53:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:15.470 21:53:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:15.470 21:53:48 -- common/autotest_common.sh@10 -- # set +x 00:40:15.470 ************************************ 00:40:15.470 START TEST spdk_dd 00:40:15.470 ************************************ 00:40:15.470 21:53:48 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:40:15.470 * Looking for test storage... 
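The jq pipelines in the poller checks above reduce the thread_get_pollers JSON to a flat list of poller names, taken once before the AIO bdev is created and once after bdev_wait_for_examine returns; the test passes when the two lists match and only rpc_subsystem_poll_servers remains. A condensed sketch of that comparison, reusing the RPC and jq paths from the trace (the helper function and variable names are illustrative):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  poller_names() {
      $rpc thread_get_pollers |
          jq -r '.threads[0] | .active_pollers[].name, .timed_pollers[].name'
  }
  native_pollers=$(poller_names)        # expected: rpc_subsystem_poll_servers
  # ... bdev_aio_create + bdev_wait_for_examine happen here in the real test ...
  remaining_pollers=$(poller_names)
  # any poller registered during examine must have been reaped again
  [[ "$remaining_pollers" == "$native_pollers" ]]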
00:40:15.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:15.470 21:53:48 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:15.470 21:53:48 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.470 21:53:48 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.470 21:53:48 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.470 21:53:48 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:15.470 21:53:48 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:15.470 21:53:48 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:15.470 21:53:48 spdk_dd -- paths/export.sh@5 -- # export PATH 00:40:15.470 21:53:48 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:15.470 21:53:48 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:16.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:16.036 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:40:16.977 21:53:50 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:40:16.977 21:53:50 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@230 -- # local class 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@232 -- # local progif 00:40:16.977 21:53:50 spdk_dd -- 
scripts/common.sh@233 -- # printf %02x 1 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@233 -- # class=01 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@15 -- # local i 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@24 -- # return 0 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:40:16.977 21:53:50 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:40:16.977 21:53:50 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@139 -- # local lib so 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ librt.so.1 == 
liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:40:16.977 21:53:50 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:40:16.977 21:53:50 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:40:16.977 21:53:50 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:40:16.977 21:53:50 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:16.977 21:53:50 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:16.977 21:53:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:40:16.977 ************************************ 00:40:16.977 START TEST spdk_dd_basic_rw 00:40:16.977 ************************************ 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:40:16.977 * 
Looking for test storage... 00:40:16.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:40:16.977 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:40:16.978 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:40:16.978 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:16.978 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:16.978 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:40:16.978 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:40:16.978 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:40:16.978 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:40:17.261 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 116 Data Units Written: 7 Host Read Commands: 2440 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:40:17.261 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 116 Data Units Written: 7 Host Read Commands: 2440 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: 
Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:17.262 ************************************ 00:40:17.262 START TEST dd_bs_lt_native_bs 00:40:17.262 ************************************ 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:17.262 21:53:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:17.262 { 00:40:17.262 "subsystems": [ 00:40:17.262 { 00:40:17.262 "subsystem": "bdev", 00:40:17.262 "config": [ 00:40:17.262 { 00:40:17.262 "params": { 00:40:17.262 "trtype": "pcie", 00:40:17.262 "traddr": "0000:00:10.0", 00:40:17.262 "name": "Nvme0" 00:40:17.262 }, 00:40:17.262 "method": "bdev_nvme_attach_controller" 00:40:17.262 }, 00:40:17.262 { 00:40:17.262 "method": "bdev_wait_for_examine" 00:40:17.262 } 00:40:17.262 ] 00:40:17.262 } 00:40:17.262 ] 00:40:17.262 } 00:40:17.537 [2024-07-15 21:53:50.624622] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
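The two long [[ ... =~ ... ]] evaluations earlier in this block are the native block-size lookup: the spdk_nvme_identify dump is matched once for the current LBA format index and once for that format's data size, which is then echoed (4096 on this QEMU controller). A condensed sketch of the same lookup, reusing the regexes and BASH_REMATCH indices visible in the trace:

  id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
  # index of the namespace's current LBA format ("04" here)
  re_lbaf='Current LBA Format: *LBA Format #([0-9]+)'
  [[ $id =~ $re_lbaf ]] && lbaf=${BASH_REMATCH[1]}
  # that format's data size is the native block size
  re_bs="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ $id =~ $re_bs ]] && native_bs=${BASH_REMATCH[1]}
  echo "$native_bs"   # -> 4096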
00:40:17.537 [2024-07-15 21:53:50.624831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167289 ] 00:40:17.537 [2024-07-15 21:53:50.783281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.796 [2024-07-15 21:53:50.987857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.055 [2024-07-15 21:53:51.367771] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:40:18.055 [2024-07-15 21:53:51.367971] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:18.987 [2024-07-15 21:53:52.106889] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:19.245 ************************************ 00:40:19.245 END TEST dd_bs_lt_native_bs 00:40:19.245 ************************************ 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:19.246 00:40:19.246 real 0m1.942s 00:40:19.246 user 0m1.647s 00:40:19.246 sys 0m0.267s 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:19.246 ************************************ 00:40:19.246 START TEST dd_rw 00:40:19.246 ************************************ 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << 
bs))) 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:19.246 21:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:19.814 21:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:40:19.814 21:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:19.814 21:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:19.814 21:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:19.814 { 00:40:19.814 "subsystems": [ 00:40:19.814 { 00:40:19.814 "subsystem": "bdev", 00:40:19.814 "config": [ 00:40:19.814 { 00:40:19.814 "params": { 00:40:19.814 "trtype": "pcie", 00:40:19.814 "traddr": "0000:00:10.0", 00:40:19.814 "name": "Nvme0" 00:40:19.814 }, 00:40:19.814 "method": "bdev_nvme_attach_controller" 00:40:19.814 }, 00:40:19.814 { 00:40:19.814 "method": "bdev_wait_for_examine" 00:40:19.814 } 00:40:19.814 ] 00:40:19.814 } 00:40:19.814 ] 00:40:19.814 } 00:40:19.814 [2024-07-15 21:53:53.069552] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
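Every spdk_dd invocation in this TEST receives its bdev configuration on a throw-away descriptor (--json /dev/fd/62) rather than from a file on disk: gen_conf prints the JSON shown above and the shell wires it in. A minimal sketch of the same wiring via process substitution; the conf variable and the exact descriptor number are illustrative, while the JSON shape and flags are copied from the logged command lines:

  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  conf='{
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }'
  # the config arrives on /dev/fd/NN, as in the logged invocations
  "$spdk_dd" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 \
             --bs=4096 --qd=1 --json <(printf '%s' "$conf")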
00:40:19.814 [2024-07-15 21:53:53.069695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167340 ] 00:40:20.073 [2024-07-15 21:53:53.230550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.073 [2024-07-15 21:53:53.430846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.580  Copying: 60/60 [kB] (average 29 MBps) 00:40:21.580 00:40:21.580 21:53:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:21.580 21:53:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:40:21.580 21:53:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:21.580 21:53:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:21.840 { 00:40:21.840 "subsystems": [ 00:40:21.840 { 00:40:21.840 "subsystem": "bdev", 00:40:21.840 "config": [ 00:40:21.840 { 00:40:21.840 "params": { 00:40:21.840 "trtype": "pcie", 00:40:21.840 "traddr": "0000:00:10.0", 00:40:21.840 "name": "Nvme0" 00:40:21.840 }, 00:40:21.840 "method": "bdev_nvme_attach_controller" 00:40:21.840 }, 00:40:21.840 { 00:40:21.840 "method": "bdev_wait_for_examine" 00:40:21.840 } 00:40:21.840 ] 00:40:21.840 } 00:40:21.840 ] 00:40:21.840 } 00:40:21.840 [2024-07-15 21:53:54.968790] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:21.840 [2024-07-15 21:53:54.969381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167375 ] 00:40:21.840 [2024-07-15 21:53:55.130853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.100 [2024-07-15 21:53:55.337653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.738  Copying: 60/60 [kB] (average 29 MBps) 00:40:23.738 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:23.738 21:53:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:23.738 [2024-07-15 21:53:56.972798] 
Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:23.738 [2024-07-15 21:53:56.973404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167404 ] 00:40:23.738 { 00:40:23.738 "subsystems": [ 00:40:23.738 { 00:40:23.738 "subsystem": "bdev", 00:40:23.738 "config": [ 00:40:23.738 { 00:40:23.738 "params": { 00:40:23.738 "trtype": "pcie", 00:40:23.738 "traddr": "0000:00:10.0", 00:40:23.738 "name": "Nvme0" 00:40:23.738 }, 00:40:23.738 "method": "bdev_nvme_attach_controller" 00:40:23.738 }, 00:40:23.738 { 00:40:23.738 "method": "bdev_wait_for_examine" 00:40:23.738 } 00:40:23.738 ] 00:40:23.738 } 00:40:23.738 ] 00:40:23.738 } 00:40:23.997 [2024-07-15 21:53:57.134325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.997 [2024-07-15 21:53:57.335263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.524  Copying: 1024/1024 [kB] (average 500 MBps) 00:40:25.524 00:40:25.524 21:53:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:25.524 21:53:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:40:25.525 21:53:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:40:25.525 21:53:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:40:25.525 21:53:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:40:25.525 21:53:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:25.525 21:53:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:26.093 21:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:40:26.093 21:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:26.093 21:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:26.093 21:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:26.093 [2024-07-15 21:53:59.334125] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
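Each (bs, qd) pass in this TEST is the same three-step round trip seen in the invocations above: write the generated dump0 file to Nvme0n1, read the same 15-block range back into dump1, and diff the two; clear_nvme then zeroes the first MiB so the next pass starts from a clean device. Condensed, with CONF standing in for the gen_conf JSON passed over /dev/fd in the trace:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  T=/home/vagrant/spdk_repo/spdk/test/dd
  "$DD" --if="$T/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json "$CONF"               # write
  "$DD" --ib=Nvme0n1 --of="$T/dd.dump1" --bs=4096 --qd=1 --count=15 --json "$CONF"    # read back
  diff -q "$T/dd.dump0" "$T/dd.dump1"                                                 # verify round trip
  "$DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"             # clear_nvme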
00:40:26.093 [2024-07-15 21:53:59.334335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167451 ] 00:40:26.093 { 00:40:26.093 "subsystems": [ 00:40:26.093 { 00:40:26.093 "subsystem": "bdev", 00:40:26.093 "config": [ 00:40:26.093 { 00:40:26.093 "params": { 00:40:26.093 "trtype": "pcie", 00:40:26.093 "traddr": "0000:00:10.0", 00:40:26.093 "name": "Nvme0" 00:40:26.093 }, 00:40:26.093 "method": "bdev_nvme_attach_controller" 00:40:26.093 }, 00:40:26.093 { 00:40:26.093 "method": "bdev_wait_for_examine" 00:40:26.093 } 00:40:26.093 ] 00:40:26.093 } 00:40:26.093 ] 00:40:26.093 } 00:40:26.352 [2024-07-15 21:53:59.494775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:26.352 [2024-07-15 21:53:59.686211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.856  Copying: 60/60 [kB] (average 58 MBps) 00:40:27.856 00:40:27.856 21:54:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:40:27.856 21:54:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:27.856 21:54:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:27.856 21:54:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:28.114 [2024-07-15 21:54:01.274898] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:28.114 [2024-07-15 21:54:01.275428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167481 ] 00:40:28.114 { 00:40:28.114 "subsystems": [ 00:40:28.114 { 00:40:28.114 "subsystem": "bdev", 00:40:28.114 "config": [ 00:40:28.114 { 00:40:28.114 "params": { 00:40:28.114 "trtype": "pcie", 00:40:28.114 "traddr": "0000:00:10.0", 00:40:28.114 "name": "Nvme0" 00:40:28.114 }, 00:40:28.114 "method": "bdev_nvme_attach_controller" 00:40:28.114 }, 00:40:28.114 { 00:40:28.114 "method": "bdev_wait_for_examine" 00:40:28.114 } 00:40:28.114 ] 00:40:28.114 } 00:40:28.114 ] 00:40:28.114 } 00:40:28.114 [2024-07-15 21:54:01.425417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.371 [2024-07-15 21:54:01.626103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.005  Copying: 60/60 [kB] (average 58 MBps) 00:40:30.005 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:30.005 21:54:03 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:30.005 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:30.006 21:54:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:30.006 [2024-07-15 21:54:03.153292] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:30.006 [2024-07-15 21:54:03.153585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167513 ] 00:40:30.006 { 00:40:30.006 "subsystems": [ 00:40:30.006 { 00:40:30.006 "subsystem": "bdev", 00:40:30.006 "config": [ 00:40:30.006 { 00:40:30.006 "params": { 00:40:30.006 "trtype": "pcie", 00:40:30.006 "traddr": "0000:00:10.0", 00:40:30.006 "name": "Nvme0" 00:40:30.006 }, 00:40:30.006 "method": "bdev_nvme_attach_controller" 00:40:30.006 }, 00:40:30.006 { 00:40:30.006 "method": "bdev_wait_for_examine" 00:40:30.006 } 00:40:30.006 ] 00:40:30.006 } 00:40:30.006 ] 00:40:30.006 } 00:40:30.006 [2024-07-15 21:54:03.312678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.266 [2024-07-15 21:54:03.513951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.903  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:31.903 00:40:31.903 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:31.903 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:31.903 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:40:31.903 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:40:31.903 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:40:31.903 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:40:31.903 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:31.903 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:32.161 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:40:32.161 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:32.161 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:32.161 21:54:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:32.161 [2024-07-15 21:54:05.523743] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:40:32.161 [2024-07-15 21:54:05.524291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167541 ] 00:40:32.161 { 00:40:32.161 "subsystems": [ 00:40:32.161 { 00:40:32.161 "subsystem": "bdev", 00:40:32.161 "config": [ 00:40:32.161 { 00:40:32.161 "params": { 00:40:32.161 "trtype": "pcie", 00:40:32.161 "traddr": "0000:00:10.0", 00:40:32.161 "name": "Nvme0" 00:40:32.161 }, 00:40:32.161 "method": "bdev_nvme_attach_controller" 00:40:32.161 }, 00:40:32.161 { 00:40:32.161 "method": "bdev_wait_for_examine" 00:40:32.161 } 00:40:32.161 ] 00:40:32.161 } 00:40:32.161 ] 00:40:32.161 } 00:40:32.419 [2024-07-15 21:54:05.685344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.678 [2024-07-15 21:54:05.879860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.313  Copying: 56/56 [kB] (average 27 MBps) 00:40:34.313 00:40:34.313 21:54:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:40:34.313 21:54:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:34.313 21:54:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:34.313 21:54:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:34.313 [2024-07-15 21:54:07.395894] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:34.313 [2024-07-15 21:54:07.396410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167572 ] 00:40:34.313 { 00:40:34.313 "subsystems": [ 00:40:34.313 { 00:40:34.313 "subsystem": "bdev", 00:40:34.313 "config": [ 00:40:34.313 { 00:40:34.313 "params": { 00:40:34.313 "trtype": "pcie", 00:40:34.313 "traddr": "0000:00:10.0", 00:40:34.313 "name": "Nvme0" 00:40:34.313 }, 00:40:34.313 "method": "bdev_nvme_attach_controller" 00:40:34.313 }, 00:40:34.313 { 00:40:34.313 "method": "bdev_wait_for_examine" 00:40:34.313 } 00:40:34.313 ] 00:40:34.313 } 00:40:34.313 ] 00:40:34.313 } 00:40:34.313 [2024-07-15 21:54:07.554588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.572 [2024-07-15 21:54:07.754447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.208  Copying: 56/56 [kB] (average 54 MBps) 00:40:36.208 00:40:36.208 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:36.208 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:40:36.208 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:36.208 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:36.208 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:40:36.208 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:36.208 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:36.209 21:54:09 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:36.209 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:36.209 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:36.209 21:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:36.209 [2024-07-15 21:54:09.355360] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:36.209 [2024-07-15 21:54:09.355651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167601 ] 00:40:36.209 { 00:40:36.209 "subsystems": [ 00:40:36.209 { 00:40:36.209 "subsystem": "bdev", 00:40:36.209 "config": [ 00:40:36.209 { 00:40:36.209 "params": { 00:40:36.209 "trtype": "pcie", 00:40:36.209 "traddr": "0000:00:10.0", 00:40:36.209 "name": "Nvme0" 00:40:36.209 }, 00:40:36.209 "method": "bdev_nvme_attach_controller" 00:40:36.209 }, 00:40:36.209 { 00:40:36.209 "method": "bdev_wait_for_examine" 00:40:36.209 } 00:40:36.209 ] 00:40:36.209 } 00:40:36.209 ] 00:40:36.209 } 00:40:36.209 [2024-07-15 21:54:09.517413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.467 [2024-07-15 21:54:09.723781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.111  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:38.111 00:40:38.111 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:38.111 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:40:38.111 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:40:38.111 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:40:38.111 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:40:38.111 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:38.111 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:38.370 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:40:38.370 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:38.370 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:38.370 21:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:38.370 [2024-07-15 21:54:11.660480] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:40:38.370 [2024-07-15 21:54:11.660646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167650 ] 00:40:38.370 { 00:40:38.370 "subsystems": [ 00:40:38.370 { 00:40:38.370 "subsystem": "bdev", 00:40:38.370 "config": [ 00:40:38.370 { 00:40:38.370 "params": { 00:40:38.370 "trtype": "pcie", 00:40:38.370 "traddr": "0000:00:10.0", 00:40:38.370 "name": "Nvme0" 00:40:38.370 }, 00:40:38.370 "method": "bdev_nvme_attach_controller" 00:40:38.370 }, 00:40:38.370 { 00:40:38.370 "method": "bdev_wait_for_examine" 00:40:38.370 } 00:40:38.370 ] 00:40:38.370 } 00:40:38.370 ] 00:40:38.370 } 00:40:38.628 [2024-07-15 21:54:11.823426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.886 [2024-07-15 21:54:12.020930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.521  Copying: 56/56 [kB] (average 54 MBps) 00:40:40.521 00:40:40.521 21:54:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:40:40.521 21:54:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:40.521 21:54:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:40.521 21:54:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:40.521 [2024-07-15 21:54:13.654503] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:40.521 [2024-07-15 21:54:13.654765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167678 ] 00:40:40.521 { 00:40:40.521 "subsystems": [ 00:40:40.521 { 00:40:40.521 "subsystem": "bdev", 00:40:40.521 "config": [ 00:40:40.521 { 00:40:40.521 "params": { 00:40:40.521 "trtype": "pcie", 00:40:40.521 "traddr": "0000:00:10.0", 00:40:40.521 "name": "Nvme0" 00:40:40.521 }, 00:40:40.521 "method": "bdev_nvme_attach_controller" 00:40:40.521 }, 00:40:40.521 { 00:40:40.521 "method": "bdev_wait_for_examine" 00:40:40.521 } 00:40:40.521 ] 00:40:40.521 } 00:40:40.521 ] 00:40:40.521 } 00:40:40.521 [2024-07-15 21:54:13.814197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.779 [2024-07-15 21:54:14.015558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.412  Copying: 56/56 [kB] (average 54 MBps) 00:40:42.412 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:42.412 21:54:15 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:42.412 21:54:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:42.412 [2024-07-15 21:54:15.529704] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:42.412 [2024-07-15 21:54:15.529920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167710 ] 00:40:42.412 { 00:40:42.412 "subsystems": [ 00:40:42.412 { 00:40:42.412 "subsystem": "bdev", 00:40:42.412 "config": [ 00:40:42.412 { 00:40:42.412 "params": { 00:40:42.412 "trtype": "pcie", 00:40:42.412 "traddr": "0000:00:10.0", 00:40:42.412 "name": "Nvme0" 00:40:42.412 }, 00:40:42.412 "method": "bdev_nvme_attach_controller" 00:40:42.412 }, 00:40:42.412 { 00:40:42.412 "method": "bdev_wait_for_examine" 00:40:42.412 } 00:40:42.412 ] 00:40:42.412 } 00:40:42.412 ] 00:40:42.412 } 00:40:42.412 [2024-07-15 21:54:15.692038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.670 [2024-07-15 21:54:15.891673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.321  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:44.321 00:40:44.321 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:44.321 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:44.321 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:40:44.322 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:40:44.322 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:40:44.322 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:40:44.322 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:44.322 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:44.581 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:40:44.581 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:44.581 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:44.581 21:54:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:44.581 [2024-07-15 21:54:17.879779] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:40:44.581 [2024-07-15 21:54:17.879970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167745 ] 00:40:44.581 { 00:40:44.581 "subsystems": [ 00:40:44.581 { 00:40:44.581 "subsystem": "bdev", 00:40:44.581 "config": [ 00:40:44.581 { 00:40:44.581 "params": { 00:40:44.581 "trtype": "pcie", 00:40:44.581 "traddr": "0000:00:10.0", 00:40:44.581 "name": "Nvme0" 00:40:44.581 }, 00:40:44.581 "method": "bdev_nvme_attach_controller" 00:40:44.581 }, 00:40:44.581 { 00:40:44.581 "method": "bdev_wait_for_examine" 00:40:44.581 } 00:40:44.581 ] 00:40:44.581 } 00:40:44.581 ] 00:40:44.581 } 00:40:44.841 [2024-07-15 21:54:18.039997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.100 [2024-07-15 21:54:18.244077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.298  Copying: 48/48 [kB] (average 46 MBps) 00:40:46.298 00:40:46.557 21:54:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:40:46.557 21:54:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:46.557 21:54:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:46.557 21:54:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:46.557 { 00:40:46.557 "subsystems": [ 00:40:46.557 { 00:40:46.557 "subsystem": "bdev", 00:40:46.557 "config": [ 00:40:46.557 { 00:40:46.557 "params": { 00:40:46.557 "trtype": "pcie", 00:40:46.557 "traddr": "0000:00:10.0", 00:40:46.557 "name": "Nvme0" 00:40:46.557 }, 00:40:46.557 "method": "bdev_nvme_attach_controller" 00:40:46.557 }, 00:40:46.557 { 00:40:46.557 "method": "bdev_wait_for_examine" 00:40:46.557 } 00:40:46.557 ] 00:40:46.557 } 00:40:46.557 ] 00:40:46.557 } 00:40:46.557 [2024-07-15 21:54:19.742984] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:40:46.557 [2024-07-15 21:54:19.743183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167769 ] 00:40:46.557 [2024-07-15 21:54:19.910474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:46.823 [2024-07-15 21:54:20.103912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.466  Copying: 48/48 [kB] (average 46 MBps) 00:40:48.466 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:48.466 21:54:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:48.466 { 00:40:48.466 "subsystems": [ 00:40:48.466 { 00:40:48.466 "subsystem": "bdev", 00:40:48.466 "config": [ 00:40:48.466 { 00:40:48.466 "params": { 00:40:48.466 "trtype": "pcie", 00:40:48.466 "traddr": "0000:00:10.0", 00:40:48.466 "name": "Nvme0" 00:40:48.466 }, 00:40:48.466 "method": "bdev_nvme_attach_controller" 00:40:48.466 }, 00:40:48.466 { 00:40:48.466 "method": "bdev_wait_for_examine" 00:40:48.466 } 00:40:48.466 ] 00:40:48.466 } 00:40:48.466 ] 00:40:48.466 } 00:40:48.466 [2024-07-15 21:54:21.698323] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:40:48.466 [2024-07-15 21:54:21.698518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167817 ] 00:40:48.725 [2024-07-15 21:54:21.859929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.725 [2024-07-15 21:54:22.064274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.227  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:50.227 00:40:50.227 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:50.227 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:40:50.227 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:40:50.227 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:40:50.227 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:40:50.227 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:50.227 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:50.794 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:40:50.794 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:50.794 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:50.794 21:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:50.794 { 00:40:50.794 "subsystems": [ 00:40:50.794 { 00:40:50.794 "subsystem": "bdev", 00:40:50.794 "config": [ 00:40:50.794 { 00:40:50.794 "params": { 00:40:50.794 "trtype": "pcie", 00:40:50.794 "traddr": "0000:00:10.0", 00:40:50.794 "name": "Nvme0" 00:40:50.794 }, 00:40:50.794 "method": "bdev_nvme_attach_controller" 00:40:50.794 }, 00:40:50.794 { 00:40:50.794 "method": "bdev_wait_for_examine" 00:40:50.794 } 00:40:50.794 ] 00:40:50.794 } 00:40:50.794 ] 00:40:50.794 } 00:40:50.794 [2024-07-15 21:54:23.960829] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:40:50.794 [2024-07-15 21:54:23.961049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167848 ] 00:40:50.794 [2024-07-15 21:54:24.122211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.052 [2024-07-15 21:54:24.319322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.560  Copying: 48/48 [kB] (average 46 MBps) 00:40:52.560 00:40:52.560 21:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:40:52.560 21:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:52.560 21:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:52.560 21:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:52.820 [2024-07-15 21:54:25.958941] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:52.820 [2024-07-15 21:54:25.959145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167876 ] 00:40:52.820 { 00:40:52.820 "subsystems": [ 00:40:52.820 { 00:40:52.820 "subsystem": "bdev", 00:40:52.820 "config": [ 00:40:52.820 { 00:40:52.820 "params": { 00:40:52.820 "trtype": "pcie", 00:40:52.820 "traddr": "0000:00:10.0", 00:40:52.820 "name": "Nvme0" 00:40:52.820 }, 00:40:52.820 "method": "bdev_nvme_attach_controller" 00:40:52.820 }, 00:40:52.820 { 00:40:52.820 "method": "bdev_wait_for_examine" 00:40:52.820 } 00:40:52.820 ] 00:40:52.820 } 00:40:52.820 ] 00:40:52.820 } 00:40:52.820 [2024-07-15 21:54:26.119704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.080 [2024-07-15 21:54:26.321147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.743  Copying: 48/48 [kB] (average 46 MBps) 00:40:54.743 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:54.743 21:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:54.743 [2024-07-15 21:54:27.852560] 
Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:54.743 [2024-07-15 21:54:27.852771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167904 ] 00:40:54.743 { 00:40:54.743 "subsystems": [ 00:40:54.743 { 00:40:54.743 "subsystem": "bdev", 00:40:54.743 "config": [ 00:40:54.743 { 00:40:54.743 "params": { 00:40:54.743 "trtype": "pcie", 00:40:54.743 "traddr": "0000:00:10.0", 00:40:54.743 "name": "Nvme0" 00:40:54.743 }, 00:40:54.743 "method": "bdev_nvme_attach_controller" 00:40:54.743 }, 00:40:54.743 { 00:40:54.743 "method": "bdev_wait_for_examine" 00:40:54.743 } 00:40:54.743 ] 00:40:54.743 } 00:40:54.743 ] 00:40:54.743 } 00:40:54.743 [2024-07-15 21:54:28.014986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.003 [2024-07-15 21:54:28.220020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.643  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:56.643 00:40:56.643 ************************************ 00:40:56.643 END TEST dd_rw 00:40:56.643 ************************************ 00:40:56.643 00:40:56.643 real 0m37.283s 00:40:56.643 user 0m32.234s 00:40:56.643 sys 0m3.845s 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:56.643 ************************************ 00:40:56.643 START TEST dd_rw_offset 00:40:56.643 ************************************ 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ny2onxj1vie355rpapfat2ooqmxno9encevncpsf7b1pa5bcsbdbsfmxenmcpz1203si7oiisz8gougougpavpa2b9ejz760qsmoe0qr7hkuia064zn8uz1ivwunckcoywz5o72842awblu90ftll09wxli2t16uhukj8d30az9mnh5o5jn0e57yr8zrk8wkydcubcf0iwv4z8bvc8yycp517abixis7ujx8lq3cplt8el4tk3hg22i9qiag3as67vt0a4tvx9h7t4gq2c5udn1sepnyjwahgcy4lxsjs1hhon3bo3ly3bbv06zmknia0nmdbjfv8xg75bnxu3h9ct7o6tb869l9qron3t7r1ji9ogcqhmvid84hd6e8fxc5p7lysifbpfzr7sm5xma4xw8s8klz5hjoyyo7c5573o73q2cywk8yv58dgrw2k0krmcu56ruuz2ybio5d6qrqif8j9rb14ar2uoh3ogi60w90qukglkw6xaglfq5u81vw43dbarw0kbjs7p4wbkdl0vdqbbx20f2yom1qmept5gj6ty2sk90kk82d8a6lxvyfx0lqw2061xluyj711q1gkajqyqeecourq4erhq7gjzljj1zrx4tdlzvl6jijo69mpbjzrl66jgbrb3s75wqj5uztxozzb743rw8iz7ku1yzmkg0tgs7iv5iw21lsigdzntgmxsbuyib8o24re60g85vhe2x87pc2s6g08vxi7n1ivdtc55s4la9hgbmaggu1ovytuod5uyggo3bgg4qbpzdzbijbekad8v6dp2a0cfhh2zhdynxbprb3m3sqgefo13t3gwpdsrdmwcu5ax3udu6kmhx3i95ipt3nce7undiwzqbjs3bfsrf8e20u00ddeivujgymaf7mevh8sgzdtmosxp3j1rp41vgxb8h10i9cqmqcmln21x7h100503r41lctw6if1uwed7y0wmwb3p257u6usdp5vry9e89706ix6ahgp1dsg59zakmrb4cxfkxjxcfglvys80n59h0qvy61wekn9hl0t73zwzwjh2eyzuf860ih8gusbs0t6jgfmi4gbcdamh3bsil4kc22fn4jz5gm7ecovcn7sxuky32tuxas7bi4jhtu9yh0y6bkirruqbjd8fgi1f0c0i5zcjfwy6ndzq3x9d1d60z3rc07c1ikh0g4jyfulfmpansgw293ouq4cdapqrhihi67rmfp2f2xr3muhevjs09oh3bi1ug48aphb1ef7sb0ol6a59lsqs9qb2mz7v1uo2ew4yzlb2eevrre91x1zvdk4r6w2j00tjib75uw0apqtaryyntolkqi0mp194dethqrg7hj9va0i6ipunlg3j3i8gzadiyzxfjflkfs47zmryoxd7ex4dd0tg5cplt831xzd9ylu3av6xpjua32idie5dcgd8a9j3468m5txnqsxbtsq5jiskedzvbk2fc00pcouvb2q9xpshre5nx0eglv2c88kcxouxb2fj2z3mv5mk7iyvt100ftw5tzy5k9axagup2c7sp2ql673v0b24kgwlce9c4kxkuiirqth9eup3snrpx2yuux4922jgowdml3ge9qmdx8xpikbpec0b6d6glzdqao41l95fgtu5gmji50q35dz2d8mdohkckmal9hvh2nr3bnyusuuwpi3jy36e3dbkt9rdtz31cc09vzjs0toyo8j0eshc7k1wdoxmf1sr9hg06282l73nbpea8yskp0xhus8smzbvegcl557ej8k1qb8b6f0sd77kcjpqndi6yuf2n5lktulhzj41u20lmrnk4g24aee7w6mz59qnpanwd1i4u4riugah5z4uqw9yqlw8yojy8g7ds2ez3biel9k10cd59ph8dksijod18l8crvy8txnn5rnk54tdud5p8222rvemk1kbpe30f08i9ryhqdvr8jphf5rhgpsz21xjbjdg379wcf3rjp7sayf86x5u6g2h70cdipwpzfg84z1200shgt5cj2h3q15gytzfotyoe48qn4f76atklgb9ufirtgp2uiw279t68fog9nxs56b1wpt77zrwkydh8pkuttbwfxwpfz4frnndb4owor608jgsy59nz3qafj3jrd30feoh0lkpliaxo0xvltqr82rvc4qx13s0cn6qq6s4qlad68t8h9spzri3d57acou0vnn7az16w3aa1zj8a6dajpp5g3n376fmi7b2a4yiev9m9un1ki33xpkfyrfuucngl7m5md270rnaekeshlxir0wkv0b7n5zvapqxvkttn5pkazgew8ctc1aud1y40lz7yubhj7s7qbrel7zgy50mg9twq1wr11vxbr9dyv0udnjqm65sbfb7unw50vdeum05kzdvs7ec6xklbs7jmiesl63ztqnzwimfoyknfr52rbqnurqku5px1egtj744rj05p04agndv6igh37wfi2btj1r9p2of1vsf79dznyjtlfiy2vy3yzompry4dpasp3k9uuvewjplaoxgmfpsg9dclqmfsvw0cgyxofxhaucltgwikoydpbkize6nf65n8z3t8g897rwqsc22cd9u8289kpubpnl74jbxj4vkbgay2g39i7kpk99kzu7pm3rlykhf0ges7pc0h6dkc9nanwntqa5cfxlirjdy1n1gbub4guydxasiqb80vfllcg8cxyb68fq3awhguz787l1dzv3od3jheknntua6oj8xwvbfsvfqgv192fsq8m0lct2qegh7a19t6gpc1755bj0aw4whsptquxuplxzuk9372iqmwzqyug3hkwrtmnjhz9vdl7o7xo3z9kk8ciw3fac8rqah764otez2h65d8fcs1abtdzngzzaw7igba50pihcnvf8spxscr7w2oa5xhinuyiyu3de4hor6nw7bebea1wrffa1uwn47rjndiji2mvo0di1xw7sy217fe4ru2t65qo8luupsfthuk0797ilq1dycnxjq4ubudj9wqk2zc1qd05t0v54mjds7f7v1ir3vm1elh24fg1xzsyj9eeiey3t2rg57fta6g0yqofxyz2d3x56tbade1nsfxiwtovuqed0gui8i7qiqa43ix87gq1ptc29m15u3q4q2zqk489a3ji4rdfuham0n6peyabw2ke6icqnzptbw3xwdjbp7hqxz7tmo2ctmls9cvnimpz5hs2nlbrk1mp2ko9xsdhs7ka10w75g2651nsj7j2wubklkr9yrf4j2ygfkmmt2fx6744mn7yrm7jwrrzfpt5k0twiekjnarders1kqnrdrgvq3wfstsh1unf144wooigkve96k3nvt6ivxd5zvwougfrzrzc9idc851jw1hzokql5ouawa4hyyf4buyt9s8xyca9ztmjx7kimges4aevss1cuvj6rbalh8ikhsaxqhjya2e9dufrosd38raoy4vh4svdtwdldb5wixizl0kac1w7pk5uq8cuntgao6avwt84fuk9t2j9d2
tph4i01lsx2hx0f0q9xco3vqjlcgxvtp9i7mgq6i20ly2d60ftey4zt7483badtk1588sewtq08ny58hwqwr8ha7pdodwt0ah061atnb50w55uqlvnmub9e9y0vqlj6vbf2zijsyfvv3vt2f6ojy344ks2ahq99fq6sus4ckavv161atbmpaxk37op03nayyg6fldt59cvaxdn2csorafpr5eiv3pea4wdiqg20mfihgrxy7z1y37ultafdv8sd87muj2w45nl1cb18l86lt2cukpfqcikkurw2w6kkmgbo5f576fhjman7ypjw8uk099yw53admk06lur5mowe5iaug3xzff9k2dmo20p30ephc1y6h837g4hvmqwxso9hfu68a8gje6wzzfiahsx045aaluaqku7y7sgp10l7c02359d6s5jfk63vzw2pqbdo3m44gau6zuhf6f2kfhghmstxjw1s63pjbhxsz9731f2eezhaa2cjy0ggkacjtvhodx5gxfr9ehl893ocmtnamanr994cnfypbd1 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:40:56.643 21:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:56.643 [2024-07-15 21:54:30.011650] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:40:56.643 [2024-07-15 21:54:30.011851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167956 ] 00:40:56.903 { 00:40:56.903 "subsystems": [ 00:40:56.903 { 00:40:56.903 "subsystem": "bdev", 00:40:56.903 "config": [ 00:40:56.903 { 00:40:56.903 "params": { 00:40:56.903 "trtype": "pcie", 00:40:56.903 "traddr": "0000:00:10.0", 00:40:56.903 "name": "Nvme0" 00:40:56.903 }, 00:40:56.903 "method": "bdev_nvme_attach_controller" 00:40:56.903 }, 00:40:56.903 { 00:40:56.903 "method": "bdev_wait_for_examine" 00:40:56.903 } 00:40:56.903 ] 00:40:56.903 } 00:40:56.903 ] 00:40:56.903 } 00:40:56.903 [2024-07-15 21:54:30.171923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:57.163 [2024-07-15 21:54:30.374000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.835  Copying: 4096/4096 [B] (average 4000 kBps) 00:40:58.835 00:40:58.835 21:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:40:58.835 21:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:40:58.835 21:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:40:58.835 21:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:58.835 { 00:40:58.835 "subsystems": [ 00:40:58.835 { 00:40:58.835 "subsystem": "bdev", 00:40:58.835 "config": [ 00:40:58.835 { 00:40:58.835 "params": { 00:40:58.835 "trtype": "pcie", 00:40:58.835 "traddr": "0000:00:10.0", 00:40:58.835 "name": "Nvme0" 00:40:58.835 }, 00:40:58.835 "method": "bdev_nvme_attach_controller" 00:40:58.835 }, 00:40:58.835 { 00:40:58.835 "method": "bdev_wait_for_examine" 00:40:58.835 } 00:40:58.835 ] 00:40:58.835 } 00:40:58.835 ] 00:40:58.835 } 00:40:58.835 [2024-07-15 21:54:31.893917] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:40:58.835 [2024-07-15 21:54:31.894087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168002 ] 00:40:58.835 [2024-07-15 21:54:32.055779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.094 [2024-07-15 21:54:32.256939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.731  Copying: 4096/4096 [B] (average 4000 kBps) 00:41:00.731 00:41:00.731 ************************************ 00:41:00.731 END TEST dd_rw_offset 00:41:00.731 ************************************ 00:41:00.731 21:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ny2onxj1vie355rpapfat2ooqmxno9encevncpsf7b1pa5bcsbdbsfmxenmcpz1203si7oiisz8gougougpavpa2b9ejz760qsmoe0qr7hkuia064zn8uz1ivwunckcoywz5o72842awblu90ftll09wxli2t16uhukj8d30az9mnh5o5jn0e57yr8zrk8wkydcubcf0iwv4z8bvc8yycp517abixis7ujx8lq3cplt8el4tk3hg22i9qiag3as67vt0a4tvx9h7t4gq2c5udn1sepnyjwahgcy4lxsjs1hhon3bo3ly3bbv06zmknia0nmdbjfv8xg75bnxu3h9ct7o6tb869l9qron3t7r1ji9ogcqhmvid84hd6e8fxc5p7lysifbpfzr7sm5xma4xw8s8klz5hjoyyo7c5573o73q2cywk8yv58dgrw2k0krmcu56ruuz2ybio5d6qrqif8j9rb14ar2uoh3ogi60w90qukglkw6xaglfq5u81vw43dbarw0kbjs7p4wbkdl0vdqbbx20f2yom1qmept5gj6ty2sk90kk82d8a6lxvyfx0lqw2061xluyj711q1gkajqyqeecourq4erhq7gjzljj1zrx4tdlzvl6jijo69mpbjzrl66jgbrb3s75wqj5uztxozzb743rw8iz7ku1yzmkg0tgs7iv5iw21lsigdzntgmxsbuyib8o24re60g85vhe2x87pc2s6g08vxi7n1ivdtc55s4la9hgbmaggu1ovytuod5uyggo3bgg4qbpzdzbijbekad8v6dp2a0cfhh2zhdynxbprb3m3sqgefo13t3gwpdsrdmwcu5ax3udu6kmhx3i95ipt3nce7undiwzqbjs3bfsrf8e20u00ddeivujgymaf7mevh8sgzdtmosxp3j1rp41vgxb8h10i9cqmqcmln21x7h100503r41lctw6if1uwed7y0wmwb3p257u6usdp5vry9e89706ix6ahgp1dsg59zakmrb4cxfkxjxcfglvys80n59h0qvy61wekn9hl0t73zwzwjh2eyzuf860ih8gusbs0t6jgfmi4gbcdamh3bsil4kc22fn4jz5gm7ecovcn7sxuky32tuxas7bi4jhtu9yh0y6bkirruqbjd8fgi1f0c0i5zcjfwy6ndzq3x9d1d60z3rc07c1ikh0g4jyfulfmpansgw293ouq4cdapqrhihi67rmfp2f2xr3muhevjs09oh3bi1ug48aphb1ef7sb0ol6a59lsqs9qb2mz7v1uo2ew4yzlb2eevrre91x1zvdk4r6w2j00tjib75uw0apqtaryyntolkqi0mp194dethqrg7hj9va0i6ipunlg3j3i8gzadiyzxfjflkfs47zmryoxd7ex4dd0tg5cplt831xzd9ylu3av6xpjua32idie5dcgd8a9j3468m5txnqsxbtsq5jiskedzvbk2fc00pcouvb2q9xpshre5nx0eglv2c88kcxouxb2fj2z3mv5mk7iyvt100ftw5tzy5k9axagup2c7sp2ql673v0b24kgwlce9c4kxkuiirqth9eup3snrpx2yuux4922jgowdml3ge9qmdx8xpikbpec0b6d6glzdqao41l95fgtu5gmji50q35dz2d8mdohkckmal9hvh2nr3bnyusuuwpi3jy36e3dbkt9rdtz31cc09vzjs0toyo8j0eshc7k1wdoxmf1sr9hg06282l73nbpea8yskp0xhus8smzbvegcl557ej8k1qb8b6f0sd77kcjpqndi6yuf2n5lktulhzj41u20lmrnk4g24aee7w6mz59qnpanwd1i4u4riugah5z4uqw9yqlw8yojy8g7ds2ez3biel9k10cd59ph8dksijod18l8crvy8txnn5rnk54tdud5p8222rvemk1kbpe30f08i9ryhqdvr8jphf5rhgpsz21xjbjdg379wcf3rjp7sayf86x5u6g2h70cdipwpzfg84z1200shgt5cj2h3q15gytzfotyoe48qn4f76atklgb9ufirtgp2uiw279t68fog9nxs56b1wpt77zrwkydh8pkuttbwfxwpfz4frnndb4owor608jgsy59nz3qafj3jrd30feoh0lkpliaxo0xvltqr82rvc4qx13s0cn6qq6s4qlad68t8h9spzri3d57acou0vnn7az16w3aa1zj8a6dajpp5g3n376fmi7b2a4yiev9m9un1ki33xpkfyrfuucngl7m5md270rnaekeshlxir0wkv0b7n5zvapqxvkttn5pkazgew8ctc1aud1y40lz7yubhj7s7qbrel7zgy50mg9twq1wr11vxbr9dyv0udnjqm65sbfb7unw50vdeum05kzdvs7ec6xklbs7jmiesl63ztqnzwimfoyknfr52rbqnurqku5px1egtj744rj05p04agndv6igh37wfi2btj1r9p2of1vsf79dznyjtlfiy2vy3yzompry4dpasp3k9uuvewjplaoxgmfpsg9dclqmfsvw0cgyxofxhaucltgwikoydpbkize6nf65n8z3t8g897rwqsc22cd9u8289kpu
bpnl74jbxj4vkbgay2g39i7kpk99kzu7pm3rlykhf0ges7pc0h6dkc9nanwntqa5cfxlirjdy1n1gbub4guydxasiqb80vfllcg8cxyb68fq3awhguz787l1dzv3od3jheknntua6oj8xwvbfsvfqgv192fsq8m0lct2qegh7a19t6gpc1755bj0aw4whsptquxuplxzuk9372iqmwzqyug3hkwrtmnjhz9vdl7o7xo3z9kk8ciw3fac8rqah764otez2h65d8fcs1abtdzngzzaw7igba50pihcnvf8spxscr7w2oa5xhinuyiyu3de4hor6nw7bebea1wrffa1uwn47rjndiji2mvo0di1xw7sy217fe4ru2t65qo8luupsfthuk0797ilq1dycnxjq4ubudj9wqk2zc1qd05t0v54mjds7f7v1ir3vm1elh24fg1xzsyj9eeiey3t2rg57fta6g0yqofxyz2d3x56tbade1nsfxiwtovuqed0gui8i7qiqa43ix87gq1ptc29m15u3q4q2zqk489a3ji4rdfuham0n6peyabw2ke6icqnzptbw3xwdjbp7hqxz7tmo2ctmls9cvnimpz5hs2nlbrk1mp2ko9xsdhs7ka10w75g2651nsj7j2wubklkr9yrf4j2ygfkmmt2fx6744mn7yrm7jwrrzfpt5k0twiekjnarders1kqnrdrgvq3wfstsh1unf144wooigkve96k3nvt6ivxd5zvwougfrzrzc9idc851jw1hzokql5ouawa4hyyf4buyt9s8xyca9ztmjx7kimges4aevss1cuvj6rbalh8ikhsaxqhjya2e9dufrosd38raoy4vh4svdtwdldb5wixizl0kac1w7pk5uq8cuntgao6avwt84fuk9t2j9d2tph4i01lsx2hx0f0q9xco3vqjlcgxvtp9i7mgq6i20ly2d60ftey4zt7483badtk1588sewtq08ny58hwqwr8ha7pdodwt0ah061atnb50w55uqlvnmub9e9y0vqlj6vbf2zijsyfvv3vt2f6ojy344ks2ahq99fq6sus4ckavv161atbmpaxk37op03nayyg6fldt59cvaxdn2csorafpr5eiv3pea4wdiqg20mfihgrxy7z1y37ultafdv8sd87muj2w45nl1cb18l86lt2cukpfqcikkurw2w6kkmgbo5f576fhjman7ypjw8uk099yw53admk06lur5mowe5iaug3xzff9k2dmo20p30ephc1y6h837g4hvmqwxso9hfu68a8gje6wzzfiahsx045aaluaqku7y7sgp10l7c02359d6s5jfk63vzw2pqbdo3m44gau6zuhf6f2kfhghmstxjw1s63pjbhxsz9731f2eezhaa2cjy0ggkacjtvhodx5gxfr9ehl893ocmtnamanr994cnfypbd1 == \n\y\2\o\n\x\j\1\v\i\e\3\5\5\r\p\a\p\f\a\t\2\o\o\q\m\x\n\o\9\e\n\c\e\v\n\c\p\s\f\7\b\1\p\a\5\b\c\s\b\d\b\s\f\m\x\e\n\m\c\p\z\1\2\0\3\s\i\7\o\i\i\s\z\8\g\o\u\g\o\u\g\p\a\v\p\a\2\b\9\e\j\z\7\6\0\q\s\m\o\e\0\q\r\7\h\k\u\i\a\0\6\4\z\n\8\u\z\1\i\v\w\u\n\c\k\c\o\y\w\z\5\o\7\2\8\4\2\a\w\b\l\u\9\0\f\t\l\l\0\9\w\x\l\i\2\t\1\6\u\h\u\k\j\8\d\3\0\a\z\9\m\n\h\5\o\5\j\n\0\e\5\7\y\r\8\z\r\k\8\w\k\y\d\c\u\b\c\f\0\i\w\v\4\z\8\b\v\c\8\y\y\c\p\5\1\7\a\b\i\x\i\s\7\u\j\x\8\l\q\3\c\p\l\t\8\e\l\4\t\k\3\h\g\2\2\i\9\q\i\a\g\3\a\s\6\7\v\t\0\a\4\t\v\x\9\h\7\t\4\g\q\2\c\5\u\d\n\1\s\e\p\n\y\j\w\a\h\g\c\y\4\l\x\s\j\s\1\h\h\o\n\3\b\o\3\l\y\3\b\b\v\0\6\z\m\k\n\i\a\0\n\m\d\b\j\f\v\8\x\g\7\5\b\n\x\u\3\h\9\c\t\7\o\6\t\b\8\6\9\l\9\q\r\o\n\3\t\7\r\1\j\i\9\o\g\c\q\h\m\v\i\d\8\4\h\d\6\e\8\f\x\c\5\p\7\l\y\s\i\f\b\p\f\z\r\7\s\m\5\x\m\a\4\x\w\8\s\8\k\l\z\5\h\j\o\y\y\o\7\c\5\5\7\3\o\7\3\q\2\c\y\w\k\8\y\v\5\8\d\g\r\w\2\k\0\k\r\m\c\u\5\6\r\u\u\z\2\y\b\i\o\5\d\6\q\r\q\i\f\8\j\9\r\b\1\4\a\r\2\u\o\h\3\o\g\i\6\0\w\9\0\q\u\k\g\l\k\w\6\x\a\g\l\f\q\5\u\8\1\v\w\4\3\d\b\a\r\w\0\k\b\j\s\7\p\4\w\b\k\d\l\0\v\d\q\b\b\x\2\0\f\2\y\o\m\1\q\m\e\p\t\5\g\j\6\t\y\2\s\k\9\0\k\k\8\2\d\8\a\6\l\x\v\y\f\x\0\l\q\w\2\0\6\1\x\l\u\y\j\7\1\1\q\1\g\k\a\j\q\y\q\e\e\c\o\u\r\q\4\e\r\h\q\7\g\j\z\l\j\j\1\z\r\x\4\t\d\l\z\v\l\6\j\i\j\o\6\9\m\p\b\j\z\r\l\6\6\j\g\b\r\b\3\s\7\5\w\q\j\5\u\z\t\x\o\z\z\b\7\4\3\r\w\8\i\z\7\k\u\1\y\z\m\k\g\0\t\g\s\7\i\v\5\i\w\2\1\l\s\i\g\d\z\n\t\g\m\x\s\b\u\y\i\b\8\o\2\4\r\e\6\0\g\8\5\v\h\e\2\x\8\7\p\c\2\s\6\g\0\8\v\x\i\7\n\1\i\v\d\t\c\5\5\s\4\l\a\9\h\g\b\m\a\g\g\u\1\o\v\y\t\u\o\d\5\u\y\g\g\o\3\b\g\g\4\q\b\p\z\d\z\b\i\j\b\e\k\a\d\8\v\6\d\p\2\a\0\c\f\h\h\2\z\h\d\y\n\x\b\p\r\b\3\m\3\s\q\g\e\f\o\1\3\t\3\g\w\p\d\s\r\d\m\w\c\u\5\a\x\3\u\d\u\6\k\m\h\x\3\i\9\5\i\p\t\3\n\c\e\7\u\n\d\i\w\z\q\b\j\s\3\b\f\s\r\f\8\e\2\0\u\0\0\d\d\e\i\v\u\j\g\y\m\a\f\7\m\e\v\h\8\s\g\z\d\t\m\o\s\x\p\3\j\1\r\p\4\1\v\g\x\b\8\h\1\0\i\9\c\q\m\q\c\m\l\n\2\1\x\7\h\1\0\0\5\0\3\r\4\1\l\c\t\w\6\i\f\1\u\w\e\d\7\y\0\w\m\w\b\3\p\2\5\7\u\6\u\s\d\p\5\v\r\y\9\e\8\9\7\0\6\i\x\6\a\h\g\p\1\d\s\g\5\9\z\a\k\m\r\b\4\c\x\f\k\x\j\x\c\f\g\l\v\y\s\8\0\n\5\9\h
\0\q\v\y\6\1\w\e\k\n\9\h\l\0\t\7\3\z\w\z\w\j\h\2\e\y\z\u\f\8\6\0\i\h\8\g\u\s\b\s\0\t\6\j\g\f\m\i\4\g\b\c\d\a\m\h\3\b\s\i\l\4\k\c\2\2\f\n\4\j\z\5\g\m\7\e\c\o\v\c\n\7\s\x\u\k\y\3\2\t\u\x\a\s\7\b\i\4\j\h\t\u\9\y\h\0\y\6\b\k\i\r\r\u\q\b\j\d\8\f\g\i\1\f\0\c\0\i\5\z\c\j\f\w\y\6\n\d\z\q\3\x\9\d\1\d\6\0\z\3\r\c\0\7\c\1\i\k\h\0\g\4\j\y\f\u\l\f\m\p\a\n\s\g\w\2\9\3\o\u\q\4\c\d\a\p\q\r\h\i\h\i\6\7\r\m\f\p\2\f\2\x\r\3\m\u\h\e\v\j\s\0\9\o\h\3\b\i\1\u\g\4\8\a\p\h\b\1\e\f\7\s\b\0\o\l\6\a\5\9\l\s\q\s\9\q\b\2\m\z\7\v\1\u\o\2\e\w\4\y\z\l\b\2\e\e\v\r\r\e\9\1\x\1\z\v\d\k\4\r\6\w\2\j\0\0\t\j\i\b\7\5\u\w\0\a\p\q\t\a\r\y\y\n\t\o\l\k\q\i\0\m\p\1\9\4\d\e\t\h\q\r\g\7\h\j\9\v\a\0\i\6\i\p\u\n\l\g\3\j\3\i\8\g\z\a\d\i\y\z\x\f\j\f\l\k\f\s\4\7\z\m\r\y\o\x\d\7\e\x\4\d\d\0\t\g\5\c\p\l\t\8\3\1\x\z\d\9\y\l\u\3\a\v\6\x\p\j\u\a\3\2\i\d\i\e\5\d\c\g\d\8\a\9\j\3\4\6\8\m\5\t\x\n\q\s\x\b\t\s\q\5\j\i\s\k\e\d\z\v\b\k\2\f\c\0\0\p\c\o\u\v\b\2\q\9\x\p\s\h\r\e\5\n\x\0\e\g\l\v\2\c\8\8\k\c\x\o\u\x\b\2\f\j\2\z\3\m\v\5\m\k\7\i\y\v\t\1\0\0\f\t\w\5\t\z\y\5\k\9\a\x\a\g\u\p\2\c\7\s\p\2\q\l\6\7\3\v\0\b\2\4\k\g\w\l\c\e\9\c\4\k\x\k\u\i\i\r\q\t\h\9\e\u\p\3\s\n\r\p\x\2\y\u\u\x\4\9\2\2\j\g\o\w\d\m\l\3\g\e\9\q\m\d\x\8\x\p\i\k\b\p\e\c\0\b\6\d\6\g\l\z\d\q\a\o\4\1\l\9\5\f\g\t\u\5\g\m\j\i\5\0\q\3\5\d\z\2\d\8\m\d\o\h\k\c\k\m\a\l\9\h\v\h\2\n\r\3\b\n\y\u\s\u\u\w\p\i\3\j\y\3\6\e\3\d\b\k\t\9\r\d\t\z\3\1\c\c\0\9\v\z\j\s\0\t\o\y\o\8\j\0\e\s\h\c\7\k\1\w\d\o\x\m\f\1\s\r\9\h\g\0\6\2\8\2\l\7\3\n\b\p\e\a\8\y\s\k\p\0\x\h\u\s\8\s\m\z\b\v\e\g\c\l\5\5\7\e\j\8\k\1\q\b\8\b\6\f\0\s\d\7\7\k\c\j\p\q\n\d\i\6\y\u\f\2\n\5\l\k\t\u\l\h\z\j\4\1\u\2\0\l\m\r\n\k\4\g\2\4\a\e\e\7\w\6\m\z\5\9\q\n\p\a\n\w\d\1\i\4\u\4\r\i\u\g\a\h\5\z\4\u\q\w\9\y\q\l\w\8\y\o\j\y\8\g\7\d\s\2\e\z\3\b\i\e\l\9\k\1\0\c\d\5\9\p\h\8\d\k\s\i\j\o\d\1\8\l\8\c\r\v\y\8\t\x\n\n\5\r\n\k\5\4\t\d\u\d\5\p\8\2\2\2\r\v\e\m\k\1\k\b\p\e\3\0\f\0\8\i\9\r\y\h\q\d\v\r\8\j\p\h\f\5\r\h\g\p\s\z\2\1\x\j\b\j\d\g\3\7\9\w\c\f\3\r\j\p\7\s\a\y\f\8\6\x\5\u\6\g\2\h\7\0\c\d\i\p\w\p\z\f\g\8\4\z\1\2\0\0\s\h\g\t\5\c\j\2\h\3\q\1\5\g\y\t\z\f\o\t\y\o\e\4\8\q\n\4\f\7\6\a\t\k\l\g\b\9\u\f\i\r\t\g\p\2\u\i\w\2\7\9\t\6\8\f\o\g\9\n\x\s\5\6\b\1\w\p\t\7\7\z\r\w\k\y\d\h\8\p\k\u\t\t\b\w\f\x\w\p\f\z\4\f\r\n\n\d\b\4\o\w\o\r\6\0\8\j\g\s\y\5\9\n\z\3\q\a\f\j\3\j\r\d\3\0\f\e\o\h\0\l\k\p\l\i\a\x\o\0\x\v\l\t\q\r\8\2\r\v\c\4\q\x\1\3\s\0\c\n\6\q\q\6\s\4\q\l\a\d\6\8\t\8\h\9\s\p\z\r\i\3\d\5\7\a\c\o\u\0\v\n\n\7\a\z\1\6\w\3\a\a\1\z\j\8\a\6\d\a\j\p\p\5\g\3\n\3\7\6\f\m\i\7\b\2\a\4\y\i\e\v\9\m\9\u\n\1\k\i\3\3\x\p\k\f\y\r\f\u\u\c\n\g\l\7\m\5\m\d\2\7\0\r\n\a\e\k\e\s\h\l\x\i\r\0\w\k\v\0\b\7\n\5\z\v\a\p\q\x\v\k\t\t\n\5\p\k\a\z\g\e\w\8\c\t\c\1\a\u\d\1\y\4\0\l\z\7\y\u\b\h\j\7\s\7\q\b\r\e\l\7\z\g\y\5\0\m\g\9\t\w\q\1\w\r\1\1\v\x\b\r\9\d\y\v\0\u\d\n\j\q\m\6\5\s\b\f\b\7\u\n\w\5\0\v\d\e\u\m\0\5\k\z\d\v\s\7\e\c\6\x\k\l\b\s\7\j\m\i\e\s\l\6\3\z\t\q\n\z\w\i\m\f\o\y\k\n\f\r\5\2\r\b\q\n\u\r\q\k\u\5\p\x\1\e\g\t\j\7\4\4\r\j\0\5\p\0\4\a\g\n\d\v\6\i\g\h\3\7\w\f\i\2\b\t\j\1\r\9\p\2\o\f\1\v\s\f\7\9\d\z\n\y\j\t\l\f\i\y\2\v\y\3\y\z\o\m\p\r\y\4\d\p\a\s\p\3\k\9\u\u\v\e\w\j\p\l\a\o\x\g\m\f\p\s\g\9\d\c\l\q\m\f\s\v\w\0\c\g\y\x\o\f\x\h\a\u\c\l\t\g\w\i\k\o\y\d\p\b\k\i\z\e\6\n\f\6\5\n\8\z\3\t\8\g\8\9\7\r\w\q\s\c\2\2\c\d\9\u\8\2\8\9\k\p\u\b\p\n\l\7\4\j\b\x\j\4\v\k\b\g\a\y\2\g\3\9\i\7\k\p\k\9\9\k\z\u\7\p\m\3\r\l\y\k\h\f\0\g\e\s\7\p\c\0\h\6\d\k\c\9\n\a\n\w\n\t\q\a\5\c\f\x\l\i\r\j\d\y\1\n\1\g\b\u\b\4\g\u\y\d\x\a\s\i\q\b\8\0\v\f\l\l\c\g\8\c\x\y\b\6\8\f\q\3\a\w\h\g\u\z\7\8\7\l\1\d\z\v\3\o\d\3\j\h\e\k\n\n\t\u\a\6\o\j\8\x\w\v\b\f\s\v\f\q\g\v\1\9\2\f\s\q\8\m\0\l\c\t\2\q\e\g\h\7\a\1\9\t\6\g\p\c\1\7\5\5\b\j\0\a\w\4\w\h\s\
p\t\q\u\x\u\p\l\x\z\u\k\9\3\7\2\i\q\m\w\z\q\y\u\g\3\h\k\w\r\t\m\n\j\h\z\9\v\d\l\7\o\7\x\o\3\z\9\k\k\8\c\i\w\3\f\a\c\8\r\q\a\h\7\6\4\o\t\e\z\2\h\6\5\d\8\f\c\s\1\a\b\t\d\z\n\g\z\z\a\w\7\i\g\b\a\5\0\p\i\h\c\n\v\f\8\s\p\x\s\c\r\7\w\2\o\a\5\x\h\i\n\u\y\i\y\u\3\d\e\4\h\o\r\6\n\w\7\b\e\b\e\a\1\w\r\f\f\a\1\u\w\n\4\7\r\j\n\d\i\j\i\2\m\v\o\0\d\i\1\x\w\7\s\y\2\1\7\f\e\4\r\u\2\t\6\5\q\o\8\l\u\u\p\s\f\t\h\u\k\0\7\9\7\i\l\q\1\d\y\c\n\x\j\q\4\u\b\u\d\j\9\w\q\k\2\z\c\1\q\d\0\5\t\0\v\5\4\m\j\d\s\7\f\7\v\1\i\r\3\v\m\1\e\l\h\2\4\f\g\1\x\z\s\y\j\9\e\e\i\e\y\3\t\2\r\g\5\7\f\t\a\6\g\0\y\q\o\f\x\y\z\2\d\3\x\5\6\t\b\a\d\e\1\n\s\f\x\i\w\t\o\v\u\q\e\d\0\g\u\i\8\i\7\q\i\q\a\4\3\i\x\8\7\g\q\1\p\t\c\2\9\m\1\5\u\3\q\4\q\2\z\q\k\4\8\9\a\3\j\i\4\r\d\f\u\h\a\m\0\n\6\p\e\y\a\b\w\2\k\e\6\i\c\q\n\z\p\t\b\w\3\x\w\d\j\b\p\7\h\q\x\z\7\t\m\o\2\c\t\m\l\s\9\c\v\n\i\m\p\z\5\h\s\2\n\l\b\r\k\1\m\p\2\k\o\9\x\s\d\h\s\7\k\a\1\0\w\7\5\g\2\6\5\1\n\s\j\7\j\2\w\u\b\k\l\k\r\9\y\r\f\4\j\2\y\g\f\k\m\m\t\2\f\x\6\7\4\4\m\n\7\y\r\m\7\j\w\r\r\z\f\p\t\5\k\0\t\w\i\e\k\j\n\a\r\d\e\r\s\1\k\q\n\r\d\r\g\v\q\3\w\f\s\t\s\h\1\u\n\f\1\4\4\w\o\o\i\g\k\v\e\9\6\k\3\n\v\t\6\i\v\x\d\5\z\v\w\o\u\g\f\r\z\r\z\c\9\i\d\c\8\5\1\j\w\1\h\z\o\k\q\l\5\o\u\a\w\a\4\h\y\y\f\4\b\u\y\t\9\s\8\x\y\c\a\9\z\t\m\j\x\7\k\i\m\g\e\s\4\a\e\v\s\s\1\c\u\v\j\6\r\b\a\l\h\8\i\k\h\s\a\x\q\h\j\y\a\2\e\9\d\u\f\r\o\s\d\3\8\r\a\o\y\4\v\h\4\s\v\d\t\w\d\l\d\b\5\w\i\x\i\z\l\0\k\a\c\1\w\7\p\k\5\u\q\8\c\u\n\t\g\a\o\6\a\v\w\t\8\4\f\u\k\9\t\2\j\9\d\2\t\p\h\4\i\0\1\l\s\x\2\h\x\0\f\0\q\9\x\c\o\3\v\q\j\l\c\g\x\v\t\p\9\i\7\m\g\q\6\i\2\0\l\y\2\d\6\0\f\t\e\y\4\z\t\7\4\8\3\b\a\d\t\k\1\5\8\8\s\e\w\t\q\0\8\n\y\5\8\h\w\q\w\r\8\h\a\7\p\d\o\d\w\t\0\a\h\0\6\1\a\t\n\b\5\0\w\5\5\u\q\l\v\n\m\u\b\9\e\9\y\0\v\q\l\j\6\v\b\f\2\z\i\j\s\y\f\v\v\3\v\t\2\f\6\o\j\y\3\4\4\k\s\2\a\h\q\9\9\f\q\6\s\u\s\4\c\k\a\v\v\1\6\1\a\t\b\m\p\a\x\k\3\7\o\p\0\3\n\a\y\y\g\6\f\l\d\t\5\9\c\v\a\x\d\n\2\c\s\o\r\a\f\p\r\5\e\i\v\3\p\e\a\4\w\d\i\q\g\2\0\m\f\i\h\g\r\x\y\7\z\1\y\3\7\u\l\t\a\f\d\v\8\s\d\8\7\m\u\j\2\w\4\5\n\l\1\c\b\1\8\l\8\6\l\t\2\c\u\k\p\f\q\c\i\k\k\u\r\w\2\w\6\k\k\m\g\b\o\5\f\5\7\6\f\h\j\m\a\n\7\y\p\j\w\8\u\k\0\9\9\y\w\5\3\a\d\m\k\0\6\l\u\r\5\m\o\w\e\5\i\a\u\g\3\x\z\f\f\9\k\2\d\m\o\2\0\p\3\0\e\p\h\c\1\y\6\h\8\3\7\g\4\h\v\m\q\w\x\s\o\9\h\f\u\6\8\a\8\g\j\e\6\w\z\z\f\i\a\h\s\x\0\4\5\a\a\l\u\a\q\k\u\7\y\7\s\g\p\1\0\l\7\c\0\2\3\5\9\d\6\s\5\j\f\k\6\3\v\z\w\2\p\q\b\d\o\3\m\4\4\g\a\u\6\z\u\h\f\6\f\2\k\f\h\g\h\m\s\t\x\j\w\1\s\6\3\p\j\b\h\x\s\z\9\7\3\1\f\2\e\e\z\h\a\a\2\c\j\y\0\g\g\k\a\c\j\t\v\h\o\d\x\5\g\x\f\r\9\e\h\l\8\9\3\o\c\m\t\n\a\m\a\n\r\9\9\4\c\n\f\y\p\b\d\1 ]] 00:41:00.732 00:41:00.732 real 0m3.943s 00:41:00.732 user 0m3.379s 00:41:00.732 sys 0m0.443s 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 
-- # local count=1 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:41:00.732 21:54:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:41:00.732 [2024-07-15 21:54:33.945664] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:00.732 [2024-07-15 21:54:33.945878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168048 ] 00:41:00.732 { 00:41:00.732 "subsystems": [ 00:41:00.732 { 00:41:00.732 "subsystem": "bdev", 00:41:00.732 "config": [ 00:41:00.732 { 00:41:00.732 "params": { 00:41:00.732 "trtype": "pcie", 00:41:00.732 "traddr": "0000:00:10.0", 00:41:00.732 "name": "Nvme0" 00:41:00.732 }, 00:41:00.732 "method": "bdev_nvme_attach_controller" 00:41:00.732 }, 00:41:00.732 { 00:41:00.732 "method": "bdev_wait_for_examine" 00:41:00.732 } 00:41:00.732 ] 00:41:00.732 } 00:41:00.732 ] 00:41:00.732 } 00:41:00.732 [2024-07-15 21:54:34.088079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.991 [2024-07-15 21:54:34.325125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.493  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:02.493 00:41:02.493 21:54:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:02.493 ************************************ 00:41:02.493 END TEST spdk_dd_basic_rw 00:41:02.493 ************************************ 00:41:02.493 00:41:02.493 real 0m45.662s 00:41:02.493 user 0m39.143s 00:41:02.493 sys 0m5.015s 00:41:02.493 21:54:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:02.493 21:54:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:41:02.493 21:54:35 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:41:02.493 21:54:35 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:41:02.493 21:54:35 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:02.493 21:54:35 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:02.493 21:54:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:02.493 ************************************ 00:41:02.493 START TEST spdk_dd_posix 00:41:02.493 ************************************ 00:41:02.493 21:54:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:41:02.750 * Looking for test storage... 
00:41:02.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:41:02.750 * First test run, using AIO 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:02.750 ************************************ 00:41:02.750 START TEST dd_flag_append 00:41:02.750 ************************************ 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=yz23mg4lolpxjpfcmj1cci3uwldrdlkv 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=br3h7dhiv84uee0hh4g1sdx3qfvotdhe 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s yz23mg4lolpxjpfcmj1cci3uwldrdlkv 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s br3h7dhiv84uee0hh4g1sdx3qfvotdhe 00:41:02.750 21:54:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:41:02.750 [2024-07-15 21:54:36.034309] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:02.750 [2024-07-15 21:54:36.034503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168127 ] 00:41:03.007 [2024-07-15 21:54:36.197468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.266 [2024-07-15 21:54:36.399718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.478  Copying: 32/32 [B] (average 31 kBps) 00:41:04.478 00:41:04.736 ************************************ 00:41:04.736 END TEST dd_flag_append 00:41:04.736 ************************************ 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ br3h7dhiv84uee0hh4g1sdx3qfvotdheyz23mg4lolpxjpfcmj1cci3uwldrdlkv == \b\r\3\h\7\d\h\i\v\8\4\u\e\e\0\h\h\4\g\1\s\d\x\3\q\f\v\o\t\d\h\e\y\z\2\3\m\g\4\l\o\l\p\x\j\p\f\c\m\j\1\c\c\i\3\u\w\l\d\r\d\l\k\v ]] 00:41:04.736 00:41:04.736 real 0m1.902s 00:41:04.736 user 0m1.593s 00:41:04.736 sys 0m0.176s 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:04.736 ************************************ 00:41:04.736 START TEST dd_flag_directory 00:41:04.736 ************************************ 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
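The dd_flag_append run that finishes above reduces to the check sketched here; $dump0 and $dump1 stand for the two 32-byte random strings gen_bytes produced (yz23... and br3h... in this log), and $spdk_dd for the binary path shown there. A sketch of the assertion, not a verbatim excerpt of dd/posix.sh:

  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  # --oflag=append must append to the destination instead of truncating it.
  "$spdk_dd" --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]]   # old dump1 content first, then dump0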
00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:04.736 21:54:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:04.736 [2024-07-15 21:54:37.998107] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:04.736 [2024-07-15 21:54:37.998303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168181 ] 00:41:04.993 [2024-07-15 21:54:38.155828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.993 [2024-07-15 21:54:38.353699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.560 [2024-07-15 21:54:38.659705] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:05.560 [2024-07-15 21:54:38.659867] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:05.560 [2024-07-15 21:54:38.659904] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:06.130 [2024-07-15 21:54:39.409659] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:06.717 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:41:06.717 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:06.717 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:41:06.717 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:41:06.717 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:41:06.717 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:06.717 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:06.718 21:54:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:06.718 [2024-07-15 21:54:39.865556] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:06.718 [2024-07-15 21:54:39.865844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168208 ] 00:41:06.718 [2024-07-15 21:54:40.035952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.976 [2024-07-15 21:54:40.239166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.234 [2024-07-15 21:54:40.546987] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:07.234 [2024-07-15 21:54:40.547146] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:07.234 [2024-07-15 21:54:40.547203] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:08.169 [2024-07-15 21:54:41.300384] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:41:08.457 ************************************ 00:41:08.457 END TEST dd_flag_directory 00:41:08.457 ************************************ 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:08.457 00:41:08.457 real 0m3.754s 00:41:08.457 user 0m3.155s 00:41:08.457 sys 0m0.396s 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix -- 
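The dd_flag_directory test that ends above is a pair of expected failures: dd.dump0 is a regular file, so opening it with the directory flag has to produce the "Not a directory" errors seen in the log, and the NOT wrapper turns the non-zero exit into a pass. Roughly, and with file names as in the log:

  ! "$spdk_dd" --if=dd.dump0 --iflag=directory --of=dd.dump0   # input side must refuse a plain file
  ! "$spdk_dd" --if=dd.dump0 --of=dd.dump0 --oflag=directory   # output side likewise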
common/autotest_common.sh@1105 -- # xtrace_disable 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:08.457 ************************************ 00:41:08.457 START TEST dd_flag_nofollow 00:41:08.457 ************************************ 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:08.457 21:54:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:08.457 [2024-07-15 21:54:41.810310] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:08.457 [2024-07-15 21:54:41.810515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168278 ] 00:41:08.715 [2024-07-15 21:54:41.969815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:08.973 [2024-07-15 21:54:42.172756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.232 [2024-07-15 21:54:42.474012] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:09.232 [2024-07-15 21:54:42.474168] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:09.232 [2024-07-15 21:54:42.474210] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:10.168 [2024-07-15 21:54:43.218355] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:41:10.427 21:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:10.427 [2024-07-15 21:54:43.658671] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:10.427 [2024-07-15 21:54:43.658909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168310 ] 00:41:10.685 [2024-07-15 21:54:43.830588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.685 [2024-07-15 21:54:44.031815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.253 [2024-07-15 21:54:44.330954] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:11.253 [2024-07-15 21:54:44.331115] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:11.253 [2024-07-15 21:54:44.331180] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:11.822 [2024-07-15 21:54:45.088746] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:41:12.390 21:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:12.390 [2024-07-15 21:54:45.540090] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:12.390 [2024-07-15 21:54:45.540304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168333 ] 00:41:12.390 [2024-07-15 21:54:45.699681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.649 [2024-07-15 21:54:45.893656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.366  Copying: 512/512 [B] (average 500 kBps) 00:41:14.366 00:41:14.366 ************************************ 00:41:14.366 END TEST dd_flag_nofollow 00:41:14.366 ************************************ 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 8eg85etvs738pe5zrl9ewusuu7nhnremvng8529h70y6mkuuz2d4w16p5uis7r82xsg2yuu5d9qe71vbbg7a6pdss193vwli1mgydhqweltb8zvdecp4qacpw63cg62ocsjgdw26ut3oj0kb412epmsjnflcq5odt46n1v4yxnb0gtlgkyeaip5ym3vebw4uexvqj5ozieel8a7d2mh67uhefljnmw4ak64papbnoebtlys05oqnfes42fjft1g9ci8f3s378z7gz6161kob60ewukwwsf3gbvrhl5hfor0mmr4ek6shps3caoprae99jqao86l2ubjbnfgivvuq8jv50dwqbn1o6qyzdkbu7nquip1g4iq9s09569y1na9zaguty7ujtm50zmjd5dvqn6vo1zynoug0mnfwlbsbyhrcwfqsel64c5gqw63l80o52r1rcod6y79r7imzda8pz9ttmxz63i1wgj5gabps6rcv12rqksorp4bnecvs0twx == \8\e\g\8\5\e\t\v\s\7\3\8\p\e\5\z\r\l\9\e\w\u\s\u\u\7\n\h\n\r\e\m\v\n\g\8\5\2\9\h\7\0\y\6\m\k\u\u\z\2\d\4\w\1\6\p\5\u\i\s\7\r\8\2\x\s\g\2\y\u\u\5\d\9\q\e\7\1\v\b\b\g\7\a\6\p\d\s\s\1\9\3\v\w\l\i\1\m\g\y\d\h\q\w\e\l\t\b\8\z\v\d\e\c\p\4\q\a\c\p\w\6\3\c\g\6\2\o\c\s\j\g\d\w\2\6\u\t\3\o\j\0\k\b\4\1\2\e\p\m\s\j\n\f\l\c\q\5\o\d\t\4\6\n\1\v\4\y\x\n\b\0\g\t\l\g\k\y\e\a\i\p\5\y\m\3\v\e\b\w\4\u\e\x\v\q\j\5\o\z\i\e\e\l\8\a\7\d\2\m\h\6\7\u\h\e\f\l\j\n\m\w\4\a\k\6\4\p\a\p\b\n\o\e\b\t\l\y\s\0\5\o\q\n\f\e\s\4\2\f\j\f\t\1\g\9\c\i\8\f\3\s\3\7\8\z\7\g\z\6\1\6\1\k\o\b\6\0\e\w\u\k\w\w\s\f\3\g\b\v\r\h\l\5\h\f\o\r\0\m\m\r\4\e\k\6\s\h\p\s\3\c\a\o\p\r\a\e\9\9\j\q\a\o\8\6\l\2\u\b\j\b\n\f\g\i\v\v\u\q\8\j\v\5\0\d\w\q\b\n\1\o\6\q\y\z\d\k\b\u\7\n\q\u\i\p\1\g\4\i\q\9\s\0\9\5\6\9\y\1\n\a\9\z\a\g\u\t\y\7\u\j\t\m\5\0\z\m\j\d\5\d\v\q\n\6\v\o\1\z\y\n\o\u\g\0\m\n\f\w\l\b\s\b\y\h\r\c\w\f\q\s\e\l\6\4\c\5\g\q\w\6\3\l\8\0\o\5\2\r\1\r\c\o\d\6\y\7\9\r\7\i\m\z\d\a\8\p\z\9\t\t\m\x\z\6\3\i\1\w\g\j\5\g\a\b\p\s\6\r\c\v\1\2\r\q\k\s\o\r\p\4\b\n\e\c\v\s\0\t\w\x ]] 00:41:14.366 00:41:14.366 real 0m5.605s 00:41:14.366 user 0m4.645s 00:41:14.366 sys 0m0.630s 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:14.366 ************************************ 00:41:14.366 START TEST dd_flag_noatime 00:41:14.366 ************************************ 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime 
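The dd_flag_nofollow block above creates two symlinks and expects the nofollow flag to reject them (the "Too many levels of symbolic links" errors), while a final copy through the link without the flag succeeds (the Copying: 512/512 line). Condensed, with paths as in the log:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  ! "$spdk_dd" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # must not follow the source link
  ! "$spdk_dd" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # nor the destination link
  "$spdk_dd" --if=dd.dump0.link --of=dd.dump1                      # plain copy through the link is fine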
-- dd/posix.sh@54 -- # local atime_of 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721080486 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721080487 00:41:14.366 21:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:41:15.300 21:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:15.300 [2024-07-15 21:54:48.498358] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:15.300 [2024-07-15 21:54:48.498567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168390 ] 00:41:15.300 [2024-07-15 21:54:48.658153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.559 [2024-07-15 21:54:48.856363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.222  Copying: 512/512 [B] (average 500 kBps) 00:41:17.222 00:41:17.222 21:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:17.222 21:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721080486 )) 00:41:17.222 21:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:17.222 21:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721080487 )) 00:41:17.222 21:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:17.222 [2024-07-15 21:54:50.372741] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:17.222 [2024-07-15 21:54:50.373381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168420 ] 00:41:17.222 [2024-07-15 21:54:50.541987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:17.493 [2024-07-15 21:54:50.743130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.125  Copying: 512/512 [B] (average 500 kBps) 00:41:19.125 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:19.125 ************************************ 00:41:19.125 END TEST dd_flag_noatime 00:41:19.125 ************************************ 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721080491 )) 00:41:19.125 00:41:19.125 real 0m4.802s 00:41:19.125 user 0m3.151s 00:41:19.125 sys 0m0.390s 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:19.125 ************************************ 00:41:19.125 START TEST dd_flags_misc 00:41:19.125 ************************************ 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:19.125 21:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:19.125 [2024-07-15 21:54:52.348466] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
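The dd_flag_noatime test that concludes above records the source file's access time with stat --printf=%X, copies with --iflag=noatime, and only then copies without the flag; the epoch values in the log (1721080486 and friends) are those stat results expanded by xtrace. In outline:

  atime_if=$(stat --printf=%X dd.dump0)
  sleep 1
  "$spdk_dd" --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime: reading must not touch the atime
  "$spdk_dd" --if=dd.dump0 --of=dd.dump1
  (( atime_if < $(stat --printf=%X dd.dump0) ))    # without the flag the atime moves forward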
00:41:19.125 [2024-07-15 21:54:52.348685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168484 ] 00:41:19.383 [2024-07-15 21:54:52.505495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.383 [2024-07-15 21:54:52.702404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.061  Copying: 512/512 [B] (average 500 kBps) 00:41:21.061 00:41:21.061 21:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ c6es3e1xgy1vuiti16qqsh7rlz5nf3cuyri8324fsuwrvrsrgz40igzbh46jlta70a8gpdhv14osvghk47rgso3stwd4hbk7k6hnpbe20pdawf1iu6wuyk0hymogwn6mjr2s90d1d3o3tzc1lgpp823b5tgu5ul61tcb4vf083xhd60sq5petrkeh8zqkafe03h2gzq7tn6zxe7y40h7hrzbebjiv56c4ymjnmmb4kjgtqv9wfybxv3kk9hr86kpp51np8plp338sffgc25l3fp3fggb3ydorqrp4ncenjbtb6esmjv3s3case50msde1p4583ac3lgvnhamarbkn3n12qoe9pn6h0belk4yvg2ennapww7oz25rq84k0y684sty9jcsg6z13ra7dauwdsymjvx8nlnlvj5lnot5z5eatdr2na4fk4vc1ay56xav9cup0f4shwap0t57g8zo497jo7nny31xib8ntowguooibfvxbk0y5cudb5i2nq5k == \c\6\e\s\3\e\1\x\g\y\1\v\u\i\t\i\1\6\q\q\s\h\7\r\l\z\5\n\f\3\c\u\y\r\i\8\3\2\4\f\s\u\w\r\v\r\s\r\g\z\4\0\i\g\z\b\h\4\6\j\l\t\a\7\0\a\8\g\p\d\h\v\1\4\o\s\v\g\h\k\4\7\r\g\s\o\3\s\t\w\d\4\h\b\k\7\k\6\h\n\p\b\e\2\0\p\d\a\w\f\1\i\u\6\w\u\y\k\0\h\y\m\o\g\w\n\6\m\j\r\2\s\9\0\d\1\d\3\o\3\t\z\c\1\l\g\p\p\8\2\3\b\5\t\g\u\5\u\l\6\1\t\c\b\4\v\f\0\8\3\x\h\d\6\0\s\q\5\p\e\t\r\k\e\h\8\z\q\k\a\f\e\0\3\h\2\g\z\q\7\t\n\6\z\x\e\7\y\4\0\h\7\h\r\z\b\e\b\j\i\v\5\6\c\4\y\m\j\n\m\m\b\4\k\j\g\t\q\v\9\w\f\y\b\x\v\3\k\k\9\h\r\8\6\k\p\p\5\1\n\p\8\p\l\p\3\3\8\s\f\f\g\c\2\5\l\3\f\p\3\f\g\g\b\3\y\d\o\r\q\r\p\4\n\c\e\n\j\b\t\b\6\e\s\m\j\v\3\s\3\c\a\s\e\5\0\m\s\d\e\1\p\4\5\8\3\a\c\3\l\g\v\n\h\a\m\a\r\b\k\n\3\n\1\2\q\o\e\9\p\n\6\h\0\b\e\l\k\4\y\v\g\2\e\n\n\a\p\w\w\7\o\z\2\5\r\q\8\4\k\0\y\6\8\4\s\t\y\9\j\c\s\g\6\z\1\3\r\a\7\d\a\u\w\d\s\y\m\j\v\x\8\n\l\n\l\v\j\5\l\n\o\t\5\z\5\e\a\t\d\r\2\n\a\4\f\k\4\v\c\1\a\y\5\6\x\a\v\9\c\u\p\0\f\4\s\h\w\a\p\0\t\5\7\g\8\z\o\4\9\7\j\o\7\n\n\y\3\1\x\i\b\8\n\t\o\w\g\u\o\o\i\b\f\v\x\b\k\0\y\5\c\u\d\b\5\i\2\n\q\5\k ]] 00:41:21.061 21:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:21.061 21:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:41:21.061 [2024-07-15 21:54:54.269652] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:21.061 [2024-07-15 21:54:54.269863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168512 ] 00:41:21.061 [2024-07-15 21:54:54.426784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.319 [2024-07-15 21:54:54.623086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.954  Copying: 512/512 [B] (average 500 kBps) 00:41:22.954 00:41:22.954 21:54:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ c6es3e1xgy1vuiti16qqsh7rlz5nf3cuyri8324fsuwrvrsrgz40igzbh46jlta70a8gpdhv14osvghk47rgso3stwd4hbk7k6hnpbe20pdawf1iu6wuyk0hymogwn6mjr2s90d1d3o3tzc1lgpp823b5tgu5ul61tcb4vf083xhd60sq5petrkeh8zqkafe03h2gzq7tn6zxe7y40h7hrzbebjiv56c4ymjnmmb4kjgtqv9wfybxv3kk9hr86kpp51np8plp338sffgc25l3fp3fggb3ydorqrp4ncenjbtb6esmjv3s3case50msde1p4583ac3lgvnhamarbkn3n12qoe9pn6h0belk4yvg2ennapww7oz25rq84k0y684sty9jcsg6z13ra7dauwdsymjvx8nlnlvj5lnot5z5eatdr2na4fk4vc1ay56xav9cup0f4shwap0t57g8zo497jo7nny31xib8ntowguooibfvxbk0y5cudb5i2nq5k == \c\6\e\s\3\e\1\x\g\y\1\v\u\i\t\i\1\6\q\q\s\h\7\r\l\z\5\n\f\3\c\u\y\r\i\8\3\2\4\f\s\u\w\r\v\r\s\r\g\z\4\0\i\g\z\b\h\4\6\j\l\t\a\7\0\a\8\g\p\d\h\v\1\4\o\s\v\g\h\k\4\7\r\g\s\o\3\s\t\w\d\4\h\b\k\7\k\6\h\n\p\b\e\2\0\p\d\a\w\f\1\i\u\6\w\u\y\k\0\h\y\m\o\g\w\n\6\m\j\r\2\s\9\0\d\1\d\3\o\3\t\z\c\1\l\g\p\p\8\2\3\b\5\t\g\u\5\u\l\6\1\t\c\b\4\v\f\0\8\3\x\h\d\6\0\s\q\5\p\e\t\r\k\e\h\8\z\q\k\a\f\e\0\3\h\2\g\z\q\7\t\n\6\z\x\e\7\y\4\0\h\7\h\r\z\b\e\b\j\i\v\5\6\c\4\y\m\j\n\m\m\b\4\k\j\g\t\q\v\9\w\f\y\b\x\v\3\k\k\9\h\r\8\6\k\p\p\5\1\n\p\8\p\l\p\3\3\8\s\f\f\g\c\2\5\l\3\f\p\3\f\g\g\b\3\y\d\o\r\q\r\p\4\n\c\e\n\j\b\t\b\6\e\s\m\j\v\3\s\3\c\a\s\e\5\0\m\s\d\e\1\p\4\5\8\3\a\c\3\l\g\v\n\h\a\m\a\r\b\k\n\3\n\1\2\q\o\e\9\p\n\6\h\0\b\e\l\k\4\y\v\g\2\e\n\n\a\p\w\w\7\o\z\2\5\r\q\8\4\k\0\y\6\8\4\s\t\y\9\j\c\s\g\6\z\1\3\r\a\7\d\a\u\w\d\s\y\m\j\v\x\8\n\l\n\l\v\j\5\l\n\o\t\5\z\5\e\a\t\d\r\2\n\a\4\f\k\4\v\c\1\a\y\5\6\x\a\v\9\c\u\p\0\f\4\s\h\w\a\p\0\t\5\7\g\8\z\o\4\9\7\j\o\7\n\n\y\3\1\x\i\b\8\n\t\o\w\g\u\o\o\i\b\f\v\x\b\k\0\y\5\c\u\d\b\5\i\2\n\q\5\k ]] 00:41:22.954 21:54:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:22.954 21:54:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:41:22.954 [2024-07-15 21:54:56.141342] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:22.954 [2024-07-15 21:54:56.141638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168540 ] 00:41:22.954 [2024-07-15 21:54:56.315749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.213 [2024-07-15 21:54:56.511982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:24.851  Copying: 512/512 [B] (average 166 kBps) 00:41:24.851 00:41:24.851 21:54:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ c6es3e1xgy1vuiti16qqsh7rlz5nf3cuyri8324fsuwrvrsrgz40igzbh46jlta70a8gpdhv14osvghk47rgso3stwd4hbk7k6hnpbe20pdawf1iu6wuyk0hymogwn6mjr2s90d1d3o3tzc1lgpp823b5tgu5ul61tcb4vf083xhd60sq5petrkeh8zqkafe03h2gzq7tn6zxe7y40h7hrzbebjiv56c4ymjnmmb4kjgtqv9wfybxv3kk9hr86kpp51np8plp338sffgc25l3fp3fggb3ydorqrp4ncenjbtb6esmjv3s3case50msde1p4583ac3lgvnhamarbkn3n12qoe9pn6h0belk4yvg2ennapww7oz25rq84k0y684sty9jcsg6z13ra7dauwdsymjvx8nlnlvj5lnot5z5eatdr2na4fk4vc1ay56xav9cup0f4shwap0t57g8zo497jo7nny31xib8ntowguooibfvxbk0y5cudb5i2nq5k == \c\6\e\s\3\e\1\x\g\y\1\v\u\i\t\i\1\6\q\q\s\h\7\r\l\z\5\n\f\3\c\u\y\r\i\8\3\2\4\f\s\u\w\r\v\r\s\r\g\z\4\0\i\g\z\b\h\4\6\j\l\t\a\7\0\a\8\g\p\d\h\v\1\4\o\s\v\g\h\k\4\7\r\g\s\o\3\s\t\w\d\4\h\b\k\7\k\6\h\n\p\b\e\2\0\p\d\a\w\f\1\i\u\6\w\u\y\k\0\h\y\m\o\g\w\n\6\m\j\r\2\s\9\0\d\1\d\3\o\3\t\z\c\1\l\g\p\p\8\2\3\b\5\t\g\u\5\u\l\6\1\t\c\b\4\v\f\0\8\3\x\h\d\6\0\s\q\5\p\e\t\r\k\e\h\8\z\q\k\a\f\e\0\3\h\2\g\z\q\7\t\n\6\z\x\e\7\y\4\0\h\7\h\r\z\b\e\b\j\i\v\5\6\c\4\y\m\j\n\m\m\b\4\k\j\g\t\q\v\9\w\f\y\b\x\v\3\k\k\9\h\r\8\6\k\p\p\5\1\n\p\8\p\l\p\3\3\8\s\f\f\g\c\2\5\l\3\f\p\3\f\g\g\b\3\y\d\o\r\q\r\p\4\n\c\e\n\j\b\t\b\6\e\s\m\j\v\3\s\3\c\a\s\e\5\0\m\s\d\e\1\p\4\5\8\3\a\c\3\l\g\v\n\h\a\m\a\r\b\k\n\3\n\1\2\q\o\e\9\p\n\6\h\0\b\e\l\k\4\y\v\g\2\e\n\n\a\p\w\w\7\o\z\2\5\r\q\8\4\k\0\y\6\8\4\s\t\y\9\j\c\s\g\6\z\1\3\r\a\7\d\a\u\w\d\s\y\m\j\v\x\8\n\l\n\l\v\j\5\l\n\o\t\5\z\5\e\a\t\d\r\2\n\a\4\f\k\4\v\c\1\a\y\5\6\x\a\v\9\c\u\p\0\f\4\s\h\w\a\p\0\t\5\7\g\8\z\o\4\9\7\j\o\7\n\n\y\3\1\x\i\b\8\n\t\o\w\g\u\o\o\i\b\f\v\x\b\k\0\y\5\c\u\d\b\5\i\2\n\q\5\k ]] 00:41:24.851 21:54:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:24.851 21:54:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:24.851 [2024-07-15 21:54:58.016841] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:24.851 [2024-07-15 21:54:58.017112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168565 ] 00:41:24.851 [2024-07-15 21:54:58.186350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:25.110 [2024-07-15 21:54:58.379098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.746  Copying: 512/512 [B] (average 250 kBps) 00:41:26.746 00:41:26.746 21:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ c6es3e1xgy1vuiti16qqsh7rlz5nf3cuyri8324fsuwrvrsrgz40igzbh46jlta70a8gpdhv14osvghk47rgso3stwd4hbk7k6hnpbe20pdawf1iu6wuyk0hymogwn6mjr2s90d1d3o3tzc1lgpp823b5tgu5ul61tcb4vf083xhd60sq5petrkeh8zqkafe03h2gzq7tn6zxe7y40h7hrzbebjiv56c4ymjnmmb4kjgtqv9wfybxv3kk9hr86kpp51np8plp338sffgc25l3fp3fggb3ydorqrp4ncenjbtb6esmjv3s3case50msde1p4583ac3lgvnhamarbkn3n12qoe9pn6h0belk4yvg2ennapww7oz25rq84k0y684sty9jcsg6z13ra7dauwdsymjvx8nlnlvj5lnot5z5eatdr2na4fk4vc1ay56xav9cup0f4shwap0t57g8zo497jo7nny31xib8ntowguooibfvxbk0y5cudb5i2nq5k == \c\6\e\s\3\e\1\x\g\y\1\v\u\i\t\i\1\6\q\q\s\h\7\r\l\z\5\n\f\3\c\u\y\r\i\8\3\2\4\f\s\u\w\r\v\r\s\r\g\z\4\0\i\g\z\b\h\4\6\j\l\t\a\7\0\a\8\g\p\d\h\v\1\4\o\s\v\g\h\k\4\7\r\g\s\o\3\s\t\w\d\4\h\b\k\7\k\6\h\n\p\b\e\2\0\p\d\a\w\f\1\i\u\6\w\u\y\k\0\h\y\m\o\g\w\n\6\m\j\r\2\s\9\0\d\1\d\3\o\3\t\z\c\1\l\g\p\p\8\2\3\b\5\t\g\u\5\u\l\6\1\t\c\b\4\v\f\0\8\3\x\h\d\6\0\s\q\5\p\e\t\r\k\e\h\8\z\q\k\a\f\e\0\3\h\2\g\z\q\7\t\n\6\z\x\e\7\y\4\0\h\7\h\r\z\b\e\b\j\i\v\5\6\c\4\y\m\j\n\m\m\b\4\k\j\g\t\q\v\9\w\f\y\b\x\v\3\k\k\9\h\r\8\6\k\p\p\5\1\n\p\8\p\l\p\3\3\8\s\f\f\g\c\2\5\l\3\f\p\3\f\g\g\b\3\y\d\o\r\q\r\p\4\n\c\e\n\j\b\t\b\6\e\s\m\j\v\3\s\3\c\a\s\e\5\0\m\s\d\e\1\p\4\5\8\3\a\c\3\l\g\v\n\h\a\m\a\r\b\k\n\3\n\1\2\q\o\e\9\p\n\6\h\0\b\e\l\k\4\y\v\g\2\e\n\n\a\p\w\w\7\o\z\2\5\r\q\8\4\k\0\y\6\8\4\s\t\y\9\j\c\s\g\6\z\1\3\r\a\7\d\a\u\w\d\s\y\m\j\v\x\8\n\l\n\l\v\j\5\l\n\o\t\5\z\5\e\a\t\d\r\2\n\a\4\f\k\4\v\c\1\a\y\5\6\x\a\v\9\c\u\p\0\f\4\s\h\w\a\p\0\t\5\7\g\8\z\o\4\9\7\j\o\7\n\n\y\3\1\x\i\b\8\n\t\o\w\g\u\o\o\i\b\f\v\x\b\k\0\y\5\c\u\d\b\5\i\2\n\q\5\k ]] 00:41:26.746 21:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:26.746 21:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:41:26.746 21:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:41:26.746 21:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:41:26.746 21:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:26.746 21:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:26.746 [2024-07-15 21:54:59.944876] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
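The eight Copying: 512/512 runs above and below all come from one loop in dd_flags_misc: every read flag in flags_ro is paired with every write flag in flags_rw, and the copy is checked byte-for-byte (the long escaped [[ ... ]] comparisons). A condensed sketch of that matrix, using the arrays exactly as dd/posix.sh declares them in the trace:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      "$spdk_dd" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      [[ "$(cat dd.dump1)" == "$(cat dd.dump0)" ]]   # contents must survive every flag pair
    done
  done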
00:41:26.746 [2024-07-15 21:54:59.945083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168589 ] 00:41:26.746 [2024-07-15 21:55:00.103153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.004 [2024-07-15 21:55:00.303322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.634  Copying: 512/512 [B] (average 500 kBps) 00:41:28.634 00:41:28.634 21:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lck90bsldf07xsy4xykprmmdnnv6wsqpyj8sg7lbrf4d1zb3m4sk77w6tbzhhmjocaujvp9hclq5qnpe838zqsnrylek1i2whq64rbbhwfm5temeb5t8cnre125966o0h001cvr1yci9272r2rwlp9xr2jwh4qt6jicg3o1au511nfe8tww686c6u62z8n3jrla9t10ev474if3j3b8k00k2o0e9tn8vr8xy67918obhoms7zqnlhny8grstyhvqa00pbvccntrh7tjhtrngtv31kk500i37csf5rsxw30ge9so7xlreai5ezcckdcuv04t4jmm98zu24imykdgjgxasd5l9trvxzdads9jwtxes381b67k7kvihpvc8laiqoa1z12si0cs4hvx11c9vnuunwa2n9n3wsjhtxwf5ehzkafqux9gvbipxaymvwh1xz9fommsd4xc0w8jip8haf2ib0cxmd7oqdi1kfw51ealcppwxs914ihwcjzqcvw3w == \l\c\k\9\0\b\s\l\d\f\0\7\x\s\y\4\x\y\k\p\r\m\m\d\n\n\v\6\w\s\q\p\y\j\8\s\g\7\l\b\r\f\4\d\1\z\b\3\m\4\s\k\7\7\w\6\t\b\z\h\h\m\j\o\c\a\u\j\v\p\9\h\c\l\q\5\q\n\p\e\8\3\8\z\q\s\n\r\y\l\e\k\1\i\2\w\h\q\6\4\r\b\b\h\w\f\m\5\t\e\m\e\b\5\t\8\c\n\r\e\1\2\5\9\6\6\o\0\h\0\0\1\c\v\r\1\y\c\i\9\2\7\2\r\2\r\w\l\p\9\x\r\2\j\w\h\4\q\t\6\j\i\c\g\3\o\1\a\u\5\1\1\n\f\e\8\t\w\w\6\8\6\c\6\u\6\2\z\8\n\3\j\r\l\a\9\t\1\0\e\v\4\7\4\i\f\3\j\3\b\8\k\0\0\k\2\o\0\e\9\t\n\8\v\r\8\x\y\6\7\9\1\8\o\b\h\o\m\s\7\z\q\n\l\h\n\y\8\g\r\s\t\y\h\v\q\a\0\0\p\b\v\c\c\n\t\r\h\7\t\j\h\t\r\n\g\t\v\3\1\k\k\5\0\0\i\3\7\c\s\f\5\r\s\x\w\3\0\g\e\9\s\o\7\x\l\r\e\a\i\5\e\z\c\c\k\d\c\u\v\0\4\t\4\j\m\m\9\8\z\u\2\4\i\m\y\k\d\g\j\g\x\a\s\d\5\l\9\t\r\v\x\z\d\a\d\s\9\j\w\t\x\e\s\3\8\1\b\6\7\k\7\k\v\i\h\p\v\c\8\l\a\i\q\o\a\1\z\1\2\s\i\0\c\s\4\h\v\x\1\1\c\9\v\n\u\u\n\w\a\2\n\9\n\3\w\s\j\h\t\x\w\f\5\e\h\z\k\a\f\q\u\x\9\g\v\b\i\p\x\a\y\m\v\w\h\1\x\z\9\f\o\m\m\s\d\4\x\c\0\w\8\j\i\p\8\h\a\f\2\i\b\0\c\x\m\d\7\o\q\d\i\1\k\f\w\5\1\e\a\l\c\p\p\w\x\s\9\1\4\i\h\w\c\j\z\q\c\v\w\3\w ]] 00:41:28.634 21:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:28.634 21:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:41:28.634 [2024-07-15 21:55:01.863239] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:28.634 [2024-07-15 21:55:01.863467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168634 ] 00:41:28.634 [2024-07-15 21:55:02.009992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.892 [2024-07-15 21:55:02.200594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.524  Copying: 512/512 [B] (average 500 kBps) 00:41:30.524 00:41:30.524 21:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lck90bsldf07xsy4xykprmmdnnv6wsqpyj8sg7lbrf4d1zb3m4sk77w6tbzhhmjocaujvp9hclq5qnpe838zqsnrylek1i2whq64rbbhwfm5temeb5t8cnre125966o0h001cvr1yci9272r2rwlp9xr2jwh4qt6jicg3o1au511nfe8tww686c6u62z8n3jrla9t10ev474if3j3b8k00k2o0e9tn8vr8xy67918obhoms7zqnlhny8grstyhvqa00pbvccntrh7tjhtrngtv31kk500i37csf5rsxw30ge9so7xlreai5ezcckdcuv04t4jmm98zu24imykdgjgxasd5l9trvxzdads9jwtxes381b67k7kvihpvc8laiqoa1z12si0cs4hvx11c9vnuunwa2n9n3wsjhtxwf5ehzkafqux9gvbipxaymvwh1xz9fommsd4xc0w8jip8haf2ib0cxmd7oqdi1kfw51ealcppwxs914ihwcjzqcvw3w == \l\c\k\9\0\b\s\l\d\f\0\7\x\s\y\4\x\y\k\p\r\m\m\d\n\n\v\6\w\s\q\p\y\j\8\s\g\7\l\b\r\f\4\d\1\z\b\3\m\4\s\k\7\7\w\6\t\b\z\h\h\m\j\o\c\a\u\j\v\p\9\h\c\l\q\5\q\n\p\e\8\3\8\z\q\s\n\r\y\l\e\k\1\i\2\w\h\q\6\4\r\b\b\h\w\f\m\5\t\e\m\e\b\5\t\8\c\n\r\e\1\2\5\9\6\6\o\0\h\0\0\1\c\v\r\1\y\c\i\9\2\7\2\r\2\r\w\l\p\9\x\r\2\j\w\h\4\q\t\6\j\i\c\g\3\o\1\a\u\5\1\1\n\f\e\8\t\w\w\6\8\6\c\6\u\6\2\z\8\n\3\j\r\l\a\9\t\1\0\e\v\4\7\4\i\f\3\j\3\b\8\k\0\0\k\2\o\0\e\9\t\n\8\v\r\8\x\y\6\7\9\1\8\o\b\h\o\m\s\7\z\q\n\l\h\n\y\8\g\r\s\t\y\h\v\q\a\0\0\p\b\v\c\c\n\t\r\h\7\t\j\h\t\r\n\g\t\v\3\1\k\k\5\0\0\i\3\7\c\s\f\5\r\s\x\w\3\0\g\e\9\s\o\7\x\l\r\e\a\i\5\e\z\c\c\k\d\c\u\v\0\4\t\4\j\m\m\9\8\z\u\2\4\i\m\y\k\d\g\j\g\x\a\s\d\5\l\9\t\r\v\x\z\d\a\d\s\9\j\w\t\x\e\s\3\8\1\b\6\7\k\7\k\v\i\h\p\v\c\8\l\a\i\q\o\a\1\z\1\2\s\i\0\c\s\4\h\v\x\1\1\c\9\v\n\u\u\n\w\a\2\n\9\n\3\w\s\j\h\t\x\w\f\5\e\h\z\k\a\f\q\u\x\9\g\v\b\i\p\x\a\y\m\v\w\h\1\x\z\9\f\o\m\m\s\d\4\x\c\0\w\8\j\i\p\8\h\a\f\2\i\b\0\c\x\m\d\7\o\q\d\i\1\k\f\w\5\1\e\a\l\c\p\p\w\x\s\9\1\4\i\h\w\c\j\z\q\c\v\w\3\w ]] 00:41:30.524 21:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:30.525 21:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:41:30.525 [2024-07-15 21:55:03.708183] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:30.525 [2024-07-15 21:55:03.708400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168651 ] 00:41:30.525 [2024-07-15 21:55:03.864473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.783 [2024-07-15 21:55:04.061118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:32.418  Copying: 512/512 [B] (average 100 kBps) 00:41:32.418 00:41:32.418 21:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lck90bsldf07xsy4xykprmmdnnv6wsqpyj8sg7lbrf4d1zb3m4sk77w6tbzhhmjocaujvp9hclq5qnpe838zqsnrylek1i2whq64rbbhwfm5temeb5t8cnre125966o0h001cvr1yci9272r2rwlp9xr2jwh4qt6jicg3o1au511nfe8tww686c6u62z8n3jrla9t10ev474if3j3b8k00k2o0e9tn8vr8xy67918obhoms7zqnlhny8grstyhvqa00pbvccntrh7tjhtrngtv31kk500i37csf5rsxw30ge9so7xlreai5ezcckdcuv04t4jmm98zu24imykdgjgxasd5l9trvxzdads9jwtxes381b67k7kvihpvc8laiqoa1z12si0cs4hvx11c9vnuunwa2n9n3wsjhtxwf5ehzkafqux9gvbipxaymvwh1xz9fommsd4xc0w8jip8haf2ib0cxmd7oqdi1kfw51ealcppwxs914ihwcjzqcvw3w == \l\c\k\9\0\b\s\l\d\f\0\7\x\s\y\4\x\y\k\p\r\m\m\d\n\n\v\6\w\s\q\p\y\j\8\s\g\7\l\b\r\f\4\d\1\z\b\3\m\4\s\k\7\7\w\6\t\b\z\h\h\m\j\o\c\a\u\j\v\p\9\h\c\l\q\5\q\n\p\e\8\3\8\z\q\s\n\r\y\l\e\k\1\i\2\w\h\q\6\4\r\b\b\h\w\f\m\5\t\e\m\e\b\5\t\8\c\n\r\e\1\2\5\9\6\6\o\0\h\0\0\1\c\v\r\1\y\c\i\9\2\7\2\r\2\r\w\l\p\9\x\r\2\j\w\h\4\q\t\6\j\i\c\g\3\o\1\a\u\5\1\1\n\f\e\8\t\w\w\6\8\6\c\6\u\6\2\z\8\n\3\j\r\l\a\9\t\1\0\e\v\4\7\4\i\f\3\j\3\b\8\k\0\0\k\2\o\0\e\9\t\n\8\v\r\8\x\y\6\7\9\1\8\o\b\h\o\m\s\7\z\q\n\l\h\n\y\8\g\r\s\t\y\h\v\q\a\0\0\p\b\v\c\c\n\t\r\h\7\t\j\h\t\r\n\g\t\v\3\1\k\k\5\0\0\i\3\7\c\s\f\5\r\s\x\w\3\0\g\e\9\s\o\7\x\l\r\e\a\i\5\e\z\c\c\k\d\c\u\v\0\4\t\4\j\m\m\9\8\z\u\2\4\i\m\y\k\d\g\j\g\x\a\s\d\5\l\9\t\r\v\x\z\d\a\d\s\9\j\w\t\x\e\s\3\8\1\b\6\7\k\7\k\v\i\h\p\v\c\8\l\a\i\q\o\a\1\z\1\2\s\i\0\c\s\4\h\v\x\1\1\c\9\v\n\u\u\n\w\a\2\n\9\n\3\w\s\j\h\t\x\w\f\5\e\h\z\k\a\f\q\u\x\9\g\v\b\i\p\x\a\y\m\v\w\h\1\x\z\9\f\o\m\m\s\d\4\x\c\0\w\8\j\i\p\8\h\a\f\2\i\b\0\c\x\m\d\7\o\q\d\i\1\k\f\w\5\1\e\a\l\c\p\p\w\x\s\9\1\4\i\h\w\c\j\z\q\c\v\w\3\w ]] 00:41:32.418 21:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:32.418 21:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:32.418 [2024-07-15 21:55:05.591919] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:32.418 [2024-07-15 21:55:05.592183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168676 ] 00:41:32.418 [2024-07-15 21:55:05.766858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.677 [2024-07-15 21:55:05.962980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:34.313  Copying: 512/512 [B] (average 166 kBps) 00:41:34.313 00:41:34.313 ************************************ 00:41:34.313 END TEST dd_flags_misc 00:41:34.313 ************************************ 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lck90bsldf07xsy4xykprmmdnnv6wsqpyj8sg7lbrf4d1zb3m4sk77w6tbzhhmjocaujvp9hclq5qnpe838zqsnrylek1i2whq64rbbhwfm5temeb5t8cnre125966o0h001cvr1yci9272r2rwlp9xr2jwh4qt6jicg3o1au511nfe8tww686c6u62z8n3jrla9t10ev474if3j3b8k00k2o0e9tn8vr8xy67918obhoms7zqnlhny8grstyhvqa00pbvccntrh7tjhtrngtv31kk500i37csf5rsxw30ge9so7xlreai5ezcckdcuv04t4jmm98zu24imykdgjgxasd5l9trvxzdads9jwtxes381b67k7kvihpvc8laiqoa1z12si0cs4hvx11c9vnuunwa2n9n3wsjhtxwf5ehzkafqux9gvbipxaymvwh1xz9fommsd4xc0w8jip8haf2ib0cxmd7oqdi1kfw51ealcppwxs914ihwcjzqcvw3w == \l\c\k\9\0\b\s\l\d\f\0\7\x\s\y\4\x\y\k\p\r\m\m\d\n\n\v\6\w\s\q\p\y\j\8\s\g\7\l\b\r\f\4\d\1\z\b\3\m\4\s\k\7\7\w\6\t\b\z\h\h\m\j\o\c\a\u\j\v\p\9\h\c\l\q\5\q\n\p\e\8\3\8\z\q\s\n\r\y\l\e\k\1\i\2\w\h\q\6\4\r\b\b\h\w\f\m\5\t\e\m\e\b\5\t\8\c\n\r\e\1\2\5\9\6\6\o\0\h\0\0\1\c\v\r\1\y\c\i\9\2\7\2\r\2\r\w\l\p\9\x\r\2\j\w\h\4\q\t\6\j\i\c\g\3\o\1\a\u\5\1\1\n\f\e\8\t\w\w\6\8\6\c\6\u\6\2\z\8\n\3\j\r\l\a\9\t\1\0\e\v\4\7\4\i\f\3\j\3\b\8\k\0\0\k\2\o\0\e\9\t\n\8\v\r\8\x\y\6\7\9\1\8\o\b\h\o\m\s\7\z\q\n\l\h\n\y\8\g\r\s\t\y\h\v\q\a\0\0\p\b\v\c\c\n\t\r\h\7\t\j\h\t\r\n\g\t\v\3\1\k\k\5\0\0\i\3\7\c\s\f\5\r\s\x\w\3\0\g\e\9\s\o\7\x\l\r\e\a\i\5\e\z\c\c\k\d\c\u\v\0\4\t\4\j\m\m\9\8\z\u\2\4\i\m\y\k\d\g\j\g\x\a\s\d\5\l\9\t\r\v\x\z\d\a\d\s\9\j\w\t\x\e\s\3\8\1\b\6\7\k\7\k\v\i\h\p\v\c\8\l\a\i\q\o\a\1\z\1\2\s\i\0\c\s\4\h\v\x\1\1\c\9\v\n\u\u\n\w\a\2\n\9\n\3\w\s\j\h\t\x\w\f\5\e\h\z\k\a\f\q\u\x\9\g\v\b\i\p\x\a\y\m\v\w\h\1\x\z\9\f\o\m\m\s\d\4\x\c\0\w\8\j\i\p\8\h\a\f\2\i\b\0\c\x\m\d\7\o\q\d\i\1\k\f\w\5\1\e\a\l\c\p\p\w\x\s\9\1\4\i\h\w\c\j\z\q\c\v\w\3\w ]] 00:41:34.313 00:41:34.313 real 0m15.164s 00:41:34.313 user 0m12.489s 00:41:34.313 sys 0m1.557s 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:41:34.313 * Second test run, using AIO 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:34.313 ************************************ 00:41:34.313 START TEST 
dd_flag_append_forced_aio 00:41:34.313 ************************************ 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=qrkqqk7a2k2jj0l6j9q7hvza0v0uyi60 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=fb72bizu9w59i1rq7fikdbuylohnfz2m 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s qrkqqk7a2k2jj0l6j9q7hvza0v0uyi60 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s fb72bizu9w59i1rq7fikdbuylohnfz2m 00:41:34.313 21:55:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:41:34.313 [2024-07-15 21:55:07.568408] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
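The dd_flag_append_forced_aio run launched above follows a pattern that can be read straight out of the xtrace: two 32-character random strings are generated (dump0=qrkqqk7a2k2jj0l6j9q7hvza0v0uyi60 and dump1=fb72bizu9w59i1rq7fikdbuylohnfz2m), each is written to its dump file, and spdk_dd is invoked with --aio and --oflag=append so the dump0 bytes land after the existing dump1 bytes; the check that follows expects dump1 to equal the dump1+dump0 concatenation. A stand-alone re-creation of that flow, using coreutils dd in place of the spdk_dd binary shown in the log, would look roughly like:

# Rough re-creation of the append-flag test (coreutils dd as a stand-in for spdk_dd).
dump0=$(mktemp)
dump1=$(mktemp)
printf '%s' qrkqqk7a2k2jj0l6j9q7hvza0v0uyi60 > "$dump0"
printf '%s' fb72bizu9w59i1rq7fikdbuylohnfz2m > "$dump1"
dd if="$dump0" of="$dump1" oflag=append conv=notrunc status=none
[[ $(< "$dump1") == fb72bizu9w59i1rq7fikdbuylohnfz2mqrkqqk7a2k2jj0l6j9q7hvza0v0uyi60 ]] \
    && echo "append preserved the existing contents"
rm -f "$dump0" "$dump1"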
00:41:34.313 [2024-07-15 21:55:07.568614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168725 ] 00:41:34.571 [2024-07-15 21:55:07.725642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:34.571 [2024-07-15 21:55:07.927887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:36.074  Copying: 32/32 [B] (average 31 kBps) 00:41:36.074 00:41:36.074 ************************************ 00:41:36.074 END TEST dd_flag_append_forced_aio 00:41:36.074 ************************************ 00:41:36.074 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ fb72bizu9w59i1rq7fikdbuylohnfz2mqrkqqk7a2k2jj0l6j9q7hvza0v0uyi60 == \f\b\7\2\b\i\z\u\9\w\5\9\i\1\r\q\7\f\i\k\d\b\u\y\l\o\h\n\f\z\2\m\q\r\k\q\q\k\7\a\2\k\2\j\j\0\l\6\j\9\q\7\h\v\z\a\0\v\0\u\y\i\6\0 ]] 00:41:36.074 00:41:36.074 real 0m1.919s 00:41:36.074 user 0m1.598s 00:41:36.074 sys 0m0.188s 00:41:36.075 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:36.075 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:36.333 ************************************ 00:41:36.333 START TEST dd_flag_directory_forced_aio 00:41:36.333 ************************************ 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:36.333 21:55:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:36.333 [2024-07-15 21:55:09.534586] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:36.333 [2024-07-15 21:55:09.534795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168773 ] 00:41:36.333 [2024-07-15 21:55:09.695668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:36.592 [2024-07-15 21:55:09.892968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:36.851 [2024-07-15 21:55:10.193071] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:36.851 [2024-07-15 21:55:10.193247] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:36.851 [2024-07-15 21:55:10.193303] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:37.789 [2024-07-15 21:55:10.924486] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
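The dd_flag_directory_forced_aio case running here is a negative test: --iflag=directory (and, in the second invocation, --oflag=directory) is pointed at a regular dump file, and the NOT wrapper from autotest_common.sh turns the expected "Not a directory" failure into a passing result. The wrapper itself is not reproduced here, but the expect-failure idea it implements can be sketched as:

# Illustrative stand-in for the NOT helper used above (not the autotest_common.sh code).
expect_failure() {
    if "$@"; then
        echo "command unexpectedly succeeded: $*" >&2
        return 1
    fi
    return 0
}

# A regular file is not a directory, so the directory flag must make the copy fail.
tmp=$(mktemp)
expect_failure dd if="$tmp" iflag=directory of=/dev/null status=none
rm -f "$tmp"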
00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:38.048 21:55:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:41:38.048 [2024-07-15 21:55:11.377354] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:38.048 [2024-07-15 21:55:11.377576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168826 ] 00:41:38.306 [2024-07-15 21:55:11.541128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:38.625 [2024-07-15 21:55:11.741443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:38.883 [2024-07-15 21:55:12.034823] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:38.883 [2024-07-15 21:55:12.034984] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:41:38.884 [2024-07-15 21:55:12.035042] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:39.451 [2024-07-15 21:55:12.783712] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:40.052 ************************************ 00:41:40.053 END TEST dd_flag_directory_forced_aio 00:41:40.053 ************************************ 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:40.053 00:41:40.053 real 0m3.699s 00:41:40.053 user 0m3.096s 00:41:40.053 sys 0m0.400s 00:41:40.053 21:55:13 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:40.053 ************************************ 00:41:40.053 START TEST dd_flag_nofollow_forced_aio 00:41:40.053 ************************************ 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:40.053 21:55:13 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:40.053 21:55:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:40.053 [2024-07-15 21:55:13.313785] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:40.053 [2024-07-15 21:55:13.313988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168873 ] 00:41:40.310 [2024-07-15 21:55:13.475111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.310 [2024-07-15 21:55:13.668147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.876 [2024-07-15 21:55:13.969923] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:40.876 [2024-07-15 21:55:13.970094] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:40.876 [2024-07-15 21:55:13.970159] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:41.443 [2024-07-15 21:55:14.725597] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:42.010 21:55:15 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:42.010 21:55:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:42.010 [2024-07-15 21:55:15.162924] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:42.011 [2024-07-15 21:55:15.163143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168900 ] 00:41:42.011 [2024-07-15 21:55:15.324424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.269 [2024-07-15 21:55:15.523138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.528 [2024-07-15 21:55:15.817271] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:42.528 [2024-07-15 21:55:15.817450] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:42.528 [2024-07-15 21:55:15.817525] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:43.490 [2024-07-15 21:55:16.563304] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:43.749 21:55:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
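The nofollow case above first builds dd.dump0.link and dd.dump1.link with ln -fs, then expects both the --iflag=nofollow and --oflag=nofollow copies to fail; the "Too many levels of symbolic links" errors in the output are the ELOOP that O_NOFOLLOW raises when the opened path is a symlink. A condensed version of that negative check, again with coreutils dd standing in and temporary paths instead of the workspace files:

# Condensed nofollow negative check (illustrative paths, coreutils dd stand-in).
src=$(mktemp)
ln -fs "$src" "${src}.link"
if dd if="${src}.link" iflag=nofollow of=/dev/null status=none 2>/dev/null; then
    echo "nofollow unexpectedly followed the symlink" >&2
else
    echo "nofollow rejected the symlink as expected (ELOOP)"
fi
rm -f "$src" "${src}.link"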
00:41:43.749 [2024-07-15 21:55:17.011828] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:43.749 [2024-07-15 21:55:17.012057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168920 ] 00:41:44.007 [2024-07-15 21:55:17.174022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:44.007 [2024-07-15 21:55:17.370512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.517  Copying: 512/512 [B] (average 500 kBps) 00:41:45.517 00:41:45.517 ************************************ 00:41:45.517 END TEST dd_flag_nofollow_forced_aio 00:41:45.517 ************************************ 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ah42l4ogo7k1z6grxdw7lnu2b9gpov4foi8bluj9sljfmou4xvtejwzf2hhob040iqi4vn5ddftwkcfx46eiulde50nixpv26wikgwapu58zhadnx5iejut8dl6il8g4i6tgygrc72en3g6qc63ghqp9dpbeakvxvhso9d04uvpa4g0pjm149b4p2bae502e0pbbhzvmxhmqm5hkegn212s6sb61lw86a01zdgawqshjnllwpqhx29r6pqoxbpatexw719n8furlc6o4y8vsir7iickrq4xkrhp7w081nekk21kcu134dsyisuhq1f6ifduh8tf5kvgphpkl681luc1ohkfsjxi6qzfu54s8rq4ox207lcbpiuzveauh9yybbcrg92awrx5hdwb3disagfbgqlnp0hl3k8b0dsi6qktctqzfqy1uoum590ml51yu3tkp6hdb1crwj5kf7qcuhguc925yisup23u2lwsyovypd3w513r20r7fawnmvhmi == \a\h\4\2\l\4\o\g\o\7\k\1\z\6\g\r\x\d\w\7\l\n\u\2\b\9\g\p\o\v\4\f\o\i\8\b\l\u\j\9\s\l\j\f\m\o\u\4\x\v\t\e\j\w\z\f\2\h\h\o\b\0\4\0\i\q\i\4\v\n\5\d\d\f\t\w\k\c\f\x\4\6\e\i\u\l\d\e\5\0\n\i\x\p\v\2\6\w\i\k\g\w\a\p\u\5\8\z\h\a\d\n\x\5\i\e\j\u\t\8\d\l\6\i\l\8\g\4\i\6\t\g\y\g\r\c\7\2\e\n\3\g\6\q\c\6\3\g\h\q\p\9\d\p\b\e\a\k\v\x\v\h\s\o\9\d\0\4\u\v\p\a\4\g\0\p\j\m\1\4\9\b\4\p\2\b\a\e\5\0\2\e\0\p\b\b\h\z\v\m\x\h\m\q\m\5\h\k\e\g\n\2\1\2\s\6\s\b\6\1\l\w\8\6\a\0\1\z\d\g\a\w\q\s\h\j\n\l\l\w\p\q\h\x\2\9\r\6\p\q\o\x\b\p\a\t\e\x\w\7\1\9\n\8\f\u\r\l\c\6\o\4\y\8\v\s\i\r\7\i\i\c\k\r\q\4\x\k\r\h\p\7\w\0\8\1\n\e\k\k\2\1\k\c\u\1\3\4\d\s\y\i\s\u\h\q\1\f\6\i\f\d\u\h\8\t\f\5\k\v\g\p\h\p\k\l\6\8\1\l\u\c\1\o\h\k\f\s\j\x\i\6\q\z\f\u\5\4\s\8\r\q\4\o\x\2\0\7\l\c\b\p\i\u\z\v\e\a\u\h\9\y\y\b\b\c\r\g\9\2\a\w\r\x\5\h\d\w\b\3\d\i\s\a\g\f\b\g\q\l\n\p\0\h\l\3\k\8\b\0\d\s\i\6\q\k\t\c\t\q\z\f\q\y\1\u\o\u\m\5\9\0\m\l\5\1\y\u\3\t\k\p\6\h\d\b\1\c\r\w\j\5\k\f\7\q\c\u\h\g\u\c\9\2\5\y\i\s\u\p\2\3\u\2\l\w\s\y\o\v\y\p\d\3\w\5\1\3\r\2\0\r\7\f\a\w\n\m\v\h\m\i ]] 00:41:45.517 00:41:45.517 real 0m5.572s 00:41:45.517 user 0m4.612s 00:41:45.517 sys 0m0.627s 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:45.517 ************************************ 00:41:45.517 START TEST dd_flag_noatime_forced_aio 00:41:45.517 ************************************ 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- 
common/autotest_common.sh@1123 -- # noatime 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:45.517 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:45.786 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721080517 00:41:45.786 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:45.786 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721080518 00:41:45.786 21:55:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:41:46.722 21:55:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:46.722 [2024-07-15 21:55:19.957343] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:41:46.722 [2024-07-15 21:55:19.957610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168984 ] 00:41:46.981 [2024-07-15 21:55:20.130354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:46.981 [2024-07-15 21:55:20.331818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:48.485  Copying: 512/512 [B] (average 500 kBps) 00:41:48.485 00:41:48.485 21:55:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:48.485 21:55:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721080517 )) 00:41:48.485 21:55:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:48.485 21:55:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721080518 )) 00:41:48.485 21:55:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:48.743 [2024-07-15 21:55:21.864083] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
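The noatime case records the access time of dd.dump0 with stat --printf=%X (1721080517 in this run), sleeps for a second, copies the file with --iflag=noatime, and then asserts that the recorded atime has not moved; the later (( atime_if < ... )) comparison around a copy made without the flag checks that a normal read does advance it. The core assertion can be reproduced with plain tools as below (paths are illustrative; O_NOATIME also requires owning the file, which holds here because the sketch creates it):

# Sketch of the noatime assertion: the source file's atime must not change
# when the read side uses the noatime flag (coreutils dd as a stand-in).
f=$(mktemp)
printf 'payload' > "$f"
atime_before=$(stat --printf=%X "$f")
sleep 1
dd if="$f" iflag=noatime of=/dev/null status=none
atime_after=$(stat --printf=%X "$f")
(( atime_before == atime_after )) && echo "atime preserved by noatime"
rm -f "$f"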
00:41:48.743 [2024-07-15 21:55:21.864275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169028 ] 00:41:48.743 [2024-07-15 21:55:22.024257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.002 [2024-07-15 21:55:22.222202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:50.638  Copying: 512/512 [B] (average 500 kBps) 00:41:50.638 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721080522 )) 00:41:50.638 00:41:50.638 real 0m4.913s 00:41:50.638 user 0m3.147s 00:41:50.638 sys 0m0.469s 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:50.638 ************************************ 00:41:50.638 END TEST dd_flag_noatime_forced_aio 00:41:50.638 ************************************ 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:50.638 ************************************ 00:41:50.638 START TEST dd_flags_misc_forced_aio 00:41:50.638 ************************************ 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:50.638 21:55:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:50.638 [2024-07-15 21:55:23.932222] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:50.638 [2024-07-15 21:55:23.932460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169076 ] 00:41:50.897 [2024-07-15 21:55:24.093076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.156 [2024-07-15 21:55:24.298599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:52.794  Copying: 512/512 [B] (average 500 kBps) 00:41:52.794 00:41:52.794 21:55:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ w718fcgog9birt1cb7r6ub7mc9obns73ggyha5mz26vl8wp9tkxvowr1zwxom6d2buvgc89w6z99h0c6non2l07bl4oezmrgs3ef75a0cd1ir7r6v5o8pot9itcdk1d33nstn3s523u0uc6o3zxpuip9ruo6o0j2026m26uzp2498iwu0y866u1f103aohw03na4mohypcl8vrkesorp1ro4prawzpxfhh220ybd9yg740v30ha391abeknd3s5paaxver3ifc0kku9xv7znm8ervvcxci92b0jq4k3t6p7gloradst6e3843213poq9upokr7uskfys2q5e8p6t56b954h90zvhtgr0fsmphjernfo21m8tm37jquq1cci8xjbp2ib6f7gy8btc3fpm6yl04d7wt2islwrm06sdkjv8tm2urteudswv3bm1yrnykrepziiz7fveq5ryxi0bw7yr3ew220setyqwcfrbp10l0rdiryoz8bqzt9yk35dp == \w\7\1\8\f\c\g\o\g\9\b\i\r\t\1\c\b\7\r\6\u\b\7\m\c\9\o\b\n\s\7\3\g\g\y\h\a\5\m\z\2\6\v\l\8\w\p\9\t\k\x\v\o\w\r\1\z\w\x\o\m\6\d\2\b\u\v\g\c\8\9\w\6\z\9\9\h\0\c\6\n\o\n\2\l\0\7\b\l\4\o\e\z\m\r\g\s\3\e\f\7\5\a\0\c\d\1\i\r\7\r\6\v\5\o\8\p\o\t\9\i\t\c\d\k\1\d\3\3\n\s\t\n\3\s\5\2\3\u\0\u\c\6\o\3\z\x\p\u\i\p\9\r\u\o\6\o\0\j\2\0\2\6\m\2\6\u\z\p\2\4\9\8\i\w\u\0\y\8\6\6\u\1\f\1\0\3\a\o\h\w\0\3\n\a\4\m\o\h\y\p\c\l\8\v\r\k\e\s\o\r\p\1\r\o\4\p\r\a\w\z\p\x\f\h\h\2\2\0\y\b\d\9\y\g\7\4\0\v\3\0\h\a\3\9\1\a\b\e\k\n\d\3\s\5\p\a\a\x\v\e\r\3\i\f\c\0\k\k\u\9\x\v\7\z\n\m\8\e\r\v\v\c\x\c\i\9\2\b\0\j\q\4\k\3\t\6\p\7\g\l\o\r\a\d\s\t\6\e\3\8\4\3\2\1\3\p\o\q\9\u\p\o\k\r\7\u\s\k\f\y\s\2\q\5\e\8\p\6\t\5\6\b\9\5\4\h\9\0\z\v\h\t\g\r\0\f\s\m\p\h\j\e\r\n\f\o\2\1\m\8\t\m\3\7\j\q\u\q\1\c\c\i\8\x\j\b\p\2\i\b\6\f\7\g\y\8\b\t\c\3\f\p\m\6\y\l\0\4\d\7\w\t\2\i\s\l\w\r\m\0\6\s\d\k\j\v\8\t\m\2\u\r\t\e\u\d\s\w\v\3\b\m\1\y\r\n\y\k\r\e\p\z\i\i\z\7\f\v\e\q\5\r\y\x\i\0\b\w\7\y\r\3\e\w\2\2\0\s\e\t\y\q\w\c\f\r\b\p\1\0\l\0\r\d\i\r\y\o\z\8\b\q\z\t\9\y\k\3\5\d\p ]] 00:41:52.794 21:55:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:52.794 21:55:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:41:52.794 [2024-07-15 21:55:25.860125] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:52.794 [2024-07-15 21:55:25.860335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169104 ] 00:41:52.794 [2024-07-15 21:55:26.022838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:53.054 [2024-07-15 21:55:26.211717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.694  Copying: 512/512 [B] (average 500 kBps) 00:41:54.694 00:41:54.694 21:55:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ w718fcgog9birt1cb7r6ub7mc9obns73ggyha5mz26vl8wp9tkxvowr1zwxom6d2buvgc89w6z99h0c6non2l07bl4oezmrgs3ef75a0cd1ir7r6v5o8pot9itcdk1d33nstn3s523u0uc6o3zxpuip9ruo6o0j2026m26uzp2498iwu0y866u1f103aohw03na4mohypcl8vrkesorp1ro4prawzpxfhh220ybd9yg740v30ha391abeknd3s5paaxver3ifc0kku9xv7znm8ervvcxci92b0jq4k3t6p7gloradst6e3843213poq9upokr7uskfys2q5e8p6t56b954h90zvhtgr0fsmphjernfo21m8tm37jquq1cci8xjbp2ib6f7gy8btc3fpm6yl04d7wt2islwrm06sdkjv8tm2urteudswv3bm1yrnykrepziiz7fveq5ryxi0bw7yr3ew220setyqwcfrbp10l0rdiryoz8bqzt9yk35dp == \w\7\1\8\f\c\g\o\g\9\b\i\r\t\1\c\b\7\r\6\u\b\7\m\c\9\o\b\n\s\7\3\g\g\y\h\a\5\m\z\2\6\v\l\8\w\p\9\t\k\x\v\o\w\r\1\z\w\x\o\m\6\d\2\b\u\v\g\c\8\9\w\6\z\9\9\h\0\c\6\n\o\n\2\l\0\7\b\l\4\o\e\z\m\r\g\s\3\e\f\7\5\a\0\c\d\1\i\r\7\r\6\v\5\o\8\p\o\t\9\i\t\c\d\k\1\d\3\3\n\s\t\n\3\s\5\2\3\u\0\u\c\6\o\3\z\x\p\u\i\p\9\r\u\o\6\o\0\j\2\0\2\6\m\2\6\u\z\p\2\4\9\8\i\w\u\0\y\8\6\6\u\1\f\1\0\3\a\o\h\w\0\3\n\a\4\m\o\h\y\p\c\l\8\v\r\k\e\s\o\r\p\1\r\o\4\p\r\a\w\z\p\x\f\h\h\2\2\0\y\b\d\9\y\g\7\4\0\v\3\0\h\a\3\9\1\a\b\e\k\n\d\3\s\5\p\a\a\x\v\e\r\3\i\f\c\0\k\k\u\9\x\v\7\z\n\m\8\e\r\v\v\c\x\c\i\9\2\b\0\j\q\4\k\3\t\6\p\7\g\l\o\r\a\d\s\t\6\e\3\8\4\3\2\1\3\p\o\q\9\u\p\o\k\r\7\u\s\k\f\y\s\2\q\5\e\8\p\6\t\5\6\b\9\5\4\h\9\0\z\v\h\t\g\r\0\f\s\m\p\h\j\e\r\n\f\o\2\1\m\8\t\m\3\7\j\q\u\q\1\c\c\i\8\x\j\b\p\2\i\b\6\f\7\g\y\8\b\t\c\3\f\p\m\6\y\l\0\4\d\7\w\t\2\i\s\l\w\r\m\0\6\s\d\k\j\v\8\t\m\2\u\r\t\e\u\d\s\w\v\3\b\m\1\y\r\n\y\k\r\e\p\z\i\i\z\7\f\v\e\q\5\r\y\x\i\0\b\w\7\y\r\3\e\w\2\2\0\s\e\t\y\q\w\c\f\r\b\p\1\0\l\0\r\d\i\r\y\o\z\8\b\q\z\t\9\y\k\3\5\d\p ]] 00:41:54.694 21:55:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:54.694 21:55:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:41:54.694 [2024-07-15 21:55:27.758261] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:54.694 [2024-07-15 21:55:27.758488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169128 ] 00:41:54.694 [2024-07-15 21:55:27.917568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:54.953 [2024-07-15 21:55:28.121461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.591  Copying: 512/512 [B] (average 250 kBps) 00:41:56.591 00:41:56.591 21:55:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ w718fcgog9birt1cb7r6ub7mc9obns73ggyha5mz26vl8wp9tkxvowr1zwxom6d2buvgc89w6z99h0c6non2l07bl4oezmrgs3ef75a0cd1ir7r6v5o8pot9itcdk1d33nstn3s523u0uc6o3zxpuip9ruo6o0j2026m26uzp2498iwu0y866u1f103aohw03na4mohypcl8vrkesorp1ro4prawzpxfhh220ybd9yg740v30ha391abeknd3s5paaxver3ifc0kku9xv7znm8ervvcxci92b0jq4k3t6p7gloradst6e3843213poq9upokr7uskfys2q5e8p6t56b954h90zvhtgr0fsmphjernfo21m8tm37jquq1cci8xjbp2ib6f7gy8btc3fpm6yl04d7wt2islwrm06sdkjv8tm2urteudswv3bm1yrnykrepziiz7fveq5ryxi0bw7yr3ew220setyqwcfrbp10l0rdiryoz8bqzt9yk35dp == \w\7\1\8\f\c\g\o\g\9\b\i\r\t\1\c\b\7\r\6\u\b\7\m\c\9\o\b\n\s\7\3\g\g\y\h\a\5\m\z\2\6\v\l\8\w\p\9\t\k\x\v\o\w\r\1\z\w\x\o\m\6\d\2\b\u\v\g\c\8\9\w\6\z\9\9\h\0\c\6\n\o\n\2\l\0\7\b\l\4\o\e\z\m\r\g\s\3\e\f\7\5\a\0\c\d\1\i\r\7\r\6\v\5\o\8\p\o\t\9\i\t\c\d\k\1\d\3\3\n\s\t\n\3\s\5\2\3\u\0\u\c\6\o\3\z\x\p\u\i\p\9\r\u\o\6\o\0\j\2\0\2\6\m\2\6\u\z\p\2\4\9\8\i\w\u\0\y\8\6\6\u\1\f\1\0\3\a\o\h\w\0\3\n\a\4\m\o\h\y\p\c\l\8\v\r\k\e\s\o\r\p\1\r\o\4\p\r\a\w\z\p\x\f\h\h\2\2\0\y\b\d\9\y\g\7\4\0\v\3\0\h\a\3\9\1\a\b\e\k\n\d\3\s\5\p\a\a\x\v\e\r\3\i\f\c\0\k\k\u\9\x\v\7\z\n\m\8\e\r\v\v\c\x\c\i\9\2\b\0\j\q\4\k\3\t\6\p\7\g\l\o\r\a\d\s\t\6\e\3\8\4\3\2\1\3\p\o\q\9\u\p\o\k\r\7\u\s\k\f\y\s\2\q\5\e\8\p\6\t\5\6\b\9\5\4\h\9\0\z\v\h\t\g\r\0\f\s\m\p\h\j\e\r\n\f\o\2\1\m\8\t\m\3\7\j\q\u\q\1\c\c\i\8\x\j\b\p\2\i\b\6\f\7\g\y\8\b\t\c\3\f\p\m\6\y\l\0\4\d\7\w\t\2\i\s\l\w\r\m\0\6\s\d\k\j\v\8\t\m\2\u\r\t\e\u\d\s\w\v\3\b\m\1\y\r\n\y\k\r\e\p\z\i\i\z\7\f\v\e\q\5\r\y\x\i\0\b\w\7\y\r\3\e\w\2\2\0\s\e\t\y\q\w\c\f\r\b\p\1\0\l\0\r\d\i\r\y\o\z\8\b\q\z\t\9\y\k\3\5\d\p ]] 00:41:56.591 21:55:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:56.591 21:55:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:56.591 [2024-07-15 21:55:29.748303] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:56.591 [2024-07-15 21:55:29.748503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169156 ] 00:41:56.591 [2024-07-15 21:55:29.909521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:56.849 [2024-07-15 21:55:30.104043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:58.493  Copying: 512/512 [B] (average 250 kBps) 00:41:58.493 00:41:58.493 21:55:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ w718fcgog9birt1cb7r6ub7mc9obns73ggyha5mz26vl8wp9tkxvowr1zwxom6d2buvgc89w6z99h0c6non2l07bl4oezmrgs3ef75a0cd1ir7r6v5o8pot9itcdk1d33nstn3s523u0uc6o3zxpuip9ruo6o0j2026m26uzp2498iwu0y866u1f103aohw03na4mohypcl8vrkesorp1ro4prawzpxfhh220ybd9yg740v30ha391abeknd3s5paaxver3ifc0kku9xv7znm8ervvcxci92b0jq4k3t6p7gloradst6e3843213poq9upokr7uskfys2q5e8p6t56b954h90zvhtgr0fsmphjernfo21m8tm37jquq1cci8xjbp2ib6f7gy8btc3fpm6yl04d7wt2islwrm06sdkjv8tm2urteudswv3bm1yrnykrepziiz7fveq5ryxi0bw7yr3ew220setyqwcfrbp10l0rdiryoz8bqzt9yk35dp == \w\7\1\8\f\c\g\o\g\9\b\i\r\t\1\c\b\7\r\6\u\b\7\m\c\9\o\b\n\s\7\3\g\g\y\h\a\5\m\z\2\6\v\l\8\w\p\9\t\k\x\v\o\w\r\1\z\w\x\o\m\6\d\2\b\u\v\g\c\8\9\w\6\z\9\9\h\0\c\6\n\o\n\2\l\0\7\b\l\4\o\e\z\m\r\g\s\3\e\f\7\5\a\0\c\d\1\i\r\7\r\6\v\5\o\8\p\o\t\9\i\t\c\d\k\1\d\3\3\n\s\t\n\3\s\5\2\3\u\0\u\c\6\o\3\z\x\p\u\i\p\9\r\u\o\6\o\0\j\2\0\2\6\m\2\6\u\z\p\2\4\9\8\i\w\u\0\y\8\6\6\u\1\f\1\0\3\a\o\h\w\0\3\n\a\4\m\o\h\y\p\c\l\8\v\r\k\e\s\o\r\p\1\r\o\4\p\r\a\w\z\p\x\f\h\h\2\2\0\y\b\d\9\y\g\7\4\0\v\3\0\h\a\3\9\1\a\b\e\k\n\d\3\s\5\p\a\a\x\v\e\r\3\i\f\c\0\k\k\u\9\x\v\7\z\n\m\8\e\r\v\v\c\x\c\i\9\2\b\0\j\q\4\k\3\t\6\p\7\g\l\o\r\a\d\s\t\6\e\3\8\4\3\2\1\3\p\o\q\9\u\p\o\k\r\7\u\s\k\f\y\s\2\q\5\e\8\p\6\t\5\6\b\9\5\4\h\9\0\z\v\h\t\g\r\0\f\s\m\p\h\j\e\r\n\f\o\2\1\m\8\t\m\3\7\j\q\u\q\1\c\c\i\8\x\j\b\p\2\i\b\6\f\7\g\y\8\b\t\c\3\f\p\m\6\y\l\0\4\d\7\w\t\2\i\s\l\w\r\m\0\6\s\d\k\j\v\8\t\m\2\u\r\t\e\u\d\s\w\v\3\b\m\1\y\r\n\y\k\r\e\p\z\i\i\z\7\f\v\e\q\5\r\y\x\i\0\b\w\7\y\r\3\e\w\2\2\0\s\e\t\y\q\w\c\f\r\b\p\1\0\l\0\r\d\i\r\y\o\z\8\b\q\z\t\9\y\k\3\5\d\p ]] 00:41:58.493 21:55:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:58.493 21:55:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:41:58.493 21:55:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:58.493 21:55:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:58.493 21:55:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:58.493 21:55:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:58.493 [2024-07-15 21:55:31.688036] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:41:58.493 [2024-07-15 21:55:31.688270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169189 ] 00:41:58.493 [2024-07-15 21:55:31.838453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.751 [2024-07-15 21:55:32.045957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.386  Copying: 512/512 [B] (average 500 kBps) 00:42:00.386 00:42:00.386 21:55:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r08ej2ne41kkxqhm4iwnwizoq03ptts6syurui4gxjzpzuonoqfi043j0rdlt828yqleo8qxpwi7rgtc8lglif4q9y69he51rk4va08kghtkprbazwibfi69mtq45da43w4znzo4uz5qxoaoj5pnwy0deyxr0namv1k4wbwpfkdo0hqoys2lbcawoeoze53o92vqcs0suney6olbqawrxeu9gmwlwf2vd3g9ho81m8rqxo8g3u7u1uwbldwq8xa2aik8czem3d4e40tad6s63wg0xmiwvak2uovw3nkfx39i5sn221h9thbsmnf1et9lxvsobqt78ssttfjc1phvb7ru9jqd2i5hkz2gk4cdeapx1veys1zbwffzsavxtaab0bi99ye8h57hq9rzbvq4y8m71ziueplfxlzk2ccenc1637h5iro39ukdrmmb4dh9xvc6h3sibigk2veso8h8g72phga3rtwvbbwlif8e3zqqrh139i08kf5lowmti5zc == \r\0\8\e\j\2\n\e\4\1\k\k\x\q\h\m\4\i\w\n\w\i\z\o\q\0\3\p\t\t\s\6\s\y\u\r\u\i\4\g\x\j\z\p\z\u\o\n\o\q\f\i\0\4\3\j\0\r\d\l\t\8\2\8\y\q\l\e\o\8\q\x\p\w\i\7\r\g\t\c\8\l\g\l\i\f\4\q\9\y\6\9\h\e\5\1\r\k\4\v\a\0\8\k\g\h\t\k\p\r\b\a\z\w\i\b\f\i\6\9\m\t\q\4\5\d\a\4\3\w\4\z\n\z\o\4\u\z\5\q\x\o\a\o\j\5\p\n\w\y\0\d\e\y\x\r\0\n\a\m\v\1\k\4\w\b\w\p\f\k\d\o\0\h\q\o\y\s\2\l\b\c\a\w\o\e\o\z\e\5\3\o\9\2\v\q\c\s\0\s\u\n\e\y\6\o\l\b\q\a\w\r\x\e\u\9\g\m\w\l\w\f\2\v\d\3\g\9\h\o\8\1\m\8\r\q\x\o\8\g\3\u\7\u\1\u\w\b\l\d\w\q\8\x\a\2\a\i\k\8\c\z\e\m\3\d\4\e\4\0\t\a\d\6\s\6\3\w\g\0\x\m\i\w\v\a\k\2\u\o\v\w\3\n\k\f\x\3\9\i\5\s\n\2\2\1\h\9\t\h\b\s\m\n\f\1\e\t\9\l\x\v\s\o\b\q\t\7\8\s\s\t\t\f\j\c\1\p\h\v\b\7\r\u\9\j\q\d\2\i\5\h\k\z\2\g\k\4\c\d\e\a\p\x\1\v\e\y\s\1\z\b\w\f\f\z\s\a\v\x\t\a\a\b\0\b\i\9\9\y\e\8\h\5\7\h\q\9\r\z\b\v\q\4\y\8\m\7\1\z\i\u\e\p\l\f\x\l\z\k\2\c\c\e\n\c\1\6\3\7\h\5\i\r\o\3\9\u\k\d\r\m\m\b\4\d\h\9\x\v\c\6\h\3\s\i\b\i\g\k\2\v\e\s\o\8\h\8\g\7\2\p\h\g\a\3\r\t\w\v\b\b\w\l\i\f\8\e\3\z\q\q\r\h\1\3\9\i\0\8\k\f\5\l\o\w\m\t\i\5\z\c ]] 00:42:00.386 21:55:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:00.386 21:55:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:42:00.386 [2024-07-15 21:55:33.584375] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:00.386 [2024-07-15 21:55:33.584577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169218 ] 00:42:00.386 [2024-07-15 21:55:33.744799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:00.644 [2024-07-15 21:55:33.944573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:02.276  Copying: 512/512 [B] (average 500 kBps) 00:42:02.276 00:42:02.276 21:55:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r08ej2ne41kkxqhm4iwnwizoq03ptts6syurui4gxjzpzuonoqfi043j0rdlt828yqleo8qxpwi7rgtc8lglif4q9y69he51rk4va08kghtkprbazwibfi69mtq45da43w4znzo4uz5qxoaoj5pnwy0deyxr0namv1k4wbwpfkdo0hqoys2lbcawoeoze53o92vqcs0suney6olbqawrxeu9gmwlwf2vd3g9ho81m8rqxo8g3u7u1uwbldwq8xa2aik8czem3d4e40tad6s63wg0xmiwvak2uovw3nkfx39i5sn221h9thbsmnf1et9lxvsobqt78ssttfjc1phvb7ru9jqd2i5hkz2gk4cdeapx1veys1zbwffzsavxtaab0bi99ye8h57hq9rzbvq4y8m71ziueplfxlzk2ccenc1637h5iro39ukdrmmb4dh9xvc6h3sibigk2veso8h8g72phga3rtwvbbwlif8e3zqqrh139i08kf5lowmti5zc == \r\0\8\e\j\2\n\e\4\1\k\k\x\q\h\m\4\i\w\n\w\i\z\o\q\0\3\p\t\t\s\6\s\y\u\r\u\i\4\g\x\j\z\p\z\u\o\n\o\q\f\i\0\4\3\j\0\r\d\l\t\8\2\8\y\q\l\e\o\8\q\x\p\w\i\7\r\g\t\c\8\l\g\l\i\f\4\q\9\y\6\9\h\e\5\1\r\k\4\v\a\0\8\k\g\h\t\k\p\r\b\a\z\w\i\b\f\i\6\9\m\t\q\4\5\d\a\4\3\w\4\z\n\z\o\4\u\z\5\q\x\o\a\o\j\5\p\n\w\y\0\d\e\y\x\r\0\n\a\m\v\1\k\4\w\b\w\p\f\k\d\o\0\h\q\o\y\s\2\l\b\c\a\w\o\e\o\z\e\5\3\o\9\2\v\q\c\s\0\s\u\n\e\y\6\o\l\b\q\a\w\r\x\e\u\9\g\m\w\l\w\f\2\v\d\3\g\9\h\o\8\1\m\8\r\q\x\o\8\g\3\u\7\u\1\u\w\b\l\d\w\q\8\x\a\2\a\i\k\8\c\z\e\m\3\d\4\e\4\0\t\a\d\6\s\6\3\w\g\0\x\m\i\w\v\a\k\2\u\o\v\w\3\n\k\f\x\3\9\i\5\s\n\2\2\1\h\9\t\h\b\s\m\n\f\1\e\t\9\l\x\v\s\o\b\q\t\7\8\s\s\t\t\f\j\c\1\p\h\v\b\7\r\u\9\j\q\d\2\i\5\h\k\z\2\g\k\4\c\d\e\a\p\x\1\v\e\y\s\1\z\b\w\f\f\z\s\a\v\x\t\a\a\b\0\b\i\9\9\y\e\8\h\5\7\h\q\9\r\z\b\v\q\4\y\8\m\7\1\z\i\u\e\p\l\f\x\l\z\k\2\c\c\e\n\c\1\6\3\7\h\5\i\r\o\3\9\u\k\d\r\m\m\b\4\d\h\9\x\v\c\6\h\3\s\i\b\i\g\k\2\v\e\s\o\8\h\8\g\7\2\p\h\g\a\3\r\t\w\v\b\b\w\l\i\f\8\e\3\z\q\q\r\h\1\3\9\i\0\8\k\f\5\l\o\w\m\t\i\5\z\c ]] 00:42:02.277 21:55:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:02.277 21:55:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:42:02.277 [2024-07-15 21:55:35.601906] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:02.277 [2024-07-15 21:55:35.602134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169240 ] 00:42:02.536 [2024-07-15 21:55:35.767104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.795 [2024-07-15 21:55:35.976770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:04.436  Copying: 512/512 [B] (average 250 kBps) 00:42:04.436 00:42:04.436 21:55:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r08ej2ne41kkxqhm4iwnwizoq03ptts6syurui4gxjzpzuonoqfi043j0rdlt828yqleo8qxpwi7rgtc8lglif4q9y69he51rk4va08kghtkprbazwibfi69mtq45da43w4znzo4uz5qxoaoj5pnwy0deyxr0namv1k4wbwpfkdo0hqoys2lbcawoeoze53o92vqcs0suney6olbqawrxeu9gmwlwf2vd3g9ho81m8rqxo8g3u7u1uwbldwq8xa2aik8czem3d4e40tad6s63wg0xmiwvak2uovw3nkfx39i5sn221h9thbsmnf1et9lxvsobqt78ssttfjc1phvb7ru9jqd2i5hkz2gk4cdeapx1veys1zbwffzsavxtaab0bi99ye8h57hq9rzbvq4y8m71ziueplfxlzk2ccenc1637h5iro39ukdrmmb4dh9xvc6h3sibigk2veso8h8g72phga3rtwvbbwlif8e3zqqrh139i08kf5lowmti5zc == \r\0\8\e\j\2\n\e\4\1\k\k\x\q\h\m\4\i\w\n\w\i\z\o\q\0\3\p\t\t\s\6\s\y\u\r\u\i\4\g\x\j\z\p\z\u\o\n\o\q\f\i\0\4\3\j\0\r\d\l\t\8\2\8\y\q\l\e\o\8\q\x\p\w\i\7\r\g\t\c\8\l\g\l\i\f\4\q\9\y\6\9\h\e\5\1\r\k\4\v\a\0\8\k\g\h\t\k\p\r\b\a\z\w\i\b\f\i\6\9\m\t\q\4\5\d\a\4\3\w\4\z\n\z\o\4\u\z\5\q\x\o\a\o\j\5\p\n\w\y\0\d\e\y\x\r\0\n\a\m\v\1\k\4\w\b\w\p\f\k\d\o\0\h\q\o\y\s\2\l\b\c\a\w\o\e\o\z\e\5\3\o\9\2\v\q\c\s\0\s\u\n\e\y\6\o\l\b\q\a\w\r\x\e\u\9\g\m\w\l\w\f\2\v\d\3\g\9\h\o\8\1\m\8\r\q\x\o\8\g\3\u\7\u\1\u\w\b\l\d\w\q\8\x\a\2\a\i\k\8\c\z\e\m\3\d\4\e\4\0\t\a\d\6\s\6\3\w\g\0\x\m\i\w\v\a\k\2\u\o\v\w\3\n\k\f\x\3\9\i\5\s\n\2\2\1\h\9\t\h\b\s\m\n\f\1\e\t\9\l\x\v\s\o\b\q\t\7\8\s\s\t\t\f\j\c\1\p\h\v\b\7\r\u\9\j\q\d\2\i\5\h\k\z\2\g\k\4\c\d\e\a\p\x\1\v\e\y\s\1\z\b\w\f\f\z\s\a\v\x\t\a\a\b\0\b\i\9\9\y\e\8\h\5\7\h\q\9\r\z\b\v\q\4\y\8\m\7\1\z\i\u\e\p\l\f\x\l\z\k\2\c\c\e\n\c\1\6\3\7\h\5\i\r\o\3\9\u\k\d\r\m\m\b\4\d\h\9\x\v\c\6\h\3\s\i\b\i\g\k\2\v\e\s\o\8\h\8\g\7\2\p\h\g\a\3\r\t\w\v\b\b\w\l\i\f\8\e\3\z\q\q\r\h\1\3\9\i\0\8\k\f\5\l\o\w\m\t\i\5\z\c ]] 00:42:04.436 21:55:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:42:04.436 21:55:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:42:04.436 [2024-07-15 21:55:37.488460] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:04.436 [2024-07-15 21:55:37.488676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169265 ] 00:42:04.436 [2024-07-15 21:55:37.643871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:04.695 [2024-07-15 21:55:37.840155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:06.336  Copying: 512/512 [B] (average 250 kBps) 00:42:06.336 00:42:06.336 ************************************ 00:42:06.336 END TEST dd_flags_misc_forced_aio 00:42:06.336 ************************************ 00:42:06.336 21:55:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r08ej2ne41kkxqhm4iwnwizoq03ptts6syurui4gxjzpzuonoqfi043j0rdlt828yqleo8qxpwi7rgtc8lglif4q9y69he51rk4va08kghtkprbazwibfi69mtq45da43w4znzo4uz5qxoaoj5pnwy0deyxr0namv1k4wbwpfkdo0hqoys2lbcawoeoze53o92vqcs0suney6olbqawrxeu9gmwlwf2vd3g9ho81m8rqxo8g3u7u1uwbldwq8xa2aik8czem3d4e40tad6s63wg0xmiwvak2uovw3nkfx39i5sn221h9thbsmnf1et9lxvsobqt78ssttfjc1phvb7ru9jqd2i5hkz2gk4cdeapx1veys1zbwffzsavxtaab0bi99ye8h57hq9rzbvq4y8m71ziueplfxlzk2ccenc1637h5iro39ukdrmmb4dh9xvc6h3sibigk2veso8h8g72phga3rtwvbbwlif8e3zqqrh139i08kf5lowmti5zc == \r\0\8\e\j\2\n\e\4\1\k\k\x\q\h\m\4\i\w\n\w\i\z\o\q\0\3\p\t\t\s\6\s\y\u\r\u\i\4\g\x\j\z\p\z\u\o\n\o\q\f\i\0\4\3\j\0\r\d\l\t\8\2\8\y\q\l\e\o\8\q\x\p\w\i\7\r\g\t\c\8\l\g\l\i\f\4\q\9\y\6\9\h\e\5\1\r\k\4\v\a\0\8\k\g\h\t\k\p\r\b\a\z\w\i\b\f\i\6\9\m\t\q\4\5\d\a\4\3\w\4\z\n\z\o\4\u\z\5\q\x\o\a\o\j\5\p\n\w\y\0\d\e\y\x\r\0\n\a\m\v\1\k\4\w\b\w\p\f\k\d\o\0\h\q\o\y\s\2\l\b\c\a\w\o\e\o\z\e\5\3\o\9\2\v\q\c\s\0\s\u\n\e\y\6\o\l\b\q\a\w\r\x\e\u\9\g\m\w\l\w\f\2\v\d\3\g\9\h\o\8\1\m\8\r\q\x\o\8\g\3\u\7\u\1\u\w\b\l\d\w\q\8\x\a\2\a\i\k\8\c\z\e\m\3\d\4\e\4\0\t\a\d\6\s\6\3\w\g\0\x\m\i\w\v\a\k\2\u\o\v\w\3\n\k\f\x\3\9\i\5\s\n\2\2\1\h\9\t\h\b\s\m\n\f\1\e\t\9\l\x\v\s\o\b\q\t\7\8\s\s\t\t\f\j\c\1\p\h\v\b\7\r\u\9\j\q\d\2\i\5\h\k\z\2\g\k\4\c\d\e\a\p\x\1\v\e\y\s\1\z\b\w\f\f\z\s\a\v\x\t\a\a\b\0\b\i\9\9\y\e\8\h\5\7\h\q\9\r\z\b\v\q\4\y\8\m\7\1\z\i\u\e\p\l\f\x\l\z\k\2\c\c\e\n\c\1\6\3\7\h\5\i\r\o\3\9\u\k\d\r\m\m\b\4\d\h\9\x\v\c\6\h\3\s\i\b\i\g\k\2\v\e\s\o\8\h\8\g\7\2\p\h\g\a\3\r\t\w\v\b\b\w\l\i\f\8\e\3\z\q\q\r\h\1\3\9\i\0\8\k\f\5\l\o\w\m\t\i\5\z\c ]] 00:42:06.336 00:42:06.336 real 0m15.446s 00:42:06.336 user 0m12.755s 00:42:06.336 sys 0m1.604s 00:42:06.336 21:55:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:06.336 21:55:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:42:06.336 21:55:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:42:06.336 21:55:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:42:06.336 21:55:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:42:06.336 21:55:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:42:06.336 ************************************ 00:42:06.336 END TEST spdk_dd_posix 00:42:06.336 ************************************ 00:42:06.336 00:42:06.336 real 1m3.525s 00:42:06.336 user 0m50.595s 00:42:06.336 sys 0m6.837s 00:42:06.336 21:55:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:06.336 
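The dd_flags_misc and dd_flags_misc_forced_aio blocks that just finished iterate a small flag matrix visible in the xtrace, flags_ro=(direct nonblock) on the input side against flags_rw=(direct nonblock sync dsync) on the output side, copying a 512-byte random payload for every combination and re-checking it each time, which is why the same escaped comparison string recurs once per output flag. A compact sketch of that double loop (coreutils dd as a stand-in; the SPDK version routes each copy through spdk_dd, with --aio in the forced_aio variant):

# Compact sketch of the flag-matrix loop behind the dd_flags_misc* tests.
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
src=$(mktemp)
dst=$(mktemp)
head -c 512 /dev/urandom > "$src"
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        # O_DIRECT is not supported on every filesystem (e.g. tmpfs); skip such combos.
        dd if="$src" iflag="$flag_ro" of="$dst" oflag="$flag_rw" status=none 2>/dev/null || continue
        cmp -s "$src" "$dst" || echo "mismatch with iflag=$flag_ro oflag=$flag_rw" >&2
    done
done
rm -f "$src" "$dst"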
21:55:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:42:06.336 21:55:39 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:42:06.336 21:55:39 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:42:06.336 21:55:39 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:06.336 21:55:39 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:06.336 21:55:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:06.336 ************************************ 00:42:06.336 START TEST spdk_dd_malloc 00:42:06.336 ************************************ 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:42:06.336 * Looking for test storage... 00:42:06.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:42:06.336 ************************************ 00:42:06.336 START TEST dd_malloc_copy 00:42:06.336 ************************************ 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:42:06.336 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:42:06.337 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:42:06.337 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:42:06.337 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:42:06.337 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:42:06.337 21:55:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:42:06.337 { 00:42:06.337 "subsystems": [ 00:42:06.337 { 00:42:06.337 "subsystem": "bdev", 00:42:06.337 "config": [ 00:42:06.337 { 00:42:06.337 "params": { 00:42:06.337 "num_blocks": 1048576, 00:42:06.337 "block_size": 512, 00:42:06.337 "name": "malloc0" 00:42:06.337 }, 00:42:06.337 "method": "bdev_malloc_create" 00:42:06.337 }, 00:42:06.337 { 00:42:06.337 "params": { 00:42:06.337 "num_blocks": 1048576, 00:42:06.337 "block_size": 512, 00:42:06.337 "name": "malloc1" 00:42:06.337 }, 00:42:06.337 "method": "bdev_malloc_create" 00:42:06.337 }, 00:42:06.337 { 00:42:06.337 "method": "bdev_wait_for_examine" 00:42:06.337 } 00:42:06.337 ] 00:42:06.337 } 00:42:06.337 ] 00:42:06.337 } 00:42:06.337 [2024-07-15 21:55:39.638413] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:06.337 [2024-07-15 21:55:39.638586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169364 ] 00:42:06.596 [2024-07-15 21:55:39.802656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:06.854 [2024-07-15 21:55:39.999843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:14.586  Copying: 220/512 [MB] (220 MBps) Copying: 442/512 [MB] (222 MBps) Copying: 512/512 [MB] (average 220 MBps) 00:42:14.586 00:42:14.586 21:55:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:42:14.586 21:55:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:42:14.586 21:55:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:42:14.586 21:55:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:42:14.586 [2024-07-15 21:55:47.659432] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:42:14.586 [2024-07-15 21:55:47.659590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169472 ] 00:42:14.586 { 00:42:14.586 "subsystems": [ 00:42:14.586 { 00:42:14.586 "subsystem": "bdev", 00:42:14.586 "config": [ 00:42:14.586 { 00:42:14.586 "params": { 00:42:14.586 "num_blocks": 1048576, 00:42:14.586 "block_size": 512, 00:42:14.586 "name": "malloc0" 00:42:14.586 }, 00:42:14.586 "method": "bdev_malloc_create" 00:42:14.586 }, 00:42:14.586 { 00:42:14.586 "params": { 00:42:14.586 "num_blocks": 1048576, 00:42:14.586 "block_size": 512, 00:42:14.586 "name": "malloc1" 00:42:14.586 }, 00:42:14.586 "method": "bdev_malloc_create" 00:42:14.586 }, 00:42:14.586 { 00:42:14.586 "method": "bdev_wait_for_examine" 00:42:14.586 } 00:42:14.586 ] 00:42:14.586 } 00:42:14.586 ] 00:42:14.586 } 00:42:14.586 [2024-07-15 21:55:47.814508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:14.845 [2024-07-15 21:55:48.026695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:22.859  Copying: 221/512 [MB] (221 MBps) Copying: 436/512 [MB] (214 MBps) Copying: 512/512 [MB] (average 219 MBps) 00:42:22.859 00:42:22.859 ************************************ 00:42:22.859 END TEST dd_malloc_copy 00:42:22.859 ************************************ 00:42:22.859 00:42:22.859 real 0m16.289s 00:42:22.859 user 0m15.221s 00:42:22.859 sys 0m0.954s 00:42:22.859 21:55:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:22.859 21:55:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:42:22.859 21:55:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:42:22.859 00:42:22.859 real 0m16.474s 00:42:22.859 user 0m15.302s 00:42:22.859 sys 0m1.068s 00:42:22.859 21:55:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:22.859 21:55:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:42:22.859 ************************************ 00:42:22.859 END TEST spdk_dd_malloc 00:42:22.859 ************************************ 
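For reference, the malloc copy exercised above reduces to a single spdk_dd invocation that receives its bdev configuration over an extra file descriptor. The sketch below is reconstructed only from the xtrace and the JSON config printed in this log (repository path, bdev names and block counts are copied from that output, not verified against test/dd/malloc.sh), so treat it as an illustration rather than the test script itself:

# Minimal sketch: copy one 512 MiB malloc bdev to another, feeding the bdev config on fd 62.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # path as it appears in the log above
"$SPDK_BIN"/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "num_blocks": 1048576, "block_size": 512, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "num_blocks": 1048576, "block_size": 512, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF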
00:42:22.859 21:55:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:42:22.859 21:55:55 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:42:22.859 21:55:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:22.859 21:55:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:22.859 21:55:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:22.859 ************************************ 00:42:22.859 START TEST spdk_dd_bdev_to_bdev 00:42:22.859 ************************************ 00:42:22.859 21:55:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:42:22.859 * Looking for test storage... 00:42:22.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:42:22.859 21:55:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:42:22.859 [2024-07-15 21:55:56.107647] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:22.859 [2024-07-15 21:55:56.107824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169648 ] 00:42:23.117 [2024-07-15 21:55:56.270258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.118 [2024-07-15 21:55:56.482926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:25.072  Copying: 256/256 [MB] (average 1471 MBps) 00:42:25.072 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:25.072 ************************************ 00:42:25.072 START TEST dd_inflate_file 00:42:25.072 ************************************ 00:42:25.072 21:55:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:42:25.072 [2024-07-15 21:55:58.293226] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:25.072 [2024-07-15 21:55:58.293802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169686 ] 00:42:25.329 [2024-07-15 21:55:58.455875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:25.329 [2024-07-15 21:55:58.658924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:27.272  Copying: 64/64 [MB] (average 1361 MBps) 00:42:27.272 00:42:27.272 ************************************ 00:42:27.272 END TEST dd_inflate_file 00:42:27.272 ************************************ 00:42:27.272 00:42:27.272 real 0m2.109s 00:42:27.272 user 0m1.688s 00:42:27.272 sys 0m0.288s 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:27.272 ************************************ 00:42:27.272 START TEST dd_copy_to_out_bdev 00:42:27.272 ************************************ 00:42:27.272 21:56:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:42:27.272 { 00:42:27.272 "subsystems": [ 00:42:27.272 { 00:42:27.272 "subsystem": "bdev", 00:42:27.272 "config": [ 00:42:27.272 { 00:42:27.272 "params": { 00:42:27.272 "block_size": 4096, 00:42:27.272 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:27.272 "name": "aio1" 00:42:27.272 }, 00:42:27.272 "method": "bdev_aio_create" 00:42:27.272 }, 00:42:27.272 { 00:42:27.272 "params": { 00:42:27.272 "trtype": "pcie", 00:42:27.272 "traddr": "0000:00:10.0", 00:42:27.272 "name": "Nvme0" 00:42:27.272 }, 00:42:27.272 "method": "bdev_nvme_attach_controller" 00:42:27.272 }, 00:42:27.272 { 00:42:27.272 "method": "bdev_wait_for_examine" 00:42:27.272 } 00:42:27.272 ] 00:42:27.272 } 00:42:27.272 ] 00:42:27.272 } 00:42:27.272 [2024-07-15 21:56:00.472334] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:27.272 [2024-07-15 21:56:00.472480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169738 ] 00:42:27.272 [2024-07-15 21:56:00.630113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.531 [2024-07-15 21:56:00.848571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:30.339  Copying: 63/64 [MB] (63 MBps) Copying: 64/64 [MB] (average 63 MBps) 00:42:30.339 00:42:30.339 00:42:30.339 real 0m3.242s 00:42:30.339 user 0m2.903s 00:42:30.339 sys 0m0.245s 00:42:30.339 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:30.340 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:30.340 ************************************ 00:42:30.340 END TEST dd_copy_to_out_bdev 00:42:30.340 ************************************ 00:42:30.340 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:42:30.340 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:42:30.340 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:42:30.340 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:30.340 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:30.340 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:30.608 ************************************ 00:42:30.608 START TEST dd_offset_magic 00:42:30.608 ************************************ 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:42:30.608 21:56:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:30.608 [2024-07-15 21:56:03.776921] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:30.608 [2024-07-15 21:56:03.777468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169818 ] 00:42:30.608 { 00:42:30.608 "subsystems": [ 00:42:30.608 { 00:42:30.608 "subsystem": "bdev", 00:42:30.608 "config": [ 00:42:30.608 { 00:42:30.608 "params": { 00:42:30.608 "block_size": 4096, 00:42:30.608 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:30.608 "name": "aio1" 00:42:30.608 }, 00:42:30.608 "method": "bdev_aio_create" 00:42:30.608 }, 00:42:30.608 { 00:42:30.608 "params": { 00:42:30.608 "trtype": "pcie", 00:42:30.608 "traddr": "0000:00:10.0", 00:42:30.608 "name": "Nvme0" 00:42:30.608 }, 00:42:30.608 "method": "bdev_nvme_attach_controller" 00:42:30.608 }, 00:42:30.608 { 00:42:30.608 "method": "bdev_wait_for_examine" 00:42:30.608 } 00:42:30.608 ] 00:42:30.608 } 00:42:30.608 ] 00:42:30.608 } 00:42:30.608 [2024-07-15 21:56:03.939789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.867 [2024-07-15 21:56:04.139542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:32.741  Copying: 65/65 [MB] (average 282 MBps) 00:42:32.741 00:42:32.741 21:56:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:42:32.741 21:56:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:42:32.741 21:56:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:42:32.741 21:56:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:32.741 [2024-07-15 21:56:05.976270] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:32.741 [2024-07-15 21:56:05.976481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169850 ] 00:42:32.741 { 00:42:32.741 "subsystems": [ 00:42:32.741 { 00:42:32.741 "subsystem": "bdev", 00:42:32.741 "config": [ 00:42:32.741 { 00:42:32.741 "params": { 00:42:32.741 "block_size": 4096, 00:42:32.741 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:32.741 "name": "aio1" 00:42:32.741 }, 00:42:32.741 "method": "bdev_aio_create" 00:42:32.741 }, 00:42:32.741 { 00:42:32.741 "params": { 00:42:32.741 "trtype": "pcie", 00:42:32.741 "traddr": "0000:00:10.0", 00:42:32.741 "name": "Nvme0" 00:42:32.741 }, 00:42:32.741 "method": "bdev_nvme_attach_controller" 00:42:32.741 }, 00:42:32.741 { 00:42:32.741 "method": "bdev_wait_for_examine" 00:42:32.741 } 00:42:32.741 ] 00:42:32.741 } 00:42:32.741 ] 00:42:32.741 } 00:42:33.000 [2024-07-15 21:56:06.146446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:33.000 [2024-07-15 21:56:06.356239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:34.953  Copying: 1024/1024 [kB] (average 500 MBps) 00:42:34.953 00:42:34.953 21:56:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:42:34.953 21:56:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:42:34.953 21:56:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:42:34.953 21:56:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:42:34.953 21:56:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:42:34.953 21:56:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:42:34.953 21:56:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:34.953 [2024-07-15 21:56:08.226514] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:34.953 [2024-07-15 21:56:08.226653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169883 ] 00:42:34.953 { 00:42:34.953 "subsystems": [ 00:42:34.953 { 00:42:34.953 "subsystem": "bdev", 00:42:34.953 "config": [ 00:42:34.953 { 00:42:34.953 "params": { 00:42:34.953 "block_size": 4096, 00:42:34.953 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:34.953 "name": "aio1" 00:42:34.953 }, 00:42:34.953 "method": "bdev_aio_create" 00:42:34.953 }, 00:42:34.953 { 00:42:34.953 "params": { 00:42:34.953 "trtype": "pcie", 00:42:34.953 "traddr": "0000:00:10.0", 00:42:34.953 "name": "Nvme0" 00:42:34.953 }, 00:42:34.953 "method": "bdev_nvme_attach_controller" 00:42:34.953 }, 00:42:34.953 { 00:42:34.953 "method": "bdev_wait_for_examine" 00:42:34.953 } 00:42:34.953 ] 00:42:34.953 } 00:42:34.953 ] 00:42:34.953 } 00:42:35.213 [2024-07-15 21:56:08.389077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:35.471 [2024-07-15 21:56:08.600646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:37.412  Copying: 65/65 [MB] (average 280 MBps) 00:42:37.412 00:42:37.412 21:56:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:42:37.412 21:56:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:42:37.412 21:56:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:42:37.412 21:56:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:37.412 [2024-07-15 21:56:10.515227] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:37.412 [2024-07-15 21:56:10.515401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169920 ] 00:42:37.412 { 00:42:37.412 "subsystems": [ 00:42:37.412 { 00:42:37.412 "subsystem": "bdev", 00:42:37.412 "config": [ 00:42:37.412 { 00:42:37.412 "params": { 00:42:37.412 "block_size": 4096, 00:42:37.412 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:37.412 "name": "aio1" 00:42:37.412 }, 00:42:37.412 "method": "bdev_aio_create" 00:42:37.412 }, 00:42:37.412 { 00:42:37.412 "params": { 00:42:37.412 "trtype": "pcie", 00:42:37.412 "traddr": "0000:00:10.0", 00:42:37.412 "name": "Nvme0" 00:42:37.412 }, 00:42:37.412 "method": "bdev_nvme_attach_controller" 00:42:37.412 }, 00:42:37.412 { 00:42:37.412 "method": "bdev_wait_for_examine" 00:42:37.412 } 00:42:37.412 ] 00:42:37.412 } 00:42:37.412 ] 00:42:37.412 } 00:42:37.412 [2024-07-15 21:56:10.674405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:37.671 [2024-07-15 21:56:10.899282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.614  Copying: 1024/1024 [kB] (average 1000 MBps) 00:42:39.614 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:42:39.614 00:42:39.614 real 0m8.982s 00:42:39.614 user 0m7.330s 00:42:39.614 sys 0m0.947s 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:42:39.614 ************************************ 00:42:39.614 END TEST dd_offset_magic 00:42:39.614 ************************************ 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:42:39.614 21:56:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:39.614 [2024-07-15 21:56:12.813223] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:39.614 [2024-07-15 21:56:12.813438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169985 ] 00:42:39.614 { 00:42:39.614 "subsystems": [ 00:42:39.614 { 00:42:39.614 "subsystem": "bdev", 00:42:39.614 "config": [ 00:42:39.614 { 00:42:39.614 "params": { 00:42:39.614 "block_size": 4096, 00:42:39.614 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:39.614 "name": "aio1" 00:42:39.614 }, 00:42:39.614 "method": "bdev_aio_create" 00:42:39.614 }, 00:42:39.614 { 00:42:39.614 "params": { 00:42:39.614 "trtype": "pcie", 00:42:39.614 "traddr": "0000:00:10.0", 00:42:39.614 "name": "Nvme0" 00:42:39.614 }, 00:42:39.614 "method": "bdev_nvme_attach_controller" 00:42:39.614 }, 00:42:39.614 { 00:42:39.614 "method": "bdev_wait_for_examine" 00:42:39.614 } 00:42:39.614 ] 00:42:39.614 } 00:42:39.614 ] 00:42:39.614 } 00:42:39.614 [2024-07-15 21:56:12.979867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:39.873 [2024-07-15 21:56:13.224525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:41.814  Copying: 5120/5120 [kB] (average 1250 MBps) 00:42:41.814 00:42:41.814 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:42:41.814 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:42:41.814 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:42:41.814 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:42:41.814 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:42:41.814 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:42:41.815 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:42:41.815 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:42:41.815 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:42:41.815 21:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:41.815 [2024-07-15 21:56:14.860773] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:41.815 [2024-07-15 21:56:14.860960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170015 ] 00:42:41.815 { 00:42:41.815 "subsystems": [ 00:42:41.815 { 00:42:41.815 "subsystem": "bdev", 00:42:41.815 "config": [ 00:42:41.815 { 00:42:41.815 "params": { 00:42:41.815 "block_size": 4096, 00:42:41.815 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:42:41.815 "name": "aio1" 00:42:41.815 }, 00:42:41.815 "method": "bdev_aio_create" 00:42:41.815 }, 00:42:41.815 { 00:42:41.815 "params": { 00:42:41.815 "trtype": "pcie", 00:42:41.815 "traddr": "0000:00:10.0", 00:42:41.815 "name": "Nvme0" 00:42:41.815 }, 00:42:41.815 "method": "bdev_nvme_attach_controller" 00:42:41.815 }, 00:42:41.815 { 00:42:41.815 "method": "bdev_wait_for_examine" 00:42:41.815 } 00:42:41.815 ] 00:42:41.815 } 00:42:41.815 ] 00:42:41.815 } 00:42:41.815 [2024-07-15 21:56:15.017823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.073 [2024-07-15 21:56:15.215716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:43.708  Copying: 5120/5120 [kB] (average 384 MBps) 00:42:43.708 00:42:43.708 21:56:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:42:43.967 00:42:43.967 real 0m21.173s 00:42:43.967 user 0m17.357s 00:42:43.967 sys 0m2.527s 00:42:43.967 21:56:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:43.967 21:56:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:43.967 ************************************ 00:42:43.967 END TEST spdk_dd_bdev_to_bdev 00:42:43.967 ************************************ 00:42:43.967 21:56:17 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:42:43.967 21:56:17 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:42:43.967 21:56:17 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:42:43.967 21:56:17 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:43.967 21:56:17 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:43.967 21:56:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:43.967 ************************************ 00:42:43.967 START TEST spdk_dd_sparse 00:42:43.967 ************************************ 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:42:43.967 * Looking for test storage... 
00:42:43.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:42:43.967 1+0 records in 00:42:43.967 1+0 records out 00:42:43.967 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00712685 s, 589 MB/s 00:42:43.967 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:42:43.967 1+0 records in 00:42:43.967 1+0 records out 00:42:43.967 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0137647 s, 305 MB/s 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:42:44.226 1+0 records in 00:42:44.226 1+0 records out 00:42:44.226 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0115863 s, 362 MB/s 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:42:44.226 ************************************ 00:42:44.226 START TEST dd_sparse_file_to_file 00:42:44.226 ************************************ 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:42:44.226 21:56:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:42:44.226 [2024-07-15 21:56:17.437293] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:44.226 [2024-07-15 21:56:17.437521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170103 ] 00:42:44.226 { 00:42:44.226 "subsystems": [ 00:42:44.226 { 00:42:44.226 "subsystem": "bdev", 00:42:44.226 "config": [ 00:42:44.226 { 00:42:44.226 "params": { 00:42:44.226 "block_size": 4096, 00:42:44.226 "filename": "dd_sparse_aio_disk", 00:42:44.226 "name": "dd_aio" 00:42:44.226 }, 00:42:44.226 "method": "bdev_aio_create" 00:42:44.226 }, 00:42:44.226 { 00:42:44.226 "params": { 00:42:44.226 "lvs_name": "dd_lvstore", 00:42:44.226 "bdev_name": "dd_aio" 00:42:44.226 }, 00:42:44.226 "method": "bdev_lvol_create_lvstore" 00:42:44.226 }, 00:42:44.226 { 00:42:44.226 "method": "bdev_wait_for_examine" 00:42:44.226 } 00:42:44.226 ] 00:42:44.226 } 00:42:44.226 ] 00:42:44.226 } 00:42:44.226 [2024-07-15 21:56:17.597176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:44.485 [2024-07-15 21:56:17.798240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:46.429  Copying: 12/36 [MB] (average 1090 MBps) 00:42:46.429 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:42:46.429 00:42:46.429 real 0m2.275s 00:42:46.429 user 0m1.898s 00:42:46.429 sys 0m0.249s 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:42:46.429 ************************************ 00:42:46.429 END TEST dd_sparse_file_to_file 00:42:46.429 ************************************ 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:42:46.429 ************************************ 00:42:46.429 START TEST dd_sparse_file_to_bdev 00:42:46.429 ************************************ 00:42:46.429 21:56:19 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:42:46.429 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:42:46.430 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:42:46.430 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size_in_mib"]=36 ["thin_provision"]=true) 00:42:46.430 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:42:46.430 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:42:46.430 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:42:46.430 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:42:46.430 21:56:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:46.430 [2024-07-15 21:56:19.766026] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:42:46.430 [2024-07-15 21:56:19.766270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170176 ] 00:42:46.430 { 00:42:46.430 "subsystems": [ 00:42:46.430 { 00:42:46.430 "subsystem": "bdev", 00:42:46.430 "config": [ 00:42:46.430 { 00:42:46.430 "params": { 00:42:46.430 "block_size": 4096, 00:42:46.430 "filename": "dd_sparse_aio_disk", 00:42:46.430 "name": "dd_aio" 00:42:46.430 }, 00:42:46.430 "method": "bdev_aio_create" 00:42:46.430 }, 00:42:46.430 { 00:42:46.430 "params": { 00:42:46.430 "size_in_mib": 36, 00:42:46.430 "lvs_name": "dd_lvstore", 00:42:46.430 "thin_provision": true, 00:42:46.430 "lvol_name": "dd_lvol" 00:42:46.430 }, 00:42:46.430 "method": "bdev_lvol_create" 00:42:46.430 }, 00:42:46.430 { 00:42:46.430 "method": "bdev_wait_for_examine" 00:42:46.430 } 00:42:46.430 ] 00:42:46.430 } 00:42:46.430 ] 00:42:46.430 } 00:42:46.689 [2024-07-15 21:56:19.931542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:46.947 [2024-07-15 21:56:20.136633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:48.584  Copying: 12/36 [MB] (average 521 MBps) 00:42:48.584 00:42:48.584 00:42:48.584 real 0m2.131s 00:42:48.584 user 0m1.807s 00:42:48.584 sys 0m0.242s 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:48.584 ************************************ 00:42:48.584 END TEST dd_sparse_file_to_bdev 00:42:48.584 ************************************ 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:42:48.584 ************************************ 00:42:48.584 START TEST dd_sparse_bdev_to_file 00:42:48.584 ************************************ 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:42:48.584 21:56:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:42:48.584 [2024-07-15 21:56:21.956783] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:42:48.584 [2024-07-15 21:56:21.956973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170227 ] 00:42:48.584 { 00:42:48.584 "subsystems": [ 00:42:48.584 { 00:42:48.584 "subsystem": "bdev", 00:42:48.584 "config": [ 00:42:48.584 { 00:42:48.584 "params": { 00:42:48.584 "block_size": 4096, 00:42:48.584 "filename": "dd_sparse_aio_disk", 00:42:48.584 "name": "dd_aio" 00:42:48.584 }, 00:42:48.584 "method": "bdev_aio_create" 00:42:48.584 }, 00:42:48.584 { 00:42:48.584 "method": "bdev_wait_for_examine" 00:42:48.584 } 00:42:48.584 ] 00:42:48.584 } 00:42:48.584 ] 00:42:48.584 } 00:42:48.843 [2024-07-15 21:56:22.120187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.101 [2024-07-15 21:56:22.322929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:50.756  Copying: 12/36 [MB] (average 1200 MBps) 00:42:50.756 00:42:50.756 21:56:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:42:50.756 21:56:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:42:50.756 21:56:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:42:50.756 21:56:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:42:50.756 21:56:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:42:50.756 21:56:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:42:50.756 00:42:50.756 real 0m2.118s 00:42:50.756 user 0m1.793s 00:42:50.756 sys 0m0.228s 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:42:50.756 ************************************ 00:42:50.756 END TEST dd_sparse_bdev_to_file 00:42:50.756 ************************************ 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:42:50.756 00:42:50.756 real 0m6.881s 00:42:50.756 user 0m5.670s 00:42:50.756 sys 0m0.925s 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:50.756 21:56:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:42:50.756 ************************************ 00:42:50.756 END TEST spdk_dd_sparse 00:42:50.756 ************************************ 00:42:50.756 21:56:24 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:42:50.756 21:56:24 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:42:50.756 21:56:24 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:50.756 21:56:24 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:50.756 21:56:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:50.756 ************************************ 00:42:50.756 START TEST spdk_dd_negative 00:42:50.756 ************************************ 00:42:50.756 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:42:51.016 * Looking for test storage... 
00:42:51.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- 
dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:51.016 ************************************ 00:42:51.016 START TEST dd_invalid_arguments 00:42:51.016 ************************************ 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:51.016 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:42:51.016 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:42:51.016 00:42:51.016 CPU options: 00:42:51.016 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:42:51.017 (like [0,1,10]) 00:42:51.017 --lcores lcore to CPU mapping list. The list is in the format: 00:42:51.017 [<,lcores[@CPUs]>...] 00:42:51.017 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:42:51.017 Within the group, '-' is used for range separator, 00:42:51.017 ',' is used for single number separator. 00:42:51.017 '( )' can be omitted for single element group, 00:42:51.017 '@' can be omitted if cpus and lcores have the same value 00:42:51.017 --disable-cpumask-locks Disable CPU core lock files. 
00:42:51.017 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:42:51.017 pollers in the app support interrupt mode) 00:42:51.017 -p, --main-core main (primary) core for DPDK 00:42:51.017 00:42:51.017 Configuration options: 00:42:51.017 -c, --config, --json JSON config file 00:42:51.017 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:42:51.017 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:42:51.017 --wait-for-rpc wait for RPCs to initialize subsystems 00:42:51.017 --rpcs-allowed comma-separated list of permitted RPCS 00:42:51.017 --json-ignore-init-errors don't exit on invalid config entry 00:42:51.017 00:42:51.017 Memory options: 00:42:51.017 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:42:51.017 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:42:51.017 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:42:51.017 -R, --huge-unlink unlink huge files after initialization 00:42:51.017 -n, --mem-channels number of memory channels used for DPDK 00:42:51.017 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:42:51.017 --msg-mempool-size global message memory pool size in count (default: 262143) 00:42:51.017 --no-huge run without using hugepages 00:42:51.017 -i, --shm-id shared memory ID (optional) 00:42:51.017 -g, --single-file-segments force creating just one hugetlbfs file 00:42:51.017 00:42:51.017 PCI options: 00:42:51.017 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:42:51.017 -B, --pci-blocked pci addr to block (can be used more than once) 00:42:51.017 -u, --no-pci disable PCI access 00:42:51.017 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:42:51.017 00:42:51.017 Log options: 00:42:51.017 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:42:51.017 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:42:51.017 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:42:51.017 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:42:51.017 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:42:51.017 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:42:51.017 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:42:51.017 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:42:51.017 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:42:51.017 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:42:51.017 virtio_vfio_user, vmd) 00:42:51.017 --silence-noticelog disable notice level logging to stderr 00:42:51.017 00:42:51.017 Trace options: 00:42:51.017 --num-trace-entries number of trace entries for each core, must be power of 2, 00:42:51.017 setting 0 to disable trace (default 32768) 00:42:51.017 Tracepoints vary in size and can use more than one trace entry. 00:42:51.017 -e, --tpoint-group [:] 00:42:51.017 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:42:51.017 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:42:51.017 [2024-07-15 21:56:24.292352] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:42:51.017 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 
00:42:51.017 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:42:51.017 a tracepoint group. First tpoint inside a group can be enabled by 00:42:51.017 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:42:51.017 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:42:51.017 in /include/spdk_internal/trace_defs.h 00:42:51.017 00:42:51.017 Other options: 00:42:51.017 -h, --help show this usage 00:42:51.017 -v, --version print SPDK version 00:42:51.017 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:42:51.017 --env-context Opaque context for use of the env implementation 00:42:51.017 00:42:51.017 Application specific: 00:42:51.017 [--------- DD Options ---------] 00:42:51.017 --if Input file. Must specify either --if or --ib. 00:42:51.017 --ib Input bdev. Must specifier either --if or --ib 00:42:51.017 --of Output file. Must specify either --of or --ob. 00:42:51.017 --ob Output bdev. Must specify either --of or --ob. 00:42:51.017 --iflag Input file flags. 00:42:51.017 --oflag Output file flags. 00:42:51.017 --bs I/O unit size (default: 4096) 00:42:51.017 --qd Queue depth (default: 2) 00:42:51.017 --count I/O unit count. The number of I/O units to copy. (default: all) 00:42:51.017 --skip Skip this many I/O units at start of input. (default: 0) 00:42:51.017 --seek Skip this many I/O units at start of output. (default: 0) 00:42:51.017 --aio Force usage of AIO. (by default io_uring is used if available) 00:42:51.017 --sparse Enable hole skipping in input target 00:42:51.017 Available iflag and oflag values: 00:42:51.017 append - append mode 00:42:51.017 direct - use direct I/O for data 00:42:51.017 directory - fail unless a directory 00:42:51.017 dsync - use synchronized I/O for data 00:42:51.017 noatime - do not update access time 00:42:51.017 noctty - do not assign controlling terminal from file 00:42:51.017 nofollow - do not follow symlinks 00:42:51.017 nonblock - use non-blocking I/O 00:42:51.017 sync - use synchronized I/O for data and metadata 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:51.017 00:42:51.017 real 0m0.107s 00:42:51.017 user 0m0.050s 00:42:51.017 ************************************ 00:42:51.017 END TEST dd_invalid_arguments 00:42:51.017 ************************************ 00:42:51.017 sys 0m0.057s 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:51.017 ************************************ 
00:42:51.017 START TEST dd_double_input 00:42:51.017 ************************************ 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:51.017 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:42:51.277 [2024-07-15 21:56:24.447928] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
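The failure just logged is the complement of the usage text spdk_dd printed above: exactly one of --if/--ib and exactly one of --of/--ob may be given. For reference only (a sketch built from that option list; the paths here are illustrative and not part of this run), a well-formed file-to-file copy would look like:

    # copy eight 4096-byte I/O units from one regular file to another
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/dd.in --of=/tmp/dd.out --bs=4096 --count=8

With --ib or --ob the input or output is a bdev rather than a file; since --if and --ib both name the input, supplying both is ambiguous, which is exactly what the error above reports.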
00:42:51.277 ************************************ 00:42:51.277 END TEST dd_double_input 00:42:51.277 ************************************ 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:51.278 00:42:51.278 real 0m0.100s 00:42:51.278 user 0m0.053s 00:42:51.278 sys 0m0.047s 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:51.278 ************************************ 00:42:51.278 START TEST dd_double_output 00:42:51.278 ************************************ 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:42:51.278 [2024-07-15 21:56:24.599616] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:51.278 00:42:51.278 real 0m0.114s 00:42:51.278 user 0m0.058s 00:42:51.278 sys 0m0.056s 00:42:51.278 ************************************ 00:42:51.278 END TEST dd_double_output 00:42:51.278 ************************************ 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:51.278 21:56:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:42:51.538 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:51.538 21:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:42:51.538 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:51.538 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:51.539 ************************************ 00:42:51.539 START TEST dd_no_input 00:42:51.539 ************************************ 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:51.539 21:56:24 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:42:51.539 [2024-07-15 21:56:24.761869] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:51.539 00:42:51.539 real 0m0.105s 00:42:51.539 user 0m0.066s 00:42:51.539 sys 0m0.038s 00:42:51.539 ************************************ 00:42:51.539 END TEST dd_no_input 00:42:51.539 ************************************ 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:51.539 ************************************ 00:42:51.539 START TEST dd_no_output 00:42:51.539 ************************************ 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.539 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:51.539 21:56:24 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:51.798 [2024-07-15 21:56:24.929779] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:42:51.798 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:42:51.798 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:51.798 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:51.798 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:51.798 00:42:51.798 real 0m0.112s 00:42:51.798 user 0m0.054s 00:42:51.798 sys 0m0.058s 00:42:51.798 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:51.798 21:56:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:42:51.798 ************************************ 00:42:51.798 END TEST dd_no_output 00:42:51.798 ************************************ 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:51.798 ************************************ 00:42:51.798 START TEST dd_wrong_blocksize 00:42:51.798 ************************************ 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.798 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:42:51.799 [2024-07-15 21:56:25.098356] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:51.799 00:42:51.799 real 0m0.114s 00:42:51.799 user 0m0.054s 00:42:51.799 sys 0m0.061s 00:42:51.799 ************************************ 00:42:51.799 END TEST dd_wrong_blocksize 00:42:51.799 ************************************ 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:51.799 21:56:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:52.058 ************************************ 00:42:52.058 START TEST dd_smaller_blocksize 00:42:52.058 ************************************ 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:52.058 21:56:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:42:52.058 [2024-07-15 21:56:25.270909] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:42:52.058 [2024-07-15 21:56:25.271100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170509 ] 00:42:52.058 [2024-07-15 21:56:25.430017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:52.319 [2024-07-15 21:56:25.643473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.888 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:42:52.888 [2024-07-15 21:56:26.238598] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:42:52.888 [2024-07-15 21:56:26.238705] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:53.879 [2024-07-15 21:56:27.080823] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:54.138 21:56:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:42:54.138 21:56:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:54.138 21:56:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:42:54.138 21:56:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:42:54.138 21:56:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:42:54.138 21:56:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:54.138 00:42:54.138 real 0m2.276s 00:42:54.138 user 0m1.778s 00:42:54.138 sys 0m0.397s 00:42:54.138 21:56:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:54.138 21:56:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:42:54.138 ************************************ 00:42:54.139 END TEST dd_smaller_blocksize 00:42:54.139 ************************************ 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:54.398 ************************************ 00:42:54.398 START TEST dd_invalid_count 00:42:54.398 ************************************ 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.398 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:54.399 [2024-07-15 21:56:27.603801] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:54.399 00:42:54.399 real 0m0.114s 00:42:54.399 user 0m0.055s 00:42:54.399 sys 0m0.060s 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:42:54.399 ************************************ 00:42:54.399 END TEST dd_invalid_count 00:42:54.399 ************************************ 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 
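Every case in this suite leans on the NOT wrapper from autotest_common.sh: the wrapped spdk_dd call is expected to fail, and the test only passes when it does. A much-simplified stand-in for that pattern (a sketch only, not SPDK's actual helper, which as the es=... traces above show also classifies exit codes above 128 and can match expected output) is:

    # succeed only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii=   # unrecognized option, so NOT itself returns 0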
00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:54.399 ************************************ 00:42:54.399 START TEST dd_invalid_oflag 00:42:54.399 ************************************ 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:54.399 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:54.659 [2024-07-15 21:56:27.788157] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:54.659 00:42:54.659 real 0m0.125s 00:42:54.659 user 0m0.074s 00:42:54.659 sys 0m0.050s 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:42:54.659 ************************************ 00:42:54.659 END TEST dd_invalid_oflag 00:42:54.659 ************************************ 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1142 -- # return 0 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:54.659 ************************************ 00:42:54.659 START TEST dd_invalid_iflag 00:42:54.659 ************************************ 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:54.659 21:56:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:54.659 [2024-07-15 21:56:27.966006] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:42:54.659 21:56:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:42:54.659 21:56:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:54.659 21:56:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:54.659 21:56:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:54.659 00:42:54.659 real 0m0.120s 00:42:54.659 user 0m0.070s 00:42:54.659 sys 0m0.051s 00:42:54.659 21:56:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:54.659 21:56:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:42:54.659 ************************************ 00:42:54.659 END TEST dd_invalid_iflag 00:42:54.659 ************************************ 
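dd_invalid_oflag and dd_invalid_iflag are mirror images of each other: --oflag is accepted only together with --of, and --iflag only together with --if. For contrast, a correctly paired invocation, sketched with illustrative file names and flag values taken from the "Available iflag and oflag values" list spdk_dd printed earlier, would be:

    # direct I/O on the input, synchronized data writes on the output
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/tmp/dd.in --iflag=direct \
        --of=/tmp/dd.out --oflag=dsync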
00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:54.919 ************************************ 00:42:54.919 START TEST dd_unknown_flag 00:42:54.919 ************************************ 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:54.919 21:56:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:42:54.919 [2024-07-15 21:56:28.144836] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:54.919 [2024-07-15 21:56:28.145020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170639 ] 00:42:55.178 [2024-07-15 21:56:28.305709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.178 [2024-07-15 21:56:28.520802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.746  Copying: 0/0 [B] (average 0 Bps)[2024-07-15 21:56:28.846396] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:42:55.746 [2024-07-15 21:56:28.846488] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:55.746 [2024-07-15 21:56:28.846658] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:42:56.313 [2024-07-15 21:56:29.606210] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:56.879 00:42:56.879 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:56.879 00:42:56.879 real 0m1.989s 00:42:56.879 user 0m1.614s 00:42:56.879 sys 0m0.223s 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:42:56.879 ************************************ 00:42:56.879 END TEST dd_unknown_flag 00:42:56.879 ************************************ 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:56.879 ************************************ 00:42:56.879 START TEST dd_invalid_json 00:42:56.879 ************************************ 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:56.879 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:56.879 [2024-07-15 21:56:30.178285] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:42:56.879 [2024-07-15 21:56:30.178468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170692 ] 00:42:57.137 [2024-07-15 21:56:30.346270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.394 [2024-07-15 21:56:30.557516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.394 [2024-07-15 21:56:30.557606] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:42:57.394 [2024-07-15 21:56:30.557645] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:57.394 [2024-07-15 21:56:30.557668] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:57.394 [2024-07-15 21:56:30.557721] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:57.651 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:42:57.651 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:57.651 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:42:57.651 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:42:57.651 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:42:57.651 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:57.651 00:42:57.651 real 0m0.837s 00:42:57.651 user 0m0.607s 00:42:57.651 sys 0m0.131s 00:42:57.651 21:56:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:57.651 21:56:30 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:42:57.651 ************************************ 00:42:57.651 END TEST dd_invalid_json 00:42:57.651 ************************************ 00:42:57.651 21:56:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:57.651 00:42:57.651 real 0m6.886s 00:42:57.651 user 0m4.933s 00:42:57.651 sys 0m1.634s 00:42:57.651 21:56:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:57.651 21:56:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:57.651 ************************************ 00:42:57.651 END TEST spdk_dd_negative 00:42:57.651 ************************************ 00:42:57.909 21:56:31 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:42:57.909 00:42:57.909 real 2m42.402s 00:42:57.909 user 2m13.455s 00:42:57.909 sys 0m19.414s 00:42:57.909 21:56:31 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:57.909 21:56:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:57.909 ************************************ 00:42:57.909 END TEST spdk_dd 00:42:57.909 ************************************ 00:42:57.909 21:56:31 -- common/autotest_common.sh@1142 -- # return 0 00:42:57.909 21:56:31 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:42:57.909 21:56:31 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:42:57.909 21:56:31 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:57.909 21:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:57.909 21:56:31 -- common/autotest_common.sh@10 -- # set +x 00:42:57.909 ************************************ 00:42:57.909 START TEST blockdev_nvme 00:42:57.909 ************************************ 00:42:57.909 21:56:31 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:42:57.909 * Looking for test storage... 
00:42:57.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:57.909 21:56:31 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:42:57.909 21:56:31 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:42:57.910 21:56:31 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:42:57.910 21:56:31 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:42:57.910 21:56:31 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=170789 00:42:57.910 21:56:31 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:57.910 21:56:31 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 170789 00:42:57.910 21:56:31 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:57.910 21:56:31 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 170789 ']' 00:42:57.910 21:56:31 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:57.910 21:56:31 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:57.910 21:56:31 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:57.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:57.910 21:56:31 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:57.910 21:56:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:58.167 [2024-07-15 21:56:31.314155] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:42:58.167 [2024-07-15 21:56:31.314388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170789 ] 00:42:58.167 [2024-07-15 21:56:31.475723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:58.424 [2024-07-15 21:56:31.709493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:59.795 21:56:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq 
-r '.[] | select(.claimed == false)' 00:42:59.795 21:56:32 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:59.795 21:56:33 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:42:59.795 21:56:33 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:42:59.795 21:56:33 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "57b83935-c784-4962-a5ba-1eb9dbd77844"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "57b83935-c784-4962-a5ba-1eb9dbd77844",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:42:59.795 21:56:33 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:42:59.795 21:56:33 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:42:59.795 21:56:33 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:42:59.795 21:56:33 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 170789 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 170789 ']' 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 170789 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170789 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:59.795 killing process with pid 170789 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170789' 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 170789 00:42:59.795 21:56:33 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 170789 00:43:03.081 21:56:35 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:03.081 21:56:35 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 
00:43:03.081 21:56:35 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:43:03.081 21:56:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:03.081 21:56:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:03.081 ************************************ 00:43:03.081 START TEST bdev_hello_world 00:43:03.081 ************************************ 00:43:03.081 21:56:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:43:03.081 [2024-07-15 21:56:35.973605] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:03.081 [2024-07-15 21:56:35.973830] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170896 ] 00:43:03.081 [2024-07-15 21:56:36.133794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.081 [2024-07-15 21:56:36.344334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:03.648 [2024-07-15 21:56:36.789715] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:03.648 [2024-07-15 21:56:36.789799] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:43:03.648 [2024-07-15 21:56:36.789827] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:03.648 [2024-07-15 21:56:36.792395] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:43:03.648 [2024-07-15 21:56:36.792933] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:03.648 [2024-07-15 21:56:36.792974] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:03.648 [2024-07-15 21:56:36.793270] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:43:03.648 00:43:03.648 [2024-07-15 21:56:36.793344] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:05.044 00:43:05.044 real 0m2.107s 00:43:05.044 user 0m1.792s 00:43:05.044 sys 0m0.216s 00:43:05.044 21:56:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:05.044 21:56:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:05.044 ************************************ 00:43:05.044 END TEST bdev_hello_world 00:43:05.044 ************************************ 00:43:05.044 21:56:38 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:05.044 21:56:38 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:43:05.044 21:56:38 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:05.044 21:56:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:05.044 21:56:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:05.044 ************************************ 00:43:05.044 START TEST bdev_bounds 00:43:05.044 ************************************ 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=170948 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 170948' 00:43:05.044 Process bdevio pid: 170948 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 170948 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 170948 ']' 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:05.044 21:56:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:05.044 [2024-07-15 21:56:38.151165] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:43:05.044 [2024-07-15 21:56:38.151395] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170948 ] 00:43:05.044 [2024-07-15 21:56:38.320928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:05.304 [2024-07-15 21:56:38.559067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.304 [2024-07-15 21:56:38.559067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:05.304 [2024-07-15 21:56:38.559048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.872 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:05.872 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:43:05.872 21:56:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:05.872 I/O targets: 00:43:05.872 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:43:05.872 00:43:05.872 00:43:05.872 CUnit - A unit testing framework for C - Version 2.1-3 00:43:05.872 http://cunit.sourceforge.net/ 00:43:05.872 00:43:05.872 00:43:05.872 Suite: bdevio tests on: Nvme0n1 00:43:05.872 Test: blockdev write read block ...passed 00:43:05.872 Test: blockdev write zeroes read block ...passed 00:43:05.872 Test: blockdev write zeroes read no split ...passed 00:43:06.131 Test: blockdev write zeroes read split ...passed 00:43:06.131 Test: blockdev write zeroes read split partial ...passed 00:43:06.131 Test: blockdev reset ...[2024-07-15 21:56:39.312038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:43:06.131 [2024-07-15 21:56:39.316276] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:06.131 passed 00:43:06.131 Test: blockdev write read 8 blocks ...passed 00:43:06.131 Test: blockdev write read size > 128k ...passed 00:43:06.131 Test: blockdev write read invalid size ...passed 00:43:06.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:06.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:06.131 Test: blockdev write read max offset ...passed 00:43:06.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:06.131 Test: blockdev writev readv 8 blocks ...passed 00:43:06.131 Test: blockdev writev readv 30 x 1block ...passed 00:43:06.131 Test: blockdev writev readv block ...passed 00:43:06.131 Test: blockdev writev readv size > 128k ...passed 00:43:06.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:06.131 Test: blockdev comparev and writev ...[2024-07-15 21:56:39.325619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0xb860d000 len:0x1000 00:43:06.131 [2024-07-15 21:56:39.325707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:06.131 passed 00:43:06.131 Test: blockdev nvme passthru rw ...passed 00:43:06.131 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:56:39.326466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:43:06.131 [2024-07-15 21:56:39.326521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:43:06.131 passed 00:43:06.131 Test: blockdev nvme admin passthru ...passed 00:43:06.131 Test: blockdev copy ...passed 00:43:06.131 00:43:06.131 Run Summary: Type Total Ran Passed Failed Inactive 00:43:06.131 suites 1 1 n/a 0 0 00:43:06.131 tests 23 23 23 0 0 00:43:06.131 asserts 152 152 152 0 n/a 00:43:06.131 00:43:06.131 Elapsed time = 0.305 seconds 00:43:06.131 0 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 170948 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 170948 ']' 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 170948 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170948 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170948' 00:43:06.131 killing process with pid 170948 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 170948 00:43:06.131 21:56:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 170948 00:43:08.037 21:56:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:43:08.037 00:43:08.037 real 0m2.856s 00:43:08.037 user 0m6.706s 00:43:08.037 sys 0m0.334s 00:43:08.037 21:56:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:08.037 
************************************ 00:43:08.037 END TEST bdev_bounds 00:43:08.037 ************************************ 00:43:08.037 21:56:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:08.037 21:56:40 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:08.037 21:56:40 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:43:08.037 21:56:40 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:43:08.037 21:56:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:08.037 21:56:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:08.037 ************************************ 00:43:08.037 START TEST bdev_nbd 00:43:08.037 ************************************ 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:43:08.037 21:56:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=171010 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 171010 /var/tmp/spdk-nbd.sock 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 171010 ']' 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:08.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:08.037 21:56:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:08.037 [2024-07-15 21:56:41.068925] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:08.037 [2024-07-15 21:56:41.069118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:08.037 [2024-07-15 21:56:41.232950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.296 [2024-07-15 21:56:41.444705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:08.555 21:56:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:08.816 21:56:42 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:08.816 1+0 records in 00:43:08.816 1+0 records out 00:43:08.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590725 s, 6.9 MB/s 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:08.816 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:09.076 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:09.076 { 00:43:09.076 "nbd_device": "/dev/nbd0", 00:43:09.076 "bdev_name": "Nvme0n1" 00:43:09.076 } 00:43:09.076 ]' 00:43:09.076 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:09.076 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:09.076 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:09.076 { 00:43:09.076 "nbd_device": "/dev/nbd0", 00:43:09.076 "bdev_name": "Nvme0n1" 00:43:09.076 } 00:43:09.076 ]' 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:09.408 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.409 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:09.668 21:56:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:43:09.927 /dev/nbd0 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:09.927 1+0 records in 00:43:09.927 1+0 records out 00:43:09.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537207 s, 7.6 MB/s 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.927 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:10.186 { 00:43:10.186 "nbd_device": "/dev/nbd0", 00:43:10.186 "bdev_name": "Nvme0n1" 00:43:10.186 } 00:43:10.186 ]' 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:10.186 { 00:43:10.186 "nbd_device": "/dev/nbd0", 00:43:10.186 "bdev_name": "Nvme0n1" 00:43:10.186 } 00:43:10.186 ]' 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:10.186 256+0 records in 00:43:10.186 256+0 records out 00:43:10.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132061 s, 79.4 MB/s 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:10.186 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:10.444 256+0 records in 00:43:10.444 256+0 records out 00:43:10.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0591352 s, 17.7 MB/s 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:10.444 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:10.703 
21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:10.703 21:56:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:43:10.703 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:10.963 malloc_lvol_verify 00:43:10.963 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:11.221 4b77f442-ff8d-40de-8ff6-1ec971d937f0 00:43:11.221 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:11.480 850140a9-db4f-49c3-82bb-935de5a9f677 00:43:11.480 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:11.739 /dev/nbd0 00:43:11.739 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:11.739 mke2fs 1.45.5 (07-Jan-2020) 00:43:11.739 Creating filesystem with 1024 4k blocks and 1024 inodes 00:43:11.739 00:43:11.739 00:43:11.739 Filesystem too small for a journal 00:43:11.739 Allocating group tables: 0/1 done 00:43:11.739 Writing inode tables: 0/1 done 00:43:11.739 Writing superblocks and filesystem accounting information: 0/1 done 00:43:11.739 00:43:11.739 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:11.739 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:11.739 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:11.739 21:56:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:11.739 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:11.739 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:11.739 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:11.739 21:56:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 171010 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 171010 ']' 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 171010 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 171010 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 171010' 00:43:11.997 killing process with pid 171010 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 171010 00:43:11.997 21:56:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 171010 00:43:13.373 21:56:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:43:13.373 00:43:13.374 real 0m5.549s 00:43:13.374 user 0m7.847s 00:43:13.374 sys 0m1.039s 00:43:13.374 21:56:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:13.374 21:56:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:13.374 ************************************ 00:43:13.374 END TEST bdev_nbd 00:43:13.374 ************************************ 00:43:13.374 21:56:46 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:13.374 21:56:46 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:43:13.374 21:56:46 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:43:13.374 skipping fio tests on NVMe due to multi-ns failures. 00:43:13.374 21:56:46 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:43:13.374 21:56:46 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:13.374 21:56:46 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:13.374 21:56:46 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:13.374 21:56:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:13.374 21:56:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:13.374 ************************************ 00:43:13.374 START TEST bdev_verify 00:43:13.374 ************************************ 00:43:13.374 21:56:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:13.374 [2024-07-15 21:56:46.681500] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:13.374 [2024-07-15 21:56:46.681670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171223 ] 00:43:13.632 [2024-07-15 21:56:46.849930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:13.891 [2024-07-15 21:56:47.074532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:13.891 [2024-07-15 21:56:47.074533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:14.458 Running I/O for 5 seconds... 00:43:19.767 00:43:19.767 Latency(us) 00:43:19.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:19.767 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:19.767 Verification LBA range: start 0x0 length 0xa0000 00:43:19.767 Nvme0n1 : 5.01 11127.07 43.47 0.00 0.00 11445.45 729.77 26214.40 00:43:19.767 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:19.767 Verification LBA range: start 0xa0000 length 0xa0000 00:43:19.767 Nvme0n1 : 5.01 11174.34 43.65 0.00 0.00 11396.15 1080.34 25413.09 00:43:19.767 =================================================================================================================== 00:43:19.767 Total : 22301.41 87.11 0.00 0.00 11420.76 729.77 26214.40 00:43:21.668 00:43:21.668 real 0m8.241s 00:43:21.668 user 0m15.224s 00:43:21.668 sys 0m0.232s 00:43:21.668 21:56:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:21.668 21:56:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:43:21.668 ************************************ 00:43:21.668 END TEST bdev_verify 00:43:21.668 ************************************ 00:43:21.668 21:56:54 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:21.668 21:56:54 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:21.668 21:56:54 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:21.668 21:56:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:21.668 21:56:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set 
+x 00:43:21.668 ************************************ 00:43:21.668 START TEST bdev_verify_big_io 00:43:21.668 ************************************ 00:43:21.668 21:56:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:21.668 [2024-07-15 21:56:54.961421] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:21.668 [2024-07-15 21:56:54.961590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171357 ] 00:43:21.927 [2024-07-15 21:56:55.123886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:22.185 [2024-07-15 21:56:55.329138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:22.185 [2024-07-15 21:56:55.329141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.443 Running I/O for 5 seconds... 00:43:27.719 00:43:27.719 Latency(us) 00:43:27.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:27.719 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:27.719 Verification LBA range: start 0x0 length 0xa000 00:43:27.719 Nvme0n1 : 5.03 1211.51 75.72 0.00 0.00 103226.24 479.36 152020.63 00:43:27.719 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:27.719 Verification LBA range: start 0xa000 length 0xa000 00:43:27.719 Nvme0n1 : 5.05 610.15 38.13 0.00 0.00 205242.08 915.79 241767.74 00:43:27.719 =================================================================================================================== 00:43:27.719 Total : 1821.65 113.85 0.00 0.00 137487.12 479.36 241767.74 00:43:29.617 00:43:29.617 real 0m7.756s 00:43:29.617 user 0m14.317s 00:43:29.617 sys 0m0.216s 00:43:29.617 21:57:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:29.617 ************************************ 00:43:29.617 END TEST bdev_verify_big_io 00:43:29.617 ************************************ 00:43:29.617 21:57:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:43:29.617 21:57:02 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:29.617 21:57:02 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:29.617 21:57:02 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:29.617 21:57:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:29.617 21:57:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:29.617 ************************************ 00:43:29.617 START TEST bdev_write_zeroes 00:43:29.617 ************************************ 00:43:29.617 21:57:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:29.617 [2024-07-15 21:57:02.786977] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:43:29.617 [2024-07-15 21:57:02.787245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171464 ] 00:43:29.617 [2024-07-15 21:57:02.952613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:29.875 [2024-07-15 21:57:03.165384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.452 Running I/O for 1 seconds... 00:43:31.387 00:43:31.387 Latency(us) 00:43:31.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:31.387 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:31.388 Nvme0n1 : 1.00 36457.97 142.41 0.00 0.00 3501.87 958.71 18888.10 00:43:31.388 =================================================================================================================== 00:43:31.388 Total : 36457.97 142.41 0.00 0.00 3501.87 958.71 18888.10 00:43:32.762 00:43:32.762 real 0m3.310s 00:43:32.762 user 0m2.994s 00:43:32.762 sys 0m0.217s 00:43:32.762 21:57:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:32.762 ************************************ 00:43:32.762 END TEST bdev_write_zeroes 00:43:32.762 ************************************ 00:43:32.762 21:57:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:43:32.762 21:57:06 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:32.762 21:57:06 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:32.762 21:57:06 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:32.762 21:57:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:32.762 21:57:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:32.762 ************************************ 00:43:32.762 START TEST bdev_json_nonenclosed 00:43:32.762 ************************************ 00:43:32.762 21:57:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:33.019 [2024-07-15 21:57:06.171347] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:33.019 [2024-07-15 21:57:06.171529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171535 ] 00:43:33.019 [2024-07-15 21:57:06.338632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:33.277 [2024-07-15 21:57:06.559344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:33.277 [2024-07-15 21:57:06.559459] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:43:33.277 [2024-07-15 21:57:06.559511] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:33.277 [2024-07-15 21:57:06.559534] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:33.843 00:43:33.843 real 0m0.914s 00:43:33.843 user 0m0.650s 00:43:33.843 sys 0m0.164s 00:43:33.843 21:57:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:43:33.843 21:57:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:33.843 21:57:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:43:33.843 ************************************ 00:43:33.843 END TEST bdev_json_nonenclosed 00:43:33.843 ************************************ 00:43:33.843 21:57:07 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:43:33.843 21:57:07 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:43:33.843 21:57:07 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:33.843 21:57:07 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:33.843 21:57:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:33.843 21:57:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:33.843 ************************************ 00:43:33.843 START TEST bdev_json_nonarray 00:43:33.843 ************************************ 00:43:33.843 21:57:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:33.843 [2024-07-15 21:57:07.143606] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:33.843 [2024-07-15 21:57:07.143817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171573 ] 00:43:34.102 [2024-07-15 21:57:07.305350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:34.361 [2024-07-15 21:57:07.517081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:34.361 [2024-07-15 21:57:07.517232] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:43:34.361 [2024-07-15 21:57:07.517320] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:34.361 [2024-07-15 21:57:07.517355] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:34.619 00:43:34.619 real 0m0.892s 00:43:34.619 user 0m0.666s 00:43:34.619 sys 0m0.126s 00:43:34.619 21:57:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:43:34.619 21:57:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:34.619 21:57:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:43:34.619 ************************************ 00:43:34.619 END TEST bdev_json_nonarray 00:43:34.619 ************************************ 00:43:34.877 21:57:08 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:43:34.877 21:57:08 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:43:34.877 00:43:34.877 real 0m36.919s 00:43:34.877 user 0m55.224s 00:43:34.877 sys 0m3.392s 00:43:34.877 21:57:08 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:34.877 21:57:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:43:34.877 ************************************ 00:43:34.877 END TEST blockdev_nvme 00:43:34.877 ************************************ 00:43:34.877 21:57:08 -- common/autotest_common.sh@1142 -- # return 0 00:43:34.877 21:57:08 -- spdk/autotest.sh@213 -- # uname -s 00:43:34.877 21:57:08 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:43:34.877 21:57:08 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:43:34.877 21:57:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:34.877 21:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:34.877 21:57:08 -- common/autotest_common.sh@10 -- # set +x 00:43:34.877 ************************************ 00:43:34.877 START TEST blockdev_nvme_gpt 00:43:34.877 ************************************ 00:43:34.878 21:57:08 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:43:34.878 * Looking for test storage... 
00:43:34.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=171659 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:34.878 21:57:08 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 171659 00:43:34.878 21:57:08 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 171659 ']' 00:43:34.878 21:57:08 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:34.878 21:57:08 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:34.878 21:57:08 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:34.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
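The waitforlisten helper traced here blocks until the freshly started spdk_tgt answers on its RPC socket. A minimal stand-alone sketch of the same start-and-wait idea, assuming the checkout and default socket used in this run; this is an illustration, not the helper's actual implementation:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
spdk_tgt_pid=$!
# Poll the default RPC socket until the target answers (give up after ~10 s).
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done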
00:43:34.878 21:57:08 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:34.878 21:57:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:35.136 [2024-07-15 21:57:08.277953] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:35.136 [2024-07-15 21:57:08.278622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171659 ] 00:43:35.136 [2024-07-15 21:57:08.427971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:35.394 [2024-07-15 21:57:08.673276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:36.770 21:57:09 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:36.770 21:57:09 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:43:36.770 21:57:09 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:43:36.770 21:57:09 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:43:36.770 21:57:09 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:43:36.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:36.770 Waiting for block devices as requested 00:43:37.043 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:43:37.043 21:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:43:37.043 21:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:43:37.043 21:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:43:37.043 21:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:43:37.043 21:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:43:37.043 21:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:43:37.043 21:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:37.043 21:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:43:37.043 BYT; 00:43:37.043 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:43:37.043 BYT; 00:43:37.043 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:43:37.043 21:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:43:37.979 21:57:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:37.979 21:57:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:43:37.979 21:57:11 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:37.979 21:57:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:37.979 21:57:11 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:43:37.979 21:57:11 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:43:38.918 The operation has completed successfully. 
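Stripped of the xtrace noise, the partition setup above (together with the second sgdisk call that follows immediately below) boils down to: label the disk GPT, create two half-disk partitions, then retag each partition with the SPDK GPT type GUIDs parsed out of module/bdev/gpt/gpt.h. A condensed sketch using the same device, names and GUIDs that appear in this run:

DEV=/dev/nvme0n1
SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b          # SPDK_GPT_PART_TYPE_GUID from gpt.h
SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c      # SPDK_GPT_PART_TYPE_GUID_OLD from gpt.h
parted -s "$DEV" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DEV"   # partition 1
sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DEV"   # partition 2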
00:43:38.918 21:57:12 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:43:40.295 The operation has completed successfully. 00:43:40.295 21:57:13 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:40.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:40.553 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:41.490 [] 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 
00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:41.490 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:43:41.490 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:43:41.491 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:43:41.491 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:43:41.491 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:43:41.491 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:43:41.491 21:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 171659 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 171659 ']' 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 171659 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@953 
-- # uname 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 171659 00:43:41.491 killing process with pid 171659 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 171659' 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 171659 00:43:41.491 21:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 171659 00:43:44.784 21:57:18 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:44.784 21:57:18 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:43:44.784 21:57:18 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:43:44.784 21:57:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:44.784 21:57:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:44.784 ************************************ 00:43:44.784 START TEST bdev_hello_world 00:43:44.784 ************************************ 00:43:44.784 21:57:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:43:44.784 [2024-07-15 21:57:18.085260] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:44.784 [2024-07-15 21:57:18.085957] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172230 ] 00:43:45.042 [2024-07-15 21:57:18.240310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:45.301 [2024-07-15 21:57:18.466862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:45.870 [2024-07-15 21:57:18.940385] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:45.870 [2024-07-15 21:57:18.940471] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:43:45.870 [2024-07-15 21:57:18.940508] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:45.870 [2024-07-15 21:57:18.943517] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:43:45.870 [2024-07-15 21:57:18.944001] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:45.870 [2024-07-15 21:57:18.944034] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:45.870 [2024-07-15 21:57:18.944233] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
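The hello_bdev output just above and just below comes from pointing the stock example at the first GPT partition; run standalone against the bdev.json this suite generates, it is simply:

cd /home/vagrant/spdk_repo/spdk
./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1
# expected NOTICE sequence: open bdev Nvme0n1p1, write "Hello World!", read it back, stop the app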
00:43:45.870 00:43:45.870 [2024-07-15 21:57:18.944281] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:47.242 00:43:47.242 real 0m2.313s 00:43:47.242 user 0m2.025s 00:43:47.242 sys 0m0.188s 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:47.242 ************************************ 00:43:47.242 END TEST bdev_hello_world 00:43:47.242 ************************************ 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:47.242 21:57:20 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:43:47.242 21:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:43:47.242 21:57:20 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:47.242 21:57:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:47.242 21:57:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:47.242 ************************************ 00:43:47.242 START TEST bdev_bounds 00:43:47.242 ************************************ 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=172281 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 172281' 00:43:47.242 Process bdevio pid: 172281 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 172281 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 172281 ']' 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:47.242 21:57:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:47.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:47.243 21:57:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:47.243 21:57:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:47.243 [2024-07-15 21:57:20.447046] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:43:47.243 [2024-07-15 21:57:20.447739] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172281 ] 00:43:47.501 [2024-07-15 21:57:20.631130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:47.758 [2024-07-15 21:57:20.881706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:47.758 [2024-07-15 21:57:20.881900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:47.758 [2024-07-15 21:57:20.881893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:48.367 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:48.367 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:43:48.367 21:57:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:48.367 I/O targets: 00:43:48.367 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:43:48.367 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:43:48.367 00:43:48.367 00:43:48.367 CUnit - A unit testing framework for C - Version 2.1-3 00:43:48.367 http://cunit.sourceforge.net/ 00:43:48.367 00:43:48.367 00:43:48.367 Suite: bdevio tests on: Nvme0n1p2 00:43:48.367 Test: blockdev write read block ...passed 00:43:48.367 Test: blockdev write zeroes read block ...passed 00:43:48.367 Test: blockdev write zeroes read no split ...passed 00:43:48.367 Test: blockdev write zeroes read split ...passed 00:43:48.367 Test: blockdev write zeroes read split partial ...passed 00:43:48.367 Test: blockdev reset ...[2024-07-15 21:57:21.644480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:43:48.367 [2024-07-15 21:57:21.648670] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:48.367 passed 00:43:48.367 Test: blockdev write read 8 blocks ...passed 00:43:48.367 Test: blockdev write read size > 128k ...passed 00:43:48.367 Test: blockdev write read invalid size ...passed 00:43:48.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:48.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:48.367 Test: blockdev write read max offset ...passed 00:43:48.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:48.367 Test: blockdev writev readv 8 blocks ...passed 00:43:48.367 Test: blockdev writev readv 30 x 1block ...passed 00:43:48.367 Test: blockdev writev readv block ...passed 00:43:48.367 Test: blockdev writev readv size > 128k ...passed 00:43:48.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:48.367 Test: blockdev comparev and writev ...[2024-07-15 21:57:21.656943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0xb8a0d000 len:0x1000 00:43:48.367 [2024-07-15 21:57:21.657117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:48.367 passed 00:43:48.367 Test: blockdev nvme passthru rw ...passed 00:43:48.367 Test: blockdev nvme passthru vendor specific ...passed 00:43:48.367 Test: blockdev nvme admin passthru ...passed 00:43:48.367 Test: blockdev copy ...passed 00:43:48.367 Suite: bdevio tests on: Nvme0n1p1 00:43:48.367 Test: blockdev write read block ...passed 00:43:48.367 Test: blockdev write zeroes read block ...passed 00:43:48.367 Test: blockdev write zeroes read no split ...passed 00:43:48.367 Test: blockdev write zeroes read split ...passed 00:43:48.626 Test: blockdev write zeroes read split partial ...passed 00:43:48.626 Test: blockdev reset ...[2024-07-15 21:57:21.751277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:43:48.626 [2024-07-15 21:57:21.755354] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:48.626 passed 00:43:48.626 Test: blockdev write read 8 blocks ...passed 00:43:48.626 Test: blockdev write read size > 128k ...passed 00:43:48.626 Test: blockdev write read invalid size ...passed 00:43:48.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:48.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:48.626 Test: blockdev write read max offset ...passed 00:43:48.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:48.626 Test: blockdev writev readv 8 blocks ...passed 00:43:48.626 Test: blockdev writev readv 30 x 1block ...passed 00:43:48.626 Test: blockdev writev readv block ...passed 00:43:48.626 Test: blockdev writev readv size > 128k ...passed 00:43:48.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:48.626 Test: blockdev comparev and writev ...[2024-07-15 21:57:21.762489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0xb8a09000 len:0x1000 00:43:48.626 [2024-07-15 21:57:21.762646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:43:48.626 passed 00:43:48.626 Test: blockdev nvme passthru rw ...passed 00:43:48.626 Test: blockdev nvme passthru vendor specific ...passed 00:43:48.626 Test: blockdev nvme admin passthru ...passed 00:43:48.626 Test: blockdev copy ...passed 00:43:48.626 00:43:48.626 Run Summary: Type Total Ran Passed Failed Inactive 00:43:48.626 suites 2 2 n/a 0 0 00:43:48.626 tests 46 46 46 0 0 00:43:48.626 asserts 284 284 284 0 n/a 00:43:48.626 00:43:48.626 Elapsed time = 0.603 seconds 00:43:48.626 0 00:43:48.626 21:57:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 172281 00:43:48.626 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 172281 ']' 00:43:48.626 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 172281 00:43:48.626 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:43:48.626 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:48.626 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172281 00:43:48.627 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:48.627 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:48.627 killing process with pid 172281 00:43:48.627 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172281' 00:43:48.627 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 172281 00:43:48.627 21:57:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 172281 00:43:50.530 ************************************ 00:43:50.530 END TEST bdev_bounds 00:43:50.530 ************************************ 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:43:50.530 00:43:50.530 real 0m3.049s 00:43:50.530 user 0m7.262s 00:43:50.530 sys 0m0.287s 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:50.530 21:57:23 blockdev_nvme_gpt -- 
common/autotest_common.sh@1142 -- # return 0 00:43:50.530 21:57:23 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:43:50.530 21:57:23 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:43:50.530 21:57:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:50.530 21:57:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:50.530 ************************************ 00:43:50.530 START TEST bdev_nbd 00:43:50.530 ************************************ 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:43:50.530 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=172350 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 172350 /var/tmp/spdk-nbd.sock 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 172350 ']' 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:50.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
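The bdev_nbd test that starts here drives the two GPT bdevs through the kernel nbd driver: a bdev_svc app is started on its own RPC socket, each bdev is exported as /dev/nbdN, the block nodes are exercised with dd, and the exports are torn down again. Condensed from the trace that follows (same socket, bdev names and devices; the /tmp path is a placeholder for the harness's test file), the flow is roughly:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
rpc nbd_start_disk Nvme0n1p1 /dev/nbd0
rpc nbd_start_disk Nvme0n1p2 /dev/nbd1
rpc nbd_get_disks                                              # lists nbd_device/bdev_name pairs as JSON
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # quick readability check, as waitfornbd does
rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1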
00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:50.531 21:57:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:50.531 [2024-07-15 21:57:23.557917] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:50.531 [2024-07-15 21:57:23.558167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:50.531 [2024-07-15 21:57:23.726010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:50.788 [2024-07-15 21:57:23.935575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:43:51.046 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:43:51.304 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:51.304 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:43:51.304 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:51.304 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:51.304 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:51.304 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:51.304 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:51.304 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:51.562 1+0 records in 00:43:51.562 1+0 records out 00:43:51.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560095 s, 7.3 MB/s 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:43:51.562 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:51.821 1+0 records in 00:43:51.821 1+0 records out 00:43:51.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532722 s, 7.7 MB/s 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:51.821 21:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:51.822 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:51.822 21:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:43:51.822 21:57:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:52.080 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:52.080 { 00:43:52.080 "nbd_device": "/dev/nbd0", 00:43:52.080 "bdev_name": "Nvme0n1p1" 00:43:52.080 }, 00:43:52.080 { 00:43:52.080 "nbd_device": "/dev/nbd1", 00:43:52.080 "bdev_name": "Nvme0n1p2" 00:43:52.081 } 00:43:52.081 ]' 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:52.081 { 00:43:52.081 "nbd_device": "/dev/nbd0", 00:43:52.081 "bdev_name": "Nvme0n1p1" 00:43:52.081 }, 00:43:52.081 { 00:43:52.081 "nbd_device": "/dev/nbd1", 00:43:52.081 "bdev_name": "Nvme0n1p2" 00:43:52.081 } 00:43:52.081 ]' 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:52.081 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # 
break 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:52.339 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:52.904 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:52.904 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:52.904 21:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:52.904 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:43:53.162 /dev/nbd0 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:53.162 1+0 records in 00:43:53.162 1+0 records out 00:43:53.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708165 s, 5.8 MB/s 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:53.162 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:43:53.162 /dev/nbd1 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:53.420 1+0 records in 00:43:53.420 1+0 records out 00:43:53.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527284 s, 7.8 MB/s 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:53.420 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:53.679 { 00:43:53.679 "nbd_device": "/dev/nbd0", 00:43:53.679 "bdev_name": "Nvme0n1p1" 00:43:53.679 }, 00:43:53.679 { 00:43:53.679 "nbd_device": "/dev/nbd1", 00:43:53.679 "bdev_name": "Nvme0n1p2" 00:43:53.679 } 00:43:53.679 ]' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:53.679 { 00:43:53.679 "nbd_device": "/dev/nbd0", 00:43:53.679 "bdev_name": "Nvme0n1p1" 00:43:53.679 }, 00:43:53.679 { 00:43:53.679 "nbd_device": "/dev/nbd1", 00:43:53.679 "bdev_name": "Nvme0n1p2" 00:43:53.679 } 00:43:53.679 ]' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:43:53.679 /dev/nbd1' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:43:53.679 /dev/nbd1' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:53.679 256+0 records in 00:43:53.679 256+0 records out 00:43:53.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406772 s, 258 MB/s 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:53.679 256+0 records in 00:43:53.679 256+0 records out 00:43:53.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0562716 s, 18.6 MB/s 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:43:53.679 256+0 records in 00:43:53.679 256+0 records out 00:43:53.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0589106 s, 17.8 MB/s 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:53.679 21:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:53.679 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:53.937 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:54.195 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:43:54.474 21:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:54.757 malloc_lvol_verify 
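For readers following the xtrace above: the nbd data-verify pass that just completed boils down to the sequence below, reconstructed from the commands visible in the log (the shell variables are shorthand introduced here; the paths, RPC socket and /dev/nbd* devices are the ones the log shows). The lvol-backed verify continues below.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    # fill a 1 MiB scratch file with random data, then write it to each exported NBD device
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    # read each device back and compare against the scratch file; any mismatch fails the test
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"
    # detach the NBD devices again through the dedicated RPC socket
    for nbd in /dev/nbd0 /dev/nbd1; do
        "$rpc" -s "$sock" nbd_stop_disk "$nbd"
    done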
00:43:54.757 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:55.016 9f557ed8-ca21-46b4-a4c2-e20d4259993d 00:43:55.016 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:55.273 2064e801-76fb-4fa0-8aea-8565833b98ed 00:43:55.273 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:55.530 /dev/nbd0 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:55.530 Creating filesystem with 1024 4k blocks and 1024 inodes 00:43:55.530 00:43:55.530 Allocating group tables: 0/1 done 00:43:55.530 Writing inode tables: 0/1 done 00:43:55.530 Writing superblocks and filesystem accounting information: 0/1 mke2fs 1.45.5 (07-Jan-2020) 00:43:55.530 00:43:55.530 Filesystem too small for a journal 00:43:55.530 done 00:43:55.530 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:55.530 21:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 172350 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 172350 ']' 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 172350 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172350 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172350' 00:43:55.787 killing process with pid 172350 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 172350 00:43:55.787 21:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 172350 00:43:57.689 21:57:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:43:57.689 00:43:57.689 real 0m7.102s 00:43:57.689 user 0m10.195s 00:43:57.689 sys 0m1.481s 00:43:57.689 21:57:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:57.689 21:57:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:57.689 ************************************ 00:43:57.689 END TEST bdev_nbd 00:43:57.689 ************************************ 00:43:57.689 21:57:30 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:43:57.689 21:57:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:43:57.689 21:57:30 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:43:57.689 21:57:30 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:43:57.689 21:57:30 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:43:57.689 skipping fio tests on NVMe due to multi-ns failures. 00:43:57.689 21:57:30 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:57.689 21:57:30 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:57.689 21:57:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:57.689 21:57:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:57.689 21:57:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:57.689 ************************************ 00:43:57.689 START TEST bdev_verify 00:43:57.689 ************************************ 00:43:57.689 21:57:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:57.689 [2024-07-15 21:57:30.701491] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:43:57.689 [2024-07-15 21:57:30.701678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172616 ] 00:43:57.689 [2024-07-15 21:57:30.860687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:57.948 [2024-07-15 21:57:31.106323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:57.948 [2024-07-15 21:57:31.106328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:58.206 Running I/O for 5 seconds... 
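The latency table that follows summarizes the verify run started above. For reference, the bdevperf invocation is the one visible in the xtrace; the flag notes are annotations added here, not harness output.

    #   -q 128      queue depth per job
    #   -o 4096     I/O size in bytes
    #   -w verify   write-then-read-back-and-compare workload
    #   -t 5        run time in seconds
    #   -m 0x3      core mask (cores 0 and 1); -C is passed as-is by the harness
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3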
00:44:03.492 00:44:03.492 Latency(us) 00:44:03.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:03.492 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:03.492 Verification LBA range: start 0x0 length 0x4ff80 00:44:03.492 Nvme0n1p1 : 5.02 5125.45 20.02 0.00 0.00 24901.74 4922.35 36402.53 00:44:03.492 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:03.492 Verification LBA range: start 0x4ff80 length 0x4ff80 00:44:03.492 Nvme0n1p1 : 5.03 5067.89 19.80 0.00 0.00 25187.06 3033.54 38920.94 00:44:03.492 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:03.492 Verification LBA range: start 0x0 length 0x4ff7f 00:44:03.492 Nvme0n1p2 : 5.02 5123.67 20.01 0.00 0.00 24867.10 2976.31 40065.68 00:44:03.492 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:03.492 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:44:03.492 Nvme0n1p2 : 5.03 5066.47 19.79 0.00 0.00 25151.82 3019.23 36631.48 00:44:03.492 =================================================================================================================== 00:44:03.492 Total : 20383.49 79.62 0.00 0.00 25026.22 2976.31 40065.68 00:44:06.021 ************************************ 00:44:06.021 END TEST bdev_verify 00:44:06.021 ************************************ 00:44:06.021 00:44:06.021 real 0m8.188s 00:44:06.021 user 0m15.095s 00:44:06.021 sys 0m0.219s 00:44:06.021 21:57:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:06.021 21:57:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:44:06.021 21:57:38 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:44:06.021 21:57:38 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:06.021 21:57:38 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:44:06.021 21:57:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:06.021 21:57:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:06.021 ************************************ 00:44:06.021 START TEST bdev_verify_big_io 00:44:06.021 ************************************ 00:44:06.021 21:57:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:06.021 [2024-07-15 21:57:38.941585] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:44:06.021 [2024-07-15 21:57:38.941765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172746 ] 00:44:06.021 [2024-07-15 21:57:39.112387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:06.021 [2024-07-15 21:57:39.333219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:06.021 [2024-07-15 21:57:39.333226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.589 Running I/O for 5 seconds... 
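The big-I/O variant started above reuses the same bdevperf setup with 64 KiB I/Os; its results follow in the next table. Reconstructed from the xtrace (the trailing comment is an annotation added here):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3    # same as the 4 KiB verify run except -o 65536 (64 KiB per I/O)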
00:44:11.884 00:44:11.884 Latency(us) 00:44:11.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:11.884 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:11.884 Verification LBA range: start 0x0 length 0x4ff8 00:44:11.884 Nvme0n1p1 : 5.19 394.91 24.68 0.00 0.00 316241.96 5437.48 346167.45 00:44:11.884 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:11.884 Verification LBA range: start 0x4ff8 length 0x4ff8 00:44:11.884 Nvme0n1p1 : 5.18 395.46 24.72 0.00 0.00 316810.73 13794.04 337009.58 00:44:11.884 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:11.884 Verification LBA range: start 0x0 length 0x4ff7 00:44:11.884 Nvme0n1p2 : 5.24 413.34 25.83 0.00 0.00 292548.56 1287.83 349830.60 00:44:11.884 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:11.884 Verification LBA range: start 0x4ff7 length 0x4ff7 00:44:11.884 Nvme0n1p2 : 5.24 403.32 25.21 0.00 0.00 301200.87 2146.38 291220.23 00:44:11.884 =================================================================================================================== 00:44:11.884 Total : 1607.04 100.44 0.00 0.00 306457.42 1287.83 349830.60 00:44:13.891 ************************************ 00:44:13.891 END TEST bdev_verify_big_io 00:44:13.891 ************************************ 00:44:13.891 00:44:13.891 real 0m8.119s 00:44:13.891 user 0m14.958s 00:44:13.891 sys 0m0.251s 00:44:13.891 21:57:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:13.891 21:57:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:44:13.891 21:57:47 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:44:13.891 21:57:47 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:13.891 21:57:47 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:13.891 21:57:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:13.891 21:57:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:13.891 ************************************ 00:44:13.891 START TEST bdev_write_zeroes 00:44:13.891 ************************************ 00:44:13.891 21:57:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:13.891 [2024-07-15 21:57:47.115666] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:44:13.891 [2024-07-15 21:57:47.115827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172865 ] 00:44:14.149 [2024-07-15 21:57:47.276818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:14.149 [2024-07-15 21:57:47.488016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:14.716 Running I/O for 1 seconds... 
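The write_zeroes pass started above uses the same bdevperf harness, differing only in workload and duration; the table that follows reports its per-bdev results. Reconstructed from the xtrace (the trailing comment is an annotation added here):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1    # no core mask given; the EAL log above shows a single reactor on core 0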
00:44:15.650 00:44:15.650 Latency(us) 00:44:15.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.650 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:15.650 Nvme0n1p1 : 1.01 22474.99 87.79 0.00 0.00 5673.28 2318.09 16942.06 00:44:15.650 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:15.650 Nvme0n1p2 : 1.01 22494.78 87.87 0.00 0.00 5670.24 2260.85 14881.54 00:44:15.650 =================================================================================================================== 00:44:15.650 Total : 44969.77 175.66 0.00 0.00 5671.76 2260.85 16942.06 00:44:17.024 ************************************ 00:44:17.024 END TEST bdev_write_zeroes 00:44:17.024 ************************************ 00:44:17.024 00:44:17.024 real 0m3.228s 00:44:17.024 user 0m2.900s 00:44:17.024 sys 0m0.229s 00:44:17.024 21:57:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:17.024 21:57:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:44:17.024 21:57:50 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:44:17.024 21:57:50 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:17.024 21:57:50 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:17.024 21:57:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:17.024 21:57:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:17.024 ************************************ 00:44:17.024 START TEST bdev_json_nonenclosed 00:44:17.024 ************************************ 00:44:17.024 21:57:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:17.282 [2024-07-15 21:57:50.413115] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:44:17.282 [2024-07-15 21:57:50.413339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172922 ] 00:44:17.282 [2024-07-15 21:57:50.599424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:17.540 [2024-07-15 21:57:50.813567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:17.540 [2024-07-15 21:57:50.813685] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:44:17.540 [2024-07-15 21:57:50.813734] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:17.540 [2024-07-15 21:57:50.813759] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:18.106 00:44:18.106 real 0m0.891s 00:44:18.106 user 0m0.645s 00:44:18.106 sys 0m0.138s 00:44:18.106 21:57:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:44:18.106 21:57:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:18.106 21:57:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:44:18.106 ************************************ 00:44:18.106 END TEST bdev_json_nonenclosed 00:44:18.106 ************************************ 00:44:18.106 21:57:51 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:44:18.106 21:57:51 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:44:18.106 21:57:51 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:18.106 21:57:51 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:18.106 21:57:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:18.106 21:57:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:18.106 ************************************ 00:44:18.106 START TEST bdev_json_nonarray 00:44:18.106 ************************************ 00:44:18.107 21:57:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:18.107 [2024-07-15 21:57:51.363163] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:44:18.107 [2024-07-15 21:57:51.363332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172960 ] 00:44:18.365 [2024-07-15 21:57:51.526722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:18.365 [2024-07-15 21:57:51.736705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:18.365 [2024-07-15 21:57:51.736861] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:44:18.365 [2024-07-15 21:57:51.736913] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:18.365 [2024-07-15 21:57:51.736939] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:18.933 00:44:18.934 real 0m0.862s 00:44:18.934 user 0m0.641s 00:44:18.934 sys 0m0.121s 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:44:18.934 ************************************ 00:44:18.934 END TEST bdev_json_nonarray 00:44:18.934 ************************************ 00:44:18.934 21:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:44:18.934 21:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:44:18.934 21:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:44:18.934 21:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:44:18.934 21:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:44:18.934 21:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:18.934 21:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:18.934 21:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:18.934 ************************************ 00:44:18.934 START TEST bdev_gpt_uuid 00:44:18.934 ************************************ 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=172999 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 172999 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 172999 ']' 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:18.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:18.934 21:57:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:18.934 [2024-07-15 21:57:52.304544] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:44:18.934 [2024-07-15 21:57:52.304785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172999 ] 00:44:19.193 [2024-07-15 21:57:52.464168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:19.450 [2024-07-15 21:57:52.701424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:20.386 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:20.386 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:44:20.386 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:20.386 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:20.386 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:20.651 Some configs were skipped because the RPC state that can call them passed over. 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:44:20.651 { 00:44:20.651 "name": "Nvme0n1p1", 00:44:20.651 "aliases": [ 00:44:20.651 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:44:20.651 ], 00:44:20.651 "product_name": "GPT Disk", 00:44:20.651 "block_size": 4096, 00:44:20.651 "num_blocks": 655104, 00:44:20.651 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:44:20.651 "assigned_rate_limits": { 00:44:20.651 "rw_ios_per_sec": 0, 00:44:20.651 "rw_mbytes_per_sec": 0, 00:44:20.651 "r_mbytes_per_sec": 0, 00:44:20.651 "w_mbytes_per_sec": 0 00:44:20.651 }, 00:44:20.651 "claimed": false, 00:44:20.651 "zoned": false, 00:44:20.651 "supported_io_types": { 00:44:20.651 "read": true, 00:44:20.651 "write": true, 00:44:20.651 "unmap": true, 00:44:20.651 "flush": true, 00:44:20.651 "reset": true, 00:44:20.651 "nvme_admin": false, 00:44:20.651 "nvme_io": false, 00:44:20.651 "nvme_io_md": false, 00:44:20.651 "write_zeroes": true, 00:44:20.651 "zcopy": false, 00:44:20.651 "get_zone_info": false, 00:44:20.651 "zone_management": false, 00:44:20.651 "zone_append": false, 00:44:20.651 "compare": true, 00:44:20.651 "compare_and_write": false, 00:44:20.651 "abort": true, 00:44:20.651 "seek_hole": false, 00:44:20.651 "seek_data": false, 00:44:20.651 "copy": true, 00:44:20.651 "nvme_iov_md": false 00:44:20.651 }, 00:44:20.651 "driver_specific": { 
00:44:20.651 "gpt": { 00:44:20.651 "base_bdev": "Nvme0n1", 00:44:20.651 "offset_blocks": 256, 00:44:20.651 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:44:20.651 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:44:20.651 "partition_name": "SPDK_TEST_first" 00:44:20.651 } 00:44:20.651 } 00:44:20.651 } 00:44:20.651 ]' 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:44:20.651 21:57:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:44:20.915 { 00:44:20.915 "name": "Nvme0n1p2", 00:44:20.915 "aliases": [ 00:44:20.915 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:44:20.915 ], 00:44:20.915 "product_name": "GPT Disk", 00:44:20.915 "block_size": 4096, 00:44:20.915 "num_blocks": 655103, 00:44:20.915 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:44:20.915 "assigned_rate_limits": { 00:44:20.915 "rw_ios_per_sec": 0, 00:44:20.915 "rw_mbytes_per_sec": 0, 00:44:20.915 "r_mbytes_per_sec": 0, 00:44:20.915 "w_mbytes_per_sec": 0 00:44:20.915 }, 00:44:20.915 "claimed": false, 00:44:20.915 "zoned": false, 00:44:20.915 "supported_io_types": { 00:44:20.915 "read": true, 00:44:20.915 "write": true, 00:44:20.915 "unmap": true, 00:44:20.915 "flush": true, 00:44:20.915 "reset": true, 00:44:20.915 "nvme_admin": false, 00:44:20.915 "nvme_io": false, 00:44:20.915 "nvme_io_md": false, 00:44:20.915 "write_zeroes": true, 00:44:20.915 "zcopy": false, 00:44:20.915 "get_zone_info": false, 00:44:20.915 "zone_management": false, 00:44:20.915 "zone_append": false, 00:44:20.915 "compare": true, 00:44:20.915 "compare_and_write": false, 00:44:20.915 "abort": true, 00:44:20.915 "seek_hole": false, 00:44:20.915 "seek_data": false, 00:44:20.915 "copy": true, 00:44:20.915 "nvme_iov_md": false 00:44:20.915 }, 00:44:20.915 "driver_specific": { 00:44:20.915 "gpt": { 00:44:20.915 "base_bdev": "Nvme0n1", 00:44:20.915 "offset_blocks": 655360, 00:44:20.915 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:44:20.915 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:44:20.915 "partition_name": "SPDK_TEST_second" 00:44:20.915 } 00:44:20.915 } 00:44:20.915 } 00:44:20.915 ]' 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 172999 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 172999 ']' 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 172999 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172999 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172999' 00:44:20.915 killing process with pid 172999 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 172999 00:44:20.915 21:57:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 172999 00:44:24.204 ************************************ 00:44:24.204 END TEST bdev_gpt_uuid 00:44:24.204 ************************************ 00:44:24.204 00:44:24.204 real 0m4.854s 00:44:24.204 user 0m5.136s 00:44:24.204 sys 0m0.418s 00:44:24.204 21:57:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:24.204 21:57:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:44:24.204 21:57:57 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:44:24.204 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:24.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:24.204 Waiting for block devices as requested 00:44:24.204 
0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:24.463 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:44:24.463 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:44:24.463 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:44:24.463 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:44:24.463 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:44:24.463 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:44:24.463 21:57:57 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:44:24.463 ************************************ 00:44:24.463 END TEST blockdev_nvme_gpt 00:44:24.463 ************************************ 00:44:24.463 00:44:24.463 real 0m49.600s 00:44:24.463 user 1m9.666s 00:44:24.463 sys 0m5.932s 00:44:24.463 21:57:57 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:24.463 21:57:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:44:24.463 21:57:57 -- common/autotest_common.sh@1142 -- # return 0 00:44:24.463 21:57:57 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:44:24.463 21:57:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:24.463 21:57:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:24.463 21:57:57 -- common/autotest_common.sh@10 -- # set +x 00:44:24.463 ************************************ 00:44:24.463 START TEST nvme 00:44:24.463 ************************************ 00:44:24.463 21:57:57 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:44:24.725 * Looking for test storage... 00:44:24.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:24.725 21:57:57 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:24.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:25.247 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:26.184 21:57:59 nvme -- nvme/nvme.sh@79 -- # uname 00:44:26.184 21:57:59 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:44:26.184 21:57:59 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:44:26.184 21:57:59 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:44:26.184 Waiting for stub to ready for secondary processes... 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1069 -- # stubpid=173470 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/173470 ]] 00:44:26.184 21:57:59 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:44:26.184 [2024-07-15 21:57:59.430214] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:44:26.184 [2024-07-15 21:57:59.430847] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:44:27.121 21:58:00 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:27.121 21:58:00 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/173470 ]] 00:44:27.121 21:58:00 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:44:27.121 [2024-07-15 21:58:00.450505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:27.379 [2024-07-15 21:58:00.659677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:44:27.379 [2024-07-15 21:58:00.659829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:27.379 [2024-07-15 21:58:00.659838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:44:27.379 [2024-07-15 21:58:00.668578] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:44:27.379 [2024-07-15 21:58:00.668743] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:44:27.379 [2024-07-15 21:58:00.676481] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:44:27.379 [2024-07-15 21:58:00.677123] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:44:28.313 done. 00:44:28.313 21:58:01 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:44:28.313 21:58:01 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:44:28.313 21:58:01 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:44:28.313 21:58:01 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:44:28.313 21:58:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:28.313 21:58:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:28.313 ************************************ 00:44:28.313 START TEST nvme_reset 00:44:28.313 ************************************ 00:44:28.313 21:58:01 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:44:28.313 Initializing NVMe Controllers 00:44:28.313 Skipping QEMU NVMe SSD at 0000:00:10.0 00:44:28.313 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:44:28.313 00:44:28.313 real 0m0.267s 00:44:28.313 user 0m0.092s 00:44:28.313 sys 0m0.113s 00:44:28.313 21:58:01 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:28.313 21:58:01 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:44:28.313 ************************************ 00:44:28.313 END TEST nvme_reset 00:44:28.313 ************************************ 00:44:28.571 21:58:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:28.571 21:58:01 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:44:28.571 21:58:01 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:28.571 21:58:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:28.571 21:58:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:28.571 ************************************ 00:44:28.571 START TEST nvme_identify 00:44:28.571 ************************************ 00:44:28.571 
21:58:01 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:44:28.571 21:58:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:44:28.571 21:58:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:44:28.571 21:58:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:44:28.571 21:58:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:44:28.571 21:58:01 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:44:28.571 21:58:01 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:44:28.571 21:58:01 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:28.571 21:58:01 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:28.571 21:58:01 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:28.571 21:58:01 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:28.571 21:58:01 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:44:28.571 21:58:01 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:44:28.830 [2024-07-15 21:58:02.021617] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 173503 terminated unexpected 00:44:28.830 ===================================================== 00:44:28.830 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:28.830 ===================================================== 00:44:28.830 Controller Capabilities/Features 00:44:28.830 ================================ 00:44:28.830 Vendor ID: 1b36 00:44:28.830 Subsystem Vendor ID: 1af4 00:44:28.830 Serial Number: 12340 00:44:28.830 Model Number: QEMU NVMe Ctrl 00:44:28.830 Firmware Version: 8.0.0 00:44:28.830 Recommended Arb Burst: 6 00:44:28.830 IEEE OUI Identifier: 00 54 52 00:44:28.830 Multi-path I/O 00:44:28.830 May have multiple subsystem ports: No 00:44:28.830 May have multiple controllers: No 00:44:28.830 Associated with SR-IOV VF: No 00:44:28.830 Max Data Transfer Size: 524288 00:44:28.830 Max Number of Namespaces: 256 00:44:28.830 Max Number of I/O Queues: 64 00:44:28.830 NVMe Specification Version (VS): 1.4 00:44:28.830 NVMe Specification Version (Identify): 1.4 00:44:28.830 Maximum Queue Entries: 2048 00:44:28.830 Contiguous Queues Required: Yes 00:44:28.830 Arbitration Mechanisms Supported 00:44:28.830 Weighted Round Robin: Not Supported 00:44:28.830 Vendor Specific: Not Supported 00:44:28.830 Reset Timeout: 7500 ms 00:44:28.830 Doorbell Stride: 4 bytes 00:44:28.830 NVM Subsystem Reset: Not Supported 00:44:28.830 Command Sets Supported 00:44:28.830 NVM Command Set: Supported 00:44:28.830 Boot Partition: Not Supported 00:44:28.830 Memory Page Size Minimum: 4096 bytes 00:44:28.830 Memory Page Size Maximum: 65536 bytes 00:44:28.830 Persistent Memory Region: Not Supported 00:44:28.830 Optional Asynchronous Events Supported 00:44:28.830 Namespace Attribute Notices: Supported 00:44:28.830 Firmware Activation Notices: Not Supported 00:44:28.830 ANA Change Notices: Not Supported 00:44:28.830 PLE Aggregate Log Change Notices: Not Supported 00:44:28.830 LBA Status Info Alert Notices: Not Supported 00:44:28.830 EGE Aggregate Log Change Notices: Not Supported 00:44:28.830 Normal NVM Subsystem Shutdown event: Not Supported 00:44:28.830 Zone Descriptor Change Notices: Not Supported 00:44:28.830 
Discovery Log Change Notices: Not Supported 00:44:28.830 Controller Attributes 00:44:28.830 128-bit Host Identifier: Not Supported 00:44:28.830 Non-Operational Permissive Mode: Not Supported 00:44:28.830 NVM Sets: Not Supported 00:44:28.830 Read Recovery Levels: Not Supported 00:44:28.830 Endurance Groups: Not Supported 00:44:28.830 Predictable Latency Mode: Not Supported 00:44:28.830 Traffic Based Keep ALive: Not Supported 00:44:28.830 Namespace Granularity: Not Supported 00:44:28.830 SQ Associations: Not Supported 00:44:28.830 UUID List: Not Supported 00:44:28.830 Multi-Domain Subsystem: Not Supported 00:44:28.830 Fixed Capacity Management: Not Supported 00:44:28.830 Variable Capacity Management: Not Supported 00:44:28.830 Delete Endurance Group: Not Supported 00:44:28.830 Delete NVM Set: Not Supported 00:44:28.830 Extended LBA Formats Supported: Supported 00:44:28.830 Flexible Data Placement Supported: Not Supported 00:44:28.830 00:44:28.830 Controller Memory Buffer Support 00:44:28.830 ================================ 00:44:28.830 Supported: No 00:44:28.830 00:44:28.830 Persistent Memory Region Support 00:44:28.830 ================================ 00:44:28.830 Supported: No 00:44:28.830 00:44:28.830 Admin Command Set Attributes 00:44:28.830 ============================ 00:44:28.830 Security Send/Receive: Not Supported 00:44:28.830 Format NVM: Supported 00:44:28.830 Firmware Activate/Download: Not Supported 00:44:28.830 Namespace Management: Supported 00:44:28.830 Device Self-Test: Not Supported 00:44:28.830 Directives: Supported 00:44:28.830 NVMe-MI: Not Supported 00:44:28.830 Virtualization Management: Not Supported 00:44:28.830 Doorbell Buffer Config: Supported 00:44:28.830 Get LBA Status Capability: Not Supported 00:44:28.830 Command & Feature Lockdown Capability: Not Supported 00:44:28.830 Abort Command Limit: 4 00:44:28.830 Async Event Request Limit: 4 00:44:28.830 Number of Firmware Slots: N/A 00:44:28.830 Firmware Slot 1 Read-Only: N/A 00:44:28.830 Firmware Activation Without Reset: N/A 00:44:28.830 Multiple Update Detection Support: N/A 00:44:28.830 Firmware Update Granularity: No Information Provided 00:44:28.830 Per-Namespace SMART Log: Yes 00:44:28.830 Asymmetric Namespace Access Log Page: Not Supported 00:44:28.830 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:44:28.830 Command Effects Log Page: Supported 00:44:28.830 Get Log Page Extended Data: Supported 00:44:28.830 Telemetry Log Pages: Not Supported 00:44:28.830 Persistent Event Log Pages: Not Supported 00:44:28.830 Supported Log Pages Log Page: May Support 00:44:28.830 Commands Supported & Effects Log Page: Not Supported 00:44:28.830 Feature Identifiers & Effects Log Page:May Support 00:44:28.830 NVMe-MI Commands & Effects Log Page: May Support 00:44:28.830 Data Area 4 for Telemetry Log: Not Supported 00:44:28.830 Error Log Page Entries Supported: 1 00:44:28.830 Keep Alive: Not Supported 00:44:28.830 00:44:28.830 NVM Command Set Attributes 00:44:28.830 ========================== 00:44:28.830 Submission Queue Entry Size 00:44:28.830 Max: 64 00:44:28.830 Min: 64 00:44:28.830 Completion Queue Entry Size 00:44:28.830 Max: 16 00:44:28.830 Min: 16 00:44:28.830 Number of Namespaces: 256 00:44:28.830 Compare Command: Supported 00:44:28.830 Write Uncorrectable Command: Not Supported 00:44:28.830 Dataset Management Command: Supported 00:44:28.831 Write Zeroes Command: Supported 00:44:28.831 Set Features Save Field: Supported 00:44:28.831 Reservations: Not Supported 00:44:28.831 Timestamp: Supported 00:44:28.831 Copy: Supported 
00:44:28.831 Volatile Write Cache: Present 00:44:28.831 Atomic Write Unit (Normal): 1 00:44:28.831 Atomic Write Unit (PFail): 1 00:44:28.831 Atomic Compare & Write Unit: 1 00:44:28.831 Fused Compare & Write: Not Supported 00:44:28.831 Scatter-Gather List 00:44:28.831 SGL Command Set: Supported 00:44:28.831 SGL Keyed: Not Supported 00:44:28.831 SGL Bit Bucket Descriptor: Not Supported 00:44:28.831 SGL Metadata Pointer: Not Supported 00:44:28.831 Oversized SGL: Not Supported 00:44:28.831 SGL Metadata Address: Not Supported 00:44:28.831 SGL Offset: Not Supported 00:44:28.831 Transport SGL Data Block: Not Supported 00:44:28.831 Replay Protected Memory Block: Not Supported 00:44:28.831 00:44:28.831 Firmware Slot Information 00:44:28.831 ========================= 00:44:28.831 Active slot: 1 00:44:28.831 Slot 1 Firmware Revision: 1.0 00:44:28.831 00:44:28.831 00:44:28.831 Commands Supported and Effects 00:44:28.831 ============================== 00:44:28.831 Admin Commands 00:44:28.831 -------------- 00:44:28.831 Delete I/O Submission Queue (00h): Supported 00:44:28.831 Create I/O Submission Queue (01h): Supported 00:44:28.831 Get Log Page (02h): Supported 00:44:28.831 Delete I/O Completion Queue (04h): Supported 00:44:28.831 Create I/O Completion Queue (05h): Supported 00:44:28.831 Identify (06h): Supported 00:44:28.831 Abort (08h): Supported 00:44:28.831 Set Features (09h): Supported 00:44:28.831 Get Features (0Ah): Supported 00:44:28.831 Asynchronous Event Request (0Ch): Supported 00:44:28.831 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:28.831 Directive Send (19h): Supported 00:44:28.831 Directive Receive (1Ah): Supported 00:44:28.831 Virtualization Management (1Ch): Supported 00:44:28.831 Doorbell Buffer Config (7Ch): Supported 00:44:28.831 Format NVM (80h): Supported LBA-Change 00:44:28.831 I/O Commands 00:44:28.831 ------------ 00:44:28.831 Flush (00h): Supported LBA-Change 00:44:28.831 Write (01h): Supported LBA-Change 00:44:28.831 Read (02h): Supported 00:44:28.831 Compare (05h): Supported 00:44:28.831 Write Zeroes (08h): Supported LBA-Change 00:44:28.831 Dataset Management (09h): Supported LBA-Change 00:44:28.831 Unknown (0Ch): Supported 00:44:28.831 Unknown (12h): Supported 00:44:28.831 Copy (19h): Supported LBA-Change 00:44:28.831 Unknown (1Dh): Supported LBA-Change 00:44:28.831 00:44:28.831 Error Log 00:44:28.831 ========= 00:44:28.831 00:44:28.831 Arbitration 00:44:28.831 =========== 00:44:28.831 Arbitration Burst: no limit 00:44:28.831 00:44:28.831 Power Management 00:44:28.831 ================ 00:44:28.831 Number of Power States: 1 00:44:28.831 Current Power State: Power State #0 00:44:28.831 Power State #0: 00:44:28.831 Max Power: 25.00 W 00:44:28.831 Non-Operational State: Operational 00:44:28.831 Entry Latency: 16 microseconds 00:44:28.831 Exit Latency: 4 microseconds 00:44:28.831 Relative Read Throughput: 0 00:44:28.831 Relative Read Latency: 0 00:44:28.831 Relative Write Throughput: 0 00:44:28.831 Relative Write Latency: 0 00:44:28.831 Idle Power: Not Reported 00:44:28.831 Active Power: Not Reported 00:44:28.831 Non-Operational Permissive Mode: Not Supported 00:44:28.831 00:44:28.831 Health Information 00:44:28.831 ================== 00:44:28.831 Critical Warnings: 00:44:28.831 Available Spare Space: OK 00:44:28.831 Temperature: OK 00:44:28.831 Device Reliability: OK 00:44:28.831 Read Only: No 00:44:28.831 Volatile Memory Backup: OK 00:44:28.831 Current Temperature: 323 Kelvin (50 Celsius) 00:44:28.831 Temperature Threshold: 343 Kelvin (70 Celsius) 
00:44:28.831 Available Spare: 0% 00:44:28.831 Available Spare Threshold: 0% 00:44:28.831 Life Percentage Used: 0% 00:44:28.831 Data Units Read: 4481 00:44:28.831 Data Units Written: 4134 00:44:28.831 Host Read Commands: 236131 00:44:28.831 Host Write Commands: 249039 00:44:28.831 Controller Busy Time: 0 minutes 00:44:28.831 Power Cycles: 0 00:44:28.831 Power On Hours: 0 hours 00:44:28.831 Unsafe Shutdowns: 0 00:44:28.831 Unrecoverable Media Errors: 0 00:44:28.831 Lifetime Error Log Entries: 0 00:44:28.831 Warning Temperature Time: 0 minutes 00:44:28.831 Critical Temperature Time: 0 minutes 00:44:28.831 00:44:28.831 Number of Queues 00:44:28.831 ================ 00:44:28.831 Number of I/O Submission Queues: 64 00:44:28.831 Number of I/O Completion Queues: 64 00:44:28.831 00:44:28.831 ZNS Specific Controller Data 00:44:28.831 ============================ 00:44:28.831 Zone Append Size Limit: 0 00:44:28.831 00:44:28.831 00:44:28.831 Active Namespaces 00:44:28.831 ================= 00:44:28.831 Namespace ID:1 00:44:28.831 Error Recovery Timeout: Unlimited 00:44:28.831 Command Set Identifier: NVM (00h) 00:44:28.831 Deallocate: Supported 00:44:28.831 Deallocated/Unwritten Error: Supported 00:44:28.831 Deallocated Read Value: All 0x00 00:44:28.831 Deallocate in Write Zeroes: Not Supported 00:44:28.831 Deallocated Guard Field: 0xFFFF 00:44:28.831 Flush: Supported 00:44:28.831 Reservation: Not Supported 00:44:28.831 Namespace Sharing Capabilities: Private 00:44:28.831 Size (in LBAs): 1310720 (5GiB) 00:44:28.832 Capacity (in LBAs): 1310720 (5GiB) 00:44:28.832 Utilization (in LBAs): 1310720 (5GiB) 00:44:28.832 Thin Provisioning: Not Supported 00:44:28.832 Per-NS Atomic Units: No 00:44:28.832 Maximum Single Source Range Length: 128 00:44:28.832 Maximum Copy Length: 128 00:44:28.832 Maximum Source Range Count: 128 00:44:28.832 NGUID/EUI64 Never Reused: No 00:44:28.832 Namespace Write Protected: No 00:44:28.832 Number of LBA Formats: 8 00:44:28.832 Current LBA Format: LBA Format #04 00:44:28.832 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:28.832 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:28.832 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:28.832 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:28.832 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:28.832 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:28.832 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:28.832 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:28.832 00:44:28.832 NVM Specific Namespace Data 00:44:28.832 =========================== 00:44:28.832 Logical Block Storage Tag Mask: 0 00:44:28.832 Protection Information Capabilities: 00:44:28.832 16b Guard Protection Information Storage Tag Support: No 00:44:28.832 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:28.832 Storage Tag Check Read Support: No 00:44:28.832 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:28.832 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:28.832 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:28.832 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:28.832 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:28.832 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:28.832 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:28.832 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:28.832 21:58:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:44:28.832 21:58:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:44:29.091 ===================================================== 00:44:29.091 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:29.091 ===================================================== 00:44:29.091 Controller Capabilities/Features 00:44:29.091 ================================ 00:44:29.091 Vendor ID: 1b36 00:44:29.091 Subsystem Vendor ID: 1af4 00:44:29.091 Serial Number: 12340 00:44:29.091 Model Number: QEMU NVMe Ctrl 00:44:29.091 Firmware Version: 8.0.0 00:44:29.091 Recommended Arb Burst: 6 00:44:29.091 IEEE OUI Identifier: 00 54 52 00:44:29.091 Multi-path I/O 00:44:29.091 May have multiple subsystem ports: No 00:44:29.091 May have multiple controllers: No 00:44:29.091 Associated with SR-IOV VF: No 00:44:29.091 Max Data Transfer Size: 524288 00:44:29.091 Max Number of Namespaces: 256 00:44:29.091 Max Number of I/O Queues: 64 00:44:29.091 NVMe Specification Version (VS): 1.4 00:44:29.091 NVMe Specification Version (Identify): 1.4 00:44:29.091 Maximum Queue Entries: 2048 00:44:29.091 Contiguous Queues Required: Yes 00:44:29.091 Arbitration Mechanisms Supported 00:44:29.091 Weighted Round Robin: Not Supported 00:44:29.091 Vendor Specific: Not Supported 00:44:29.091 Reset Timeout: 7500 ms 00:44:29.091 Doorbell Stride: 4 bytes 00:44:29.091 NVM Subsystem Reset: Not Supported 00:44:29.091 Command Sets Supported 00:44:29.091 NVM Command Set: Supported 00:44:29.091 Boot Partition: Not Supported 00:44:29.091 Memory Page Size Minimum: 4096 bytes 00:44:29.091 Memory Page Size Maximum: 65536 bytes 00:44:29.091 Persistent Memory Region: Not Supported 00:44:29.091 Optional Asynchronous Events Supported 00:44:29.091 Namespace Attribute Notices: Supported 00:44:29.091 Firmware Activation Notices: Not Supported 00:44:29.091 ANA Change Notices: Not Supported 00:44:29.091 PLE Aggregate Log Change Notices: Not Supported 00:44:29.091 LBA Status Info Alert Notices: Not Supported 00:44:29.091 EGE Aggregate Log Change Notices: Not Supported 00:44:29.091 Normal NVM Subsystem Shutdown event: Not Supported 00:44:29.091 Zone Descriptor Change Notices: Not Supported 00:44:29.091 Discovery Log Change Notices: Not Supported 00:44:29.091 Controller Attributes 00:44:29.091 128-bit Host Identifier: Not Supported 00:44:29.091 Non-Operational Permissive Mode: Not Supported 00:44:29.091 NVM Sets: Not Supported 00:44:29.091 Read Recovery Levels: Not Supported 00:44:29.091 Endurance Groups: Not Supported 00:44:29.091 Predictable Latency Mode: Not Supported 00:44:29.091 Traffic Based Keep ALive: Not Supported 00:44:29.091 Namespace Granularity: Not Supported 00:44:29.091 SQ Associations: Not Supported 00:44:29.091 UUID List: Not Supported 00:44:29.091 Multi-Domain Subsystem: Not Supported 00:44:29.091 Fixed Capacity Management: Not Supported 00:44:29.091 Variable Capacity Management: Not Supported 00:44:29.091 Delete Endurance Group: Not Supported 00:44:29.091 Delete NVM Set: Not Supported 00:44:29.091 Extended LBA Formats Supported: Supported 00:44:29.091 Flexible Data Placement Supported: Not Supported 00:44:29.091 00:44:29.091 Controller Memory Buffer Support 00:44:29.091 
================================ 00:44:29.091 Supported: No 00:44:29.091 00:44:29.091 Persistent Memory Region Support 00:44:29.091 ================================ 00:44:29.091 Supported: No 00:44:29.091 00:44:29.091 Admin Command Set Attributes 00:44:29.091 ============================ 00:44:29.091 Security Send/Receive: Not Supported 00:44:29.091 Format NVM: Supported 00:44:29.091 Firmware Activate/Download: Not Supported 00:44:29.091 Namespace Management: Supported 00:44:29.091 Device Self-Test: Not Supported 00:44:29.091 Directives: Supported 00:44:29.091 NVMe-MI: Not Supported 00:44:29.091 Virtualization Management: Not Supported 00:44:29.091 Doorbell Buffer Config: Supported 00:44:29.091 Get LBA Status Capability: Not Supported 00:44:29.091 Command & Feature Lockdown Capability: Not Supported 00:44:29.091 Abort Command Limit: 4 00:44:29.091 Async Event Request Limit: 4 00:44:29.091 Number of Firmware Slots: N/A 00:44:29.091 Firmware Slot 1 Read-Only: N/A 00:44:29.091 Firmware Activation Without Reset: N/A 00:44:29.091 Multiple Update Detection Support: N/A 00:44:29.091 Firmware Update Granularity: No Information Provided 00:44:29.091 Per-Namespace SMART Log: Yes 00:44:29.091 Asymmetric Namespace Access Log Page: Not Supported 00:44:29.091 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:44:29.091 Command Effects Log Page: Supported 00:44:29.091 Get Log Page Extended Data: Supported 00:44:29.091 Telemetry Log Pages: Not Supported 00:44:29.091 Persistent Event Log Pages: Not Supported 00:44:29.091 Supported Log Pages Log Page: May Support 00:44:29.091 Commands Supported & Effects Log Page: Not Supported 00:44:29.091 Feature Identifiers & Effects Log Page:May Support 00:44:29.091 NVMe-MI Commands & Effects Log Page: May Support 00:44:29.091 Data Area 4 for Telemetry Log: Not Supported 00:44:29.091 Error Log Page Entries Supported: 1 00:44:29.091 Keep Alive: Not Supported 00:44:29.091 00:44:29.091 NVM Command Set Attributes 00:44:29.091 ========================== 00:44:29.091 Submission Queue Entry Size 00:44:29.091 Max: 64 00:44:29.091 Min: 64 00:44:29.091 Completion Queue Entry Size 00:44:29.091 Max: 16 00:44:29.091 Min: 16 00:44:29.091 Number of Namespaces: 256 00:44:29.091 Compare Command: Supported 00:44:29.091 Write Uncorrectable Command: Not Supported 00:44:29.091 Dataset Management Command: Supported 00:44:29.091 Write Zeroes Command: Supported 00:44:29.091 Set Features Save Field: Supported 00:44:29.091 Reservations: Not Supported 00:44:29.091 Timestamp: Supported 00:44:29.091 Copy: Supported 00:44:29.091 Volatile Write Cache: Present 00:44:29.091 Atomic Write Unit (Normal): 1 00:44:29.091 Atomic Write Unit (PFail): 1 00:44:29.091 Atomic Compare & Write Unit: 1 00:44:29.091 Fused Compare & Write: Not Supported 00:44:29.091 Scatter-Gather List 00:44:29.091 SGL Command Set: Supported 00:44:29.091 SGL Keyed: Not Supported 00:44:29.091 SGL Bit Bucket Descriptor: Not Supported 00:44:29.091 SGL Metadata Pointer: Not Supported 00:44:29.091 Oversized SGL: Not Supported 00:44:29.091 SGL Metadata Address: Not Supported 00:44:29.091 SGL Offset: Not Supported 00:44:29.091 Transport SGL Data Block: Not Supported 00:44:29.091 Replay Protected Memory Block: Not Supported 00:44:29.091 00:44:29.091 Firmware Slot Information 00:44:29.091 ========================= 00:44:29.091 Active slot: 1 00:44:29.091 Slot 1 Firmware Revision: 1.0 00:44:29.091 00:44:29.091 00:44:29.091 Commands Supported and Effects 00:44:29.091 ============================== 00:44:29.091 Admin Commands 00:44:29.091 -------------- 
00:44:29.091 Delete I/O Submission Queue (00h): Supported 00:44:29.091 Create I/O Submission Queue (01h): Supported 00:44:29.091 Get Log Page (02h): Supported 00:44:29.091 Delete I/O Completion Queue (04h): Supported 00:44:29.091 Create I/O Completion Queue (05h): Supported 00:44:29.091 Identify (06h): Supported 00:44:29.091 Abort (08h): Supported 00:44:29.091 Set Features (09h): Supported 00:44:29.091 Get Features (0Ah): Supported 00:44:29.091 Asynchronous Event Request (0Ch): Supported 00:44:29.091 Namespace Attachment (15h): Supported NS-Inventory-Change 00:44:29.091 Directive Send (19h): Supported 00:44:29.091 Directive Receive (1Ah): Supported 00:44:29.091 Virtualization Management (1Ch): Supported 00:44:29.091 Doorbell Buffer Config (7Ch): Supported 00:44:29.091 Format NVM (80h): Supported LBA-Change 00:44:29.091 I/O Commands 00:44:29.091 ------------ 00:44:29.091 Flush (00h): Supported LBA-Change 00:44:29.091 Write (01h): Supported LBA-Change 00:44:29.091 Read (02h): Supported 00:44:29.091 Compare (05h): Supported 00:44:29.091 Write Zeroes (08h): Supported LBA-Change 00:44:29.091 Dataset Management (09h): Supported LBA-Change 00:44:29.091 Unknown (0Ch): Supported 00:44:29.091 Unknown (12h): Supported 00:44:29.091 Copy (19h): Supported LBA-Change 00:44:29.091 Unknown (1Dh): Supported LBA-Change 00:44:29.091 00:44:29.092 Error Log 00:44:29.092 ========= 00:44:29.092 00:44:29.092 Arbitration 00:44:29.092 =========== 00:44:29.092 Arbitration Burst: no limit 00:44:29.092 00:44:29.092 Power Management 00:44:29.092 ================ 00:44:29.092 Number of Power States: 1 00:44:29.092 Current Power State: Power State #0 00:44:29.092 Power State #0: 00:44:29.092 Max Power: 25.00 W 00:44:29.092 Non-Operational State: Operational 00:44:29.092 Entry Latency: 16 microseconds 00:44:29.092 Exit Latency: 4 microseconds 00:44:29.092 Relative Read Throughput: 0 00:44:29.092 Relative Read Latency: 0 00:44:29.092 Relative Write Throughput: 0 00:44:29.092 Relative Write Latency: 0 00:44:29.092 Idle Power: Not Reported 00:44:29.092 Active Power: Not Reported 00:44:29.092 Non-Operational Permissive Mode: Not Supported 00:44:29.092 00:44:29.092 Health Information 00:44:29.092 ================== 00:44:29.092 Critical Warnings: 00:44:29.092 Available Spare Space: OK 00:44:29.092 Temperature: OK 00:44:29.092 Device Reliability: OK 00:44:29.092 Read Only: No 00:44:29.092 Volatile Memory Backup: OK 00:44:29.092 Current Temperature: 323 Kelvin (50 Celsius) 00:44:29.092 Temperature Threshold: 343 Kelvin (70 Celsius) 00:44:29.092 Available Spare: 0% 00:44:29.092 Available Spare Threshold: 0% 00:44:29.092 Life Percentage Used: 0% 00:44:29.092 Data Units Read: 4481 00:44:29.092 Data Units Written: 4134 00:44:29.092 Host Read Commands: 236131 00:44:29.092 Host Write Commands: 249039 00:44:29.092 Controller Busy Time: 0 minutes 00:44:29.092 Power Cycles: 0 00:44:29.092 Power On Hours: 0 hours 00:44:29.092 Unsafe Shutdowns: 0 00:44:29.092 Unrecoverable Media Errors: 0 00:44:29.092 Lifetime Error Log Entries: 0 00:44:29.092 Warning Temperature Time: 0 minutes 00:44:29.092 Critical Temperature Time: 0 minutes 00:44:29.092 00:44:29.092 Number of Queues 00:44:29.092 ================ 00:44:29.092 Number of I/O Submission Queues: 64 00:44:29.092 Number of I/O Completion Queues: 64 00:44:29.092 00:44:29.092 ZNS Specific Controller Data 00:44:29.092 ============================ 00:44:29.092 Zone Append Size Limit: 0 00:44:29.092 00:44:29.092 00:44:29.092 Active Namespaces 00:44:29.092 ================= 00:44:29.092 Namespace 
ID:1 00:44:29.092 Error Recovery Timeout: Unlimited 00:44:29.092 Command Set Identifier: NVM (00h) 00:44:29.092 Deallocate: Supported 00:44:29.092 Deallocated/Unwritten Error: Supported 00:44:29.092 Deallocated Read Value: All 0x00 00:44:29.092 Deallocate in Write Zeroes: Not Supported 00:44:29.092 Deallocated Guard Field: 0xFFFF 00:44:29.092 Flush: Supported 00:44:29.092 Reservation: Not Supported 00:44:29.092 Namespace Sharing Capabilities: Private 00:44:29.092 Size (in LBAs): 1310720 (5GiB) 00:44:29.092 Capacity (in LBAs): 1310720 (5GiB) 00:44:29.092 Utilization (in LBAs): 1310720 (5GiB) 00:44:29.092 Thin Provisioning: Not Supported 00:44:29.092 Per-NS Atomic Units: No 00:44:29.092 Maximum Single Source Range Length: 128 00:44:29.092 Maximum Copy Length: 128 00:44:29.092 Maximum Source Range Count: 128 00:44:29.092 NGUID/EUI64 Never Reused: No 00:44:29.092 Namespace Write Protected: No 00:44:29.092 Number of LBA Formats: 8 00:44:29.092 Current LBA Format: LBA Format #04 00:44:29.092 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:29.092 LBA Format #01: Data Size: 512 Metadata Size: 8 00:44:29.092 LBA Format #02: Data Size: 512 Metadata Size: 16 00:44:29.092 LBA Format #03: Data Size: 512 Metadata Size: 64 00:44:29.092 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:44:29.092 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:44:29.092 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:44:29.092 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:44:29.092 00:44:29.092 NVM Specific Namespace Data 00:44:29.092 =========================== 00:44:29.092 Logical Block Storage Tag Mask: 0 00:44:29.092 Protection Information Capabilities: 00:44:29.092 16b Guard Protection Information Storage Tag Support: No 00:44:29.092 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:44:29.092 Storage Tag Check Read Support: No 00:44:29.092 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:29.092 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:29.092 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:29.092 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:29.092 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:29.092 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:29.092 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:29.092 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:44:29.092 ************************************ 00:44:29.092 END TEST nvme_identify 00:44:29.092 ************************************ 00:44:29.092 00:44:29.092 real 0m0.688s 00:44:29.092 user 0m0.254s 00:44:29.092 sys 0m0.314s 00:44:29.092 21:58:02 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:29.092 21:58:02 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:44:29.092 21:58:02 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:29.092 21:58:02 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:44:29.092 21:58:02 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:29.092 21:58:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:29.092 21:58:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:29.092 
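The two identify dumps above both report the namespace as 1310720 LBAs with LBA Format #04 selected (4096-byte data blocks, no metadata), which is where the "(5GiB)" annotations come from. As a quick cross-check of that arithmetic, here is a minimal bash sketch; it is not part of nvme.sh or the SPDK tooling, and the variable names are illustrative only:

#!/usr/bin/env bash
# Hypothetical sanity check, not part of the test scripts: confirm that the
# namespace size reported by spdk_nvme_identify (1310720 LBAs at the current
# LBA Format #04, i.e. 4096-byte data blocks) matches the "(5GiB)" label above.
lbas=1310720        # "Size (in LBAs)" from the identify output
block_size=4096     # "Data Size" of LBA Format #04
bytes=$(( lbas * block_size ))
echo "namespace: ${bytes} bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"   # prints 5 GiB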
************************************ 00:44:29.092 START TEST nvme_perf 00:44:29.092 ************************************ 00:44:29.092 21:58:02 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:44:29.092 21:58:02 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:44:30.466 Initializing NVMe Controllers 00:44:30.466 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:30.466 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:30.466 Initialization complete. Launching workers. 00:44:30.466 ======================================================== 00:44:30.466 Latency(us) 00:44:30.466 Device Information : IOPS MiB/s Average min max 00:44:30.466 PCIE (0000:00:10.0) NSID 1 from core 0: 97142.35 1138.39 1316.98 565.57 6446.96 00:44:30.466 ======================================================== 00:44:30.466 Total : 97142.35 1138.39 1316.98 565.57 6446.96 00:44:30.466 00:44:30.466 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:30.466 ================================================================================= 00:44:30.466 1.00000% : 676.108us 00:44:30.466 10.00000% : 794.159us 00:44:30.466 25.00000% : 958.714us 00:44:30.466 50.00000% : 1209.125us 00:44:30.466 75.00000% : 1459.535us 00:44:30.466 90.00000% : 2003.284us 00:44:30.466 95.00000% : 2532.723us 00:44:30.466 98.00000% : 3105.090us 00:44:30.466 99.00000% : 3520.056us 00:44:30.466 99.50000% : 3949.331us 00:44:30.466 99.90000% : 4722.026us 00:44:30.466 99.99000% : 6181.562us 00:44:30.466 99.99900% : 6467.745us 00:44:30.466 99.99990% : 6467.745us 00:44:30.466 99.99999% : 6467.745us 00:44:30.466 00:44:30.466 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:30.466 ============================================================================== 00:44:30.466 Range in us Cumulative IO count 00:44:30.466 565.212 - 568.790: 0.0010% ( 1) 00:44:30.466 568.790 - 572.367: 0.0021% ( 1) 00:44:30.466 572.367 - 575.944: 0.0051% ( 3) 00:44:30.466 575.944 - 579.521: 0.0072% ( 2) 00:44:30.466 583.099 - 586.676: 0.0082% ( 1) 00:44:30.466 586.676 - 590.253: 0.0103% ( 2) 00:44:30.466 590.253 - 593.831: 0.0175% ( 7) 00:44:30.466 593.831 - 597.408: 0.0247% ( 7) 00:44:30.466 597.408 - 600.985: 0.0309% ( 6) 00:44:30.466 600.985 - 604.562: 0.0442% ( 13) 00:44:30.466 604.562 - 608.140: 0.0525% ( 8) 00:44:30.466 608.140 - 611.717: 0.0648% ( 12) 00:44:30.466 611.717 - 615.294: 0.0772% ( 12) 00:44:30.466 615.294 - 618.872: 0.0978% ( 20) 00:44:30.466 618.872 - 622.449: 0.1163% ( 18) 00:44:30.466 622.449 - 626.026: 0.1430% ( 26) 00:44:30.466 626.026 - 629.603: 0.1791% ( 35) 00:44:30.466 629.603 - 633.181: 0.2058% ( 26) 00:44:30.466 633.181 - 636.758: 0.2346% ( 28) 00:44:30.466 636.758 - 640.335: 0.2676% ( 32) 00:44:30.466 640.335 - 643.913: 0.3231% ( 54) 00:44:30.466 643.913 - 647.490: 0.3725% ( 48) 00:44:30.466 647.490 - 651.067: 0.4240% ( 50) 00:44:30.466 651.067 - 654.645: 0.4867% ( 61) 00:44:30.466 654.645 - 658.222: 0.5598% ( 71) 00:44:30.466 658.222 - 661.799: 0.6504% ( 88) 00:44:30.466 661.799 - 665.376: 0.7327% ( 80) 00:44:30.466 665.376 - 668.954: 0.8335% ( 98) 00:44:30.466 668.954 - 672.531: 0.9704% ( 133) 00:44:30.466 672.531 - 676.108: 1.1042% ( 130) 00:44:30.466 676.108 - 679.686: 1.2287% ( 121) 00:44:30.466 679.686 - 683.263: 1.3594% ( 127) 00:44:30.466 683.263 - 686.840: 1.5333% ( 169) 00:44:30.466 686.840 - 690.417: 1.6897% ( 152) 00:44:30.466 690.417 - 693.995: 1.8790% ( 184) 00:44:30.466 693.995 - 
697.572: 2.0673% ( 183) 00:44:30.466 697.572 - 701.149: 2.2742% ( 201) 00:44:30.466 701.149 - 704.727: 2.4779% ( 198) 00:44:30.466 704.727 - 708.304: 2.7403% ( 255) 00:44:30.466 708.304 - 711.881: 2.9400% ( 194) 00:44:30.466 711.881 - 715.459: 3.1859% ( 239) 00:44:30.466 715.459 - 719.036: 3.4442% ( 251) 00:44:30.466 719.036 - 722.613: 3.7138% ( 262) 00:44:30.466 722.613 - 726.190: 3.9639% ( 243) 00:44:30.466 726.190 - 729.768: 4.2540% ( 282) 00:44:30.466 729.768 - 733.345: 4.5566% ( 294) 00:44:30.466 733.345 - 736.922: 4.8643% ( 299) 00:44:30.466 736.922 - 740.500: 5.1658% ( 293) 00:44:30.466 740.500 - 744.077: 5.4673% ( 293) 00:44:30.466 744.077 - 747.654: 5.7976% ( 321) 00:44:30.466 747.654 - 751.231: 6.1310% ( 324) 00:44:30.466 751.231 - 754.809: 6.4500% ( 310) 00:44:30.466 754.809 - 758.386: 6.7701% ( 311) 00:44:30.466 758.386 - 761.963: 7.1261% ( 346) 00:44:30.466 761.963 - 765.541: 7.4523% ( 317) 00:44:30.466 765.541 - 769.118: 7.7991% ( 337) 00:44:30.466 769.118 - 772.695: 8.1428% ( 334) 00:44:30.466 772.695 - 776.272: 8.5081% ( 355) 00:44:30.466 776.272 - 779.850: 8.8559% ( 338) 00:44:30.466 779.850 - 783.427: 9.1790% ( 314) 00:44:30.466 783.427 - 787.004: 9.5166% ( 328) 00:44:30.466 787.004 - 790.582: 9.8819% ( 355) 00:44:30.466 790.582 - 794.159: 10.2585% ( 366) 00:44:30.466 794.159 - 797.736: 10.5909% ( 323) 00:44:30.466 797.736 - 801.314: 10.9274% ( 327) 00:44:30.466 801.314 - 804.891: 11.2917% ( 354) 00:44:30.466 804.891 - 808.468: 11.6487% ( 347) 00:44:30.466 808.468 - 812.045: 11.9718% ( 314) 00:44:30.466 812.045 - 815.623: 12.3392% ( 357) 00:44:30.466 815.623 - 819.200: 12.6809% ( 332) 00:44:30.466 819.200 - 822.777: 13.0081% ( 318) 00:44:30.466 822.777 - 826.355: 13.3652% ( 347) 00:44:30.466 826.355 - 829.932: 13.6780% ( 304) 00:44:30.466 829.932 - 833.509: 14.0238% ( 336) 00:44:30.466 833.509 - 837.086: 14.3716% ( 338) 00:44:30.466 837.086 - 840.664: 14.7307% ( 349) 00:44:30.466 840.664 - 844.241: 15.0178% ( 279) 00:44:30.466 844.241 - 847.818: 15.3409% ( 314) 00:44:30.466 847.818 - 851.396: 15.6918% ( 341) 00:44:30.466 851.396 - 854.973: 16.0067% ( 306) 00:44:30.466 854.973 - 858.550: 16.3092% ( 294) 00:44:30.466 858.550 - 862.128: 16.6715% ( 352) 00:44:30.466 862.128 - 865.705: 16.9987% ( 318) 00:44:30.466 865.705 - 869.282: 17.3012% ( 294) 00:44:30.466 869.282 - 872.859: 17.6357% ( 325) 00:44:30.466 872.859 - 876.437: 17.9753% ( 330) 00:44:30.466 876.437 - 880.014: 18.3179% ( 333) 00:44:30.466 880.014 - 883.591: 18.6277% ( 301) 00:44:30.466 883.591 - 887.169: 18.9477% ( 311) 00:44:30.466 887.169 - 890.746: 19.3027% ( 345) 00:44:30.466 890.746 - 894.323: 19.5960% ( 285) 00:44:30.466 894.323 - 897.900: 19.9284% ( 323) 00:44:30.466 897.900 - 901.478: 20.2772% ( 339) 00:44:30.466 901.478 - 905.055: 20.6168% ( 330) 00:44:30.466 905.055 - 908.632: 20.8916% ( 267) 00:44:30.466 908.632 - 912.210: 21.2507% ( 349) 00:44:30.466 912.210 - 915.787: 21.5913% ( 331) 00:44:30.466 915.787 - 922.941: 22.1954% ( 587) 00:44:30.466 922.941 - 930.096: 22.8982% ( 683) 00:44:30.466 930.096 - 937.251: 23.5382% ( 622) 00:44:30.466 937.251 - 944.405: 24.2277% ( 670) 00:44:30.466 944.405 - 951.560: 24.8637% ( 618) 00:44:30.466 951.560 - 958.714: 25.5603% ( 677) 00:44:30.466 958.714 - 965.869: 26.2271% ( 648) 00:44:30.466 965.869 - 973.024: 26.8929% ( 647) 00:44:30.466 973.024 - 980.178: 27.5916% ( 679) 00:44:30.466 980.178 - 987.333: 28.2513% ( 641) 00:44:30.466 987.333 - 994.487: 28.9664% ( 695) 00:44:30.466 994.487 - 1001.642: 29.6230% ( 638) 00:44:30.466 1001.642 - 1008.797: 30.3649% ( 721) 
00:44:30.466 1008.797 - 1015.951: 31.0163% ( 633) 00:44:30.466 1015.951 - 1023.106: 31.7418% ( 705) 00:44:30.466 1023.106 - 1030.260: 32.4281% ( 667) 00:44:30.466 1030.260 - 1037.415: 33.1464% ( 698) 00:44:30.466 1037.415 - 1044.569: 33.8616% ( 695) 00:44:30.466 1044.569 - 1051.724: 34.5572% ( 676) 00:44:30.466 1051.724 - 1058.879: 35.2919% ( 714) 00:44:30.466 1058.879 - 1066.033: 35.9999% ( 688) 00:44:30.466 1066.033 - 1073.188: 36.7336% ( 713) 00:44:30.466 1073.188 - 1080.342: 37.4529% ( 699) 00:44:30.466 1080.342 - 1087.497: 38.1640% ( 691) 00:44:30.466 1087.497 - 1094.652: 38.8771% ( 693) 00:44:30.466 1094.652 - 1101.806: 39.6016% ( 704) 00:44:30.466 1101.806 - 1108.961: 40.3157% ( 694) 00:44:30.466 1108.961 - 1116.115: 41.0381% ( 702) 00:44:30.466 1116.115 - 1123.270: 41.7646% ( 706) 00:44:30.466 1123.270 - 1130.424: 42.4839% ( 699) 00:44:30.466 1130.424 - 1137.579: 43.2083% ( 704) 00:44:30.466 1137.579 - 1144.734: 43.9348% ( 706) 00:44:30.466 1144.734 - 1151.888: 44.6696% ( 714) 00:44:30.466 1151.888 - 1159.043: 45.3817% ( 692) 00:44:30.466 1159.043 - 1166.197: 46.1246% ( 722) 00:44:30.466 1166.197 - 1173.352: 46.8316% ( 687) 00:44:30.466 1173.352 - 1180.507: 47.5715% ( 719) 00:44:30.466 1180.507 - 1187.661: 48.2702% ( 679) 00:44:30.466 1187.661 - 1194.816: 49.0234% ( 732) 00:44:30.466 1194.816 - 1201.970: 49.7376% ( 694) 00:44:30.466 1201.970 - 1209.125: 50.4569% ( 699) 00:44:30.466 1209.125 - 1216.279: 51.1988% ( 721) 00:44:30.466 1216.279 - 1223.434: 51.8965% ( 678) 00:44:30.466 1223.434 - 1230.589: 52.6333% ( 716) 00:44:30.466 1230.589 - 1237.743: 53.3526% ( 699) 00:44:30.466 1237.743 - 1244.898: 54.0740% ( 701) 00:44:30.466 1244.898 - 1252.052: 54.8025% ( 708) 00:44:30.466 1252.052 - 1259.207: 55.5259% ( 703) 00:44:30.466 1259.207 - 1266.362: 56.2566% ( 710) 00:44:30.466 1266.362 - 1273.516: 56.9676% ( 691) 00:44:30.466 1273.516 - 1280.671: 57.6972% ( 709) 00:44:30.466 1280.671 - 1287.825: 58.3826% ( 666) 00:44:30.466 1287.825 - 1294.980: 59.1163% ( 713) 00:44:30.466 1294.980 - 1302.134: 59.7944% ( 659) 00:44:30.466 1302.134 - 1309.289: 60.5353% ( 720) 00:44:30.466 1309.289 - 1316.444: 61.2361% ( 681) 00:44:30.466 1316.444 - 1323.598: 61.9698% ( 713) 00:44:30.466 1323.598 - 1330.753: 62.6747% ( 685) 00:44:30.466 1330.753 - 1337.907: 63.3960% ( 701) 00:44:30.466 1337.907 - 1345.062: 64.1215% ( 705) 00:44:30.466 1345.062 - 1352.217: 64.8243% ( 683) 00:44:30.466 1352.217 - 1359.371: 65.5714% ( 726) 00:44:30.466 1359.371 - 1366.526: 66.2516% ( 661) 00:44:30.466 1366.526 - 1373.680: 66.9966% ( 724) 00:44:30.466 1373.680 - 1380.835: 67.6954% ( 679) 00:44:30.466 1380.835 - 1387.990: 68.4342% ( 718) 00:44:30.466 1387.990 - 1395.144: 69.1237% ( 670) 00:44:30.466 1395.144 - 1402.299: 69.8687% ( 724) 00:44:30.466 1402.299 - 1409.453: 70.5839% ( 695) 00:44:30.466 1409.453 - 1416.608: 71.2949% ( 691) 00:44:30.466 1416.608 - 1423.762: 72.0276% ( 712) 00:44:30.466 1423.762 - 1430.917: 72.7212% ( 674) 00:44:30.466 1430.917 - 1438.072: 73.4611% ( 719) 00:44:30.466 1438.072 - 1445.226: 74.1423% ( 662) 00:44:30.466 1445.226 - 1452.381: 74.8564% ( 694) 00:44:30.466 1452.381 - 1459.535: 75.5274% ( 652) 00:44:30.466 1459.535 - 1466.690: 76.2384% ( 691) 00:44:30.466 1466.690 - 1473.845: 76.8765% ( 620) 00:44:30.466 1473.845 - 1480.999: 77.5494% ( 654) 00:44:30.466 1480.999 - 1488.154: 78.1679% ( 601) 00:44:30.466 1488.154 - 1495.308: 78.7678% ( 583) 00:44:30.466 1495.308 - 1502.463: 79.3636% ( 579) 00:44:30.466 1502.463 - 1509.617: 79.9039% ( 525) 00:44:30.466 1509.617 - 1516.772: 80.4441% ( 525) 
00:44:30.466 1516.772 - 1523.927: 80.9288% ( 471) 00:44:30.466 1523.927 - 1531.081: 81.3919% ( 450) 00:44:30.466 1531.081 - 1538.236: 81.8076% ( 404) 00:44:30.466 1538.236 - 1545.390: 82.2192% ( 400) 00:44:30.466 1545.390 - 1552.545: 82.5629% ( 334) 00:44:30.466 1552.545 - 1559.700: 82.8974% ( 325) 00:44:30.466 1559.700 - 1566.854: 83.2164% ( 310) 00:44:30.466 1566.854 - 1574.009: 83.5035% ( 279) 00:44:30.466 1574.009 - 1581.163: 83.7803% ( 269) 00:44:30.466 1581.163 - 1588.318: 84.0118% ( 225) 00:44:30.466 1588.318 - 1595.472: 84.2475% ( 229) 00:44:30.466 1595.472 - 1602.627: 84.4461% ( 193) 00:44:30.466 1602.627 - 1609.782: 84.6550% ( 203) 00:44:30.466 1609.782 - 1616.936: 84.8289% ( 169) 00:44:30.466 1616.936 - 1624.091: 84.9925% ( 159) 00:44:30.466 1624.091 - 1631.245: 85.1355% ( 139) 00:44:30.466 1631.245 - 1638.400: 85.2796% ( 140) 00:44:30.466 1638.400 - 1645.555: 85.4103% ( 127) 00:44:30.466 1645.555 - 1652.709: 85.5338% ( 120) 00:44:30.466 1652.709 - 1659.864: 85.6583% ( 121) 00:44:30.466 1659.864 - 1667.018: 85.7787% ( 117) 00:44:30.466 1667.018 - 1674.173: 85.8847% ( 103) 00:44:30.466 1674.173 - 1681.328: 86.0040% ( 116) 00:44:30.466 1681.328 - 1688.482: 86.1049% ( 98) 00:44:30.466 1688.482 - 1695.637: 86.2284% ( 120) 00:44:30.466 1695.637 - 1702.791: 86.3323% ( 101) 00:44:30.466 1702.791 - 1709.946: 86.4342% ( 99) 00:44:30.466 1709.946 - 1717.100: 86.5433% ( 106) 00:44:30.466 1717.100 - 1724.255: 86.6390% ( 93) 00:44:30.466 1724.255 - 1731.410: 86.7470% ( 105) 00:44:30.466 1731.410 - 1738.564: 86.8499% ( 100) 00:44:30.466 1738.564 - 1745.719: 86.9497% ( 97) 00:44:30.466 1745.719 - 1752.873: 87.0619% ( 109) 00:44:30.466 1752.873 - 1760.028: 87.1514% ( 87) 00:44:30.466 1760.028 - 1767.183: 87.2502% ( 96) 00:44:30.466 1767.183 - 1774.337: 87.3356% ( 83) 00:44:30.466 1774.337 - 1781.492: 87.4406% ( 102) 00:44:30.466 1781.492 - 1788.646: 87.5280% ( 85) 00:44:30.466 1788.646 - 1795.801: 87.6258% ( 95) 00:44:30.466 1795.801 - 1802.955: 87.7194% ( 91) 00:44:30.466 1802.955 - 1810.110: 87.7956% ( 74) 00:44:30.466 1810.110 - 1817.265: 87.8964% ( 98) 00:44:30.466 1817.265 - 1824.419: 87.9695% ( 71) 00:44:30.467 1824.419 - 1831.574: 88.0580% ( 86) 00:44:30.467 1831.574 - 1845.883: 88.2237% ( 161) 00:44:30.467 1845.883 - 1860.192: 88.3976% ( 169) 00:44:30.467 1860.192 - 1874.501: 88.5612% ( 159) 00:44:30.467 1874.501 - 1888.810: 88.7413% ( 175) 00:44:30.467 1888.810 - 1903.120: 88.9039% ( 158) 00:44:30.467 1903.120 - 1917.429: 89.0767% ( 168) 00:44:30.467 1917.429 - 1931.738: 89.2465% ( 165) 00:44:30.467 1931.738 - 1946.047: 89.4112% ( 160) 00:44:30.467 1946.047 - 1960.356: 89.5779% ( 162) 00:44:30.467 1960.356 - 1974.666: 89.7364% ( 154) 00:44:30.467 1974.666 - 1988.975: 89.8979% ( 157) 00:44:30.467 1988.975 - 2003.284: 90.0512% ( 149) 00:44:30.467 2003.284 - 2017.593: 90.1953% ( 140) 00:44:30.467 2017.593 - 2031.902: 90.3394% ( 140) 00:44:30.467 2031.902 - 2046.211: 90.4865% ( 143) 00:44:30.467 2046.211 - 2060.521: 90.6203% ( 130) 00:44:30.467 2060.521 - 2074.830: 90.7767% ( 152) 00:44:30.467 2074.830 - 2089.139: 90.9105% ( 130) 00:44:30.467 2089.139 - 2103.448: 91.0587% ( 144) 00:44:30.467 2103.448 - 2117.757: 91.1976% ( 135) 00:44:30.467 2117.757 - 2132.066: 91.3386% ( 137) 00:44:30.467 2132.066 - 2146.376: 91.4785% ( 136) 00:44:30.467 2146.376 - 2160.685: 91.6195% ( 137) 00:44:30.467 2160.685 - 2174.994: 91.7564% ( 133) 00:44:30.467 2174.994 - 2189.303: 91.8912% ( 131) 00:44:30.467 2189.303 - 2203.612: 92.0208% ( 126) 00:44:30.467 2203.612 - 2217.921: 92.1628% ( 138) 00:44:30.467 
2217.921 - 2232.231: 92.2915% ( 125) 00:44:30.467 2232.231 - 2246.540: 92.4263% ( 131) 00:44:30.467 2246.540 - 2260.849: 92.5559% ( 126) 00:44:30.467 2260.849 - 2275.158: 92.6835% ( 124) 00:44:30.467 2275.158 - 2289.467: 92.8173% ( 130) 00:44:30.467 2289.467 - 2303.776: 92.9428% ( 122) 00:44:30.467 2303.776 - 2318.086: 93.0828% ( 136) 00:44:30.467 2318.086 - 2332.395: 93.2073% ( 121) 00:44:30.467 2332.395 - 2346.704: 93.3359% ( 125) 00:44:30.467 2346.704 - 2361.013: 93.4656% ( 126) 00:44:30.467 2361.013 - 2375.322: 93.5953% ( 126) 00:44:30.467 2375.322 - 2389.631: 93.7229% ( 124) 00:44:30.467 2389.631 - 2403.941: 93.8505% ( 124) 00:44:30.467 2403.941 - 2418.250: 93.9801% ( 126) 00:44:30.467 2418.250 - 2432.559: 94.1087% ( 125) 00:44:30.467 2432.559 - 2446.868: 94.2374% ( 125) 00:44:30.467 2446.868 - 2461.177: 94.3640% ( 123) 00:44:30.467 2461.177 - 2475.486: 94.4998% ( 132) 00:44:30.467 2475.486 - 2489.796: 94.6284% ( 125) 00:44:30.467 2489.796 - 2504.105: 94.7560% ( 124) 00:44:30.467 2504.105 - 2518.414: 94.8785% ( 119) 00:44:30.467 2518.414 - 2532.723: 95.0112% ( 129) 00:44:30.467 2532.723 - 2547.032: 95.1388% ( 124) 00:44:30.467 2547.032 - 2561.341: 95.2654% ( 123) 00:44:30.467 2561.341 - 2575.651: 95.3930% ( 124) 00:44:30.467 2575.651 - 2589.960: 95.5134% ( 117) 00:44:30.467 2589.960 - 2604.269: 95.6410% ( 124) 00:44:30.467 2604.269 - 2618.578: 95.7717% ( 127) 00:44:30.467 2618.578 - 2632.887: 95.8828% ( 108) 00:44:30.467 2632.887 - 2647.197: 96.0114% ( 125) 00:44:30.467 2647.197 - 2661.506: 96.1205% ( 106) 00:44:30.467 2661.506 - 2675.815: 96.2183% ( 95) 00:44:30.467 2675.815 - 2690.124: 96.3304% ( 109) 00:44:30.467 2690.124 - 2704.433: 96.4282% ( 95) 00:44:30.467 2704.433 - 2718.742: 96.5239% ( 93) 00:44:30.467 2718.742 - 2733.052: 96.5980% ( 72) 00:44:30.467 2733.052 - 2747.361: 96.6721% ( 72) 00:44:30.467 2747.361 - 2761.670: 96.7472% ( 73) 00:44:30.467 2761.670 - 2775.979: 96.8141% ( 65) 00:44:30.467 2775.979 - 2790.288: 96.8758% ( 60) 00:44:30.467 2790.288 - 2804.597: 96.9324% ( 55) 00:44:30.467 2804.597 - 2818.907: 96.9890% ( 55) 00:44:30.467 2818.907 - 2833.216: 97.0497% ( 59) 00:44:30.467 2833.216 - 2847.525: 97.1043% ( 53) 00:44:30.467 2847.525 - 2861.834: 97.1629% ( 57) 00:44:30.467 2861.834 - 2876.143: 97.2237% ( 59) 00:44:30.467 2876.143 - 2890.452: 97.2772% ( 52) 00:44:30.467 2890.452 - 2904.762: 97.3327% ( 54) 00:44:30.467 2904.762 - 2919.071: 97.3893% ( 55) 00:44:30.467 2919.071 - 2933.380: 97.4459% ( 55) 00:44:30.467 2933.380 - 2947.689: 97.5025% ( 55) 00:44:30.467 2947.689 - 2961.998: 97.5540% ( 50) 00:44:30.467 2961.998 - 2976.307: 97.6116% ( 56) 00:44:30.467 2976.307 - 2990.617: 97.6600% ( 47) 00:44:30.467 2990.617 - 3004.926: 97.7135% ( 52) 00:44:30.467 3004.926 - 3019.235: 97.7649% ( 50) 00:44:30.467 3019.235 - 3033.544: 97.8133% ( 47) 00:44:30.467 3033.544 - 3047.853: 97.8627% ( 48) 00:44:30.467 3047.853 - 3062.162: 97.9131% ( 49) 00:44:30.467 3062.162 - 3076.472: 97.9563% ( 42) 00:44:30.467 3076.472 - 3090.781: 97.9995% ( 42) 00:44:30.467 3090.781 - 3105.090: 98.0428% ( 42) 00:44:30.467 3105.090 - 3119.399: 98.0829% ( 39) 00:44:30.467 3119.399 - 3133.708: 98.1292% ( 45) 00:44:30.467 3133.708 - 3148.017: 98.1693% ( 39) 00:44:30.467 3148.017 - 3162.327: 98.2105% ( 40) 00:44:30.467 3162.327 - 3176.636: 98.2506% ( 39) 00:44:30.467 3176.636 - 3190.945: 98.2846% ( 33) 00:44:30.467 3190.945 - 3205.254: 98.3237% ( 38) 00:44:30.467 3205.254 - 3219.563: 98.3638% ( 39) 00:44:30.467 3219.563 - 3233.872: 98.3957% ( 31) 00:44:30.467 3233.872 - 3248.182: 98.4359% ( 39) 
00:44:30.467 3248.182 - 3262.491: 98.4719% ( 35) 00:44:30.467 3262.491 - 3276.800: 98.5048% ( 32) 00:44:30.467 3276.800 - 3291.109: 98.5449% ( 39) 00:44:30.467 3291.109 - 3305.418: 98.5779% ( 32) 00:44:30.467 3305.418 - 3319.728: 98.6129% ( 34) 00:44:30.467 3319.728 - 3334.037: 98.6489% ( 35) 00:44:30.467 3334.037 - 3348.346: 98.6808% ( 31) 00:44:30.467 3348.346 - 3362.655: 98.7127% ( 31) 00:44:30.467 3362.655 - 3376.964: 98.7456% ( 32) 00:44:30.467 3376.964 - 3391.273: 98.7765% ( 30) 00:44:30.467 3391.273 - 3405.583: 98.8063% ( 29) 00:44:30.467 3405.583 - 3419.892: 98.8341% ( 27) 00:44:30.467 3419.892 - 3434.201: 98.8639% ( 29) 00:44:30.467 3434.201 - 3448.510: 98.8938% ( 29) 00:44:30.467 3448.510 - 3462.819: 98.9195% ( 25) 00:44:30.467 3462.819 - 3477.128: 98.9463% ( 26) 00:44:30.467 3477.128 - 3491.438: 98.9730% ( 26) 00:44:30.467 3491.438 - 3505.747: 98.9967% ( 23) 00:44:30.467 3505.747 - 3520.056: 99.0245% ( 27) 00:44:30.467 3520.056 - 3534.365: 99.0481% ( 23) 00:44:30.467 3534.365 - 3548.674: 99.0677% ( 19) 00:44:30.467 3548.674 - 3562.983: 99.0903% ( 22) 00:44:30.467 3562.983 - 3577.293: 99.1089% ( 18) 00:44:30.467 3577.293 - 3591.602: 99.1294% ( 20) 00:44:30.467 3591.602 - 3605.911: 99.1469% ( 17) 00:44:30.467 3605.911 - 3620.220: 99.1654% ( 18) 00:44:30.467 3620.220 - 3634.529: 99.1829% ( 17) 00:44:30.467 3634.529 - 3648.838: 99.2025% ( 19) 00:44:30.467 3648.838 - 3663.148: 99.2169% ( 14) 00:44:30.467 3663.148 - 3691.766: 99.2539% ( 36) 00:44:30.467 3691.766 - 3720.384: 99.2879% ( 33) 00:44:30.467 3720.384 - 3749.003: 99.3188% ( 30) 00:44:30.467 3749.003 - 3777.621: 99.3507% ( 31) 00:44:30.467 3777.621 - 3806.239: 99.3785% ( 27) 00:44:30.467 3806.239 - 3834.858: 99.4083% ( 29) 00:44:30.467 3834.858 - 3863.476: 99.4330% ( 24) 00:44:30.467 3863.476 - 3892.094: 99.4598% ( 26) 00:44:30.467 3892.094 - 3920.713: 99.4834% ( 23) 00:44:30.467 3920.713 - 3949.331: 99.5071% ( 23) 00:44:30.467 3949.331 - 3977.949: 99.5287% ( 21) 00:44:30.467 3977.949 - 4006.568: 99.5524% ( 23) 00:44:30.467 4006.568 - 4035.186: 99.5760% ( 23) 00:44:30.467 4035.186 - 4063.804: 99.5966% ( 20) 00:44:30.467 4063.804 - 4092.423: 99.6203% ( 23) 00:44:30.467 4092.423 - 4121.041: 99.6429% ( 22) 00:44:30.467 4121.041 - 4149.659: 99.6666% ( 23) 00:44:30.467 4149.659 - 4178.278: 99.6882% ( 21) 00:44:30.467 4178.278 - 4206.896: 99.7047% ( 16) 00:44:30.467 4206.896 - 4235.514: 99.7201% ( 15) 00:44:30.467 4235.514 - 4264.133: 99.7366% ( 16) 00:44:30.467 4264.133 - 4292.751: 99.7479% ( 11) 00:44:30.467 4292.751 - 4321.369: 99.7582% ( 10) 00:44:30.467 4321.369 - 4349.988: 99.7705% ( 12) 00:44:30.467 4349.988 - 4378.606: 99.7818% ( 11) 00:44:30.467 4378.606 - 4407.224: 99.7911% ( 9) 00:44:30.467 4407.224 - 4435.843: 99.8035% ( 12) 00:44:30.467 4435.843 - 4464.461: 99.8148% ( 11) 00:44:30.467 4464.461 - 4493.079: 99.8261% ( 11) 00:44:30.467 4493.079 - 4521.698: 99.8395% ( 13) 00:44:30.467 4521.698 - 4550.316: 99.8498% ( 10) 00:44:30.467 4550.316 - 4578.934: 99.8621% ( 12) 00:44:30.467 4578.934 - 4607.553: 99.8724% ( 10) 00:44:30.467 4607.553 - 4636.171: 99.8827% ( 10) 00:44:30.467 4636.171 - 4664.790: 99.8909% ( 8) 00:44:30.467 4664.790 - 4693.408: 99.8971% ( 6) 00:44:30.467 4693.408 - 4722.026: 99.9033% ( 6) 00:44:30.467 4722.026 - 4750.645: 99.9094% ( 6) 00:44:30.467 4750.645 - 4779.263: 99.9177% ( 8) 00:44:30.467 4779.263 - 4807.881: 99.9239% ( 6) 00:44:30.467 4807.881 - 4836.500: 99.9321% ( 8) 00:44:30.467 4836.500 - 4865.118: 99.9383% ( 6) 00:44:30.467 4865.118 - 4893.736: 99.9434% ( 5) 00:44:30.467 4893.736 - 4922.355: 
99.9444% ( 1) 00:44:30.467 4922.355 - 4950.973: 99.9465% ( 2) 00:44:30.467 4950.973 - 4979.591: 99.9475% ( 1) 00:44:30.467 4979.591 - 5008.210: 99.9485% ( 1) 00:44:30.467 5008.210 - 5036.828: 99.9496% ( 1) 00:44:30.467 5036.828 - 5065.446: 99.9506% ( 1) 00:44:30.467 5065.446 - 5094.065: 99.9516% ( 1) 00:44:30.467 5122.683 - 5151.301: 99.9537% ( 2) 00:44:30.467 5151.301 - 5179.920: 99.9547% ( 1) 00:44:30.467 5208.538 - 5237.156: 99.9568% ( 2) 00:44:30.467 5237.156 - 5265.775: 99.9578% ( 1) 00:44:30.467 5294.393 - 5323.011: 99.9588% ( 1) 00:44:30.467 5323.011 - 5351.630: 99.9599% ( 1) 00:44:30.467 5351.630 - 5380.248: 99.9609% ( 1) 00:44:30.467 5380.248 - 5408.866: 99.9619% ( 1) 00:44:30.467 5408.866 - 5437.485: 99.9630% ( 1) 00:44:30.467 5437.485 - 5466.103: 99.9640% ( 1) 00:44:30.467 5466.103 - 5494.721: 99.9650% ( 1) 00:44:30.467 5494.721 - 5523.340: 99.9660% ( 1) 00:44:30.467 5523.340 - 5551.958: 99.9671% ( 1) 00:44:30.467 5551.958 - 5580.576: 99.9681% ( 1) 00:44:30.467 5580.576 - 5609.195: 99.9691% ( 1) 00:44:30.467 5609.195 - 5637.813: 99.9702% ( 1) 00:44:30.467 5637.813 - 5666.431: 99.9712% ( 1) 00:44:30.467 5666.431 - 5695.050: 99.9722% ( 1) 00:44:30.467 5695.050 - 5723.668: 99.9732% ( 1) 00:44:30.467 5723.668 - 5752.286: 99.9753% ( 2) 00:44:30.467 5752.286 - 5780.905: 99.9763% ( 1) 00:44:30.467 5780.905 - 5809.523: 99.9774% ( 1) 00:44:30.467 5809.523 - 5838.141: 99.9784% ( 1) 00:44:30.467 5838.141 - 5866.760: 99.9794% ( 1) 00:44:30.467 5866.760 - 5895.378: 99.9804% ( 1) 00:44:30.467 5895.378 - 5923.997: 99.9815% ( 1) 00:44:30.467 5923.997 - 5952.615: 99.9825% ( 1) 00:44:30.467 5952.615 - 5981.233: 99.9835% ( 1) 00:44:30.467 5981.233 - 6009.852: 99.9846% ( 1) 00:44:30.467 6009.852 - 6038.470: 99.9856% ( 1) 00:44:30.467 6038.470 - 6067.088: 99.9866% ( 1) 00:44:30.467 6067.088 - 6095.707: 99.9877% ( 1) 00:44:30.467 6095.707 - 6124.325: 99.9887% ( 1) 00:44:30.467 6124.325 - 6152.943: 99.9897% ( 1) 00:44:30.467 6152.943 - 6181.562: 99.9907% ( 1) 00:44:30.467 6181.562 - 6210.180: 99.9918% ( 1) 00:44:30.467 6210.180 - 6238.798: 99.9928% ( 1) 00:44:30.467 6238.798 - 6267.417: 99.9938% ( 1) 00:44:30.467 6267.417 - 6296.035: 99.9949% ( 1) 00:44:30.467 6296.035 - 6324.653: 99.9959% ( 1) 00:44:30.467 6324.653 - 6353.272: 99.9969% ( 1) 00:44:30.467 6353.272 - 6381.890: 99.9979% ( 1) 00:44:30.467 6381.890 - 6410.508: 99.9990% ( 1) 00:44:30.467 6439.127 - 6467.745: 100.0000% ( 1) 00:44:30.467 00:44:30.467 21:58:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:44:31.846 Initializing NVMe Controllers 00:44:31.846 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:31.846 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:31.846 Initialization complete. Launching workers. 
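A note on reading the spdk_nvme_perf reports in this run: the "Summary latency data" block lists cumulative completion-latency percentiles, so for the read pass above the 99.00000% row (3520.056us) is the p99 latency, and the histogram after it gives the cumulative I/O count per latency bucket. If the raw tool output were captured to a file with one entry per line, a percentile could be pulled out for a quick regression check with a short bash sketch like the one below; the file name perf_read.log and the 5000 us budget are purely illustrative assumptions, not part of the SPDK test scripts, and the write-mode report continues immediately after this aside:

# Hypothetical post-processing, not part of nvme.sh: extract the p99 completion
# latency from a saved spdk_nvme_perf report and compare it against a budget.
# Assumes the read run's stdout was saved, one line per entry, as perf_read.log.
p99=$(grep -m1 '99\.00000% :' perf_read.log | awk '{ gsub(/us/, "", $NF); print $NF }')
echo "read p99 latency: ${p99} us"
# The 5000 us limit below is an arbitrary example threshold, not an SPDK default.
awk -v p99="$p99" 'BEGIN { exit (p99 < 5000) ? 0 : 1 }' \
    && echo "p99 within budget" \
    || echo "p99 exceeds budget"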
00:44:31.846 ======================================================== 00:44:31.846 Latency(us) 00:44:31.846 Device Information : IOPS MiB/s Average min max 00:44:31.846 PCIE (0000:00:10.0) NSID 1 from core 0: 54969.67 644.18 2328.23 495.30 17905.01 00:44:31.846 ======================================================== 00:44:31.846 Total : 54969.67 644.18 2328.23 495.30 17905.01 00:44:31.846 00:44:31.846 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:31.846 ================================================================================= 00:44:31.846 1.00000% : 905.055us 00:44:31.846 10.00000% : 1294.980us 00:44:31.846 25.00000% : 1645.555us 00:44:31.846 50.00000% : 2174.994us 00:44:31.846 75.00000% : 3004.926us 00:44:31.846 90.00000% : 3376.964us 00:44:31.846 95.00000% : 3749.003us 00:44:31.846 98.00000% : 4206.896us 00:44:31.846 99.00000% : 4693.408us 00:44:31.846 99.50000% : 5294.393us 00:44:31.846 99.90000% : 7183.203us 00:44:31.846 99.99000% : 10188.129us 00:44:31.846 99.99900% : 17972.318us 00:44:31.846 99.99990% : 17972.318us 00:44:31.846 99.99999% : 17972.318us 00:44:31.846 00:44:31.846 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:44:31.846 ============================================================================== 00:44:31.846 Range in us Cumulative IO count 00:44:31.846 493.666 - 497.244: 0.0018% ( 1) 00:44:31.846 522.285 - 525.862: 0.0036% ( 1) 00:44:31.846 525.862 - 529.439: 0.0055% ( 1) 00:44:31.846 536.594 - 540.171: 0.0091% ( 2) 00:44:31.846 547.326 - 550.903: 0.0127% ( 2) 00:44:31.846 583.099 - 586.676: 0.0145% ( 1) 00:44:31.846 597.408 - 600.985: 0.0164% ( 1) 00:44:31.846 608.140 - 611.717: 0.0182% ( 1) 00:44:31.846 611.717 - 615.294: 0.0200% ( 1) 00:44:31.846 615.294 - 618.872: 0.0255% ( 3) 00:44:31.846 622.449 - 626.026: 0.0273% ( 1) 00:44:31.846 626.026 - 629.603: 0.0291% ( 1) 00:44:31.846 633.181 - 636.758: 0.0327% ( 2) 00:44:31.846 643.913 - 647.490: 0.0345% ( 1) 00:44:31.846 647.490 - 651.067: 0.0382% ( 2) 00:44:31.846 658.222 - 661.799: 0.0436% ( 3) 00:44:31.846 661.799 - 665.376: 0.0454% ( 1) 00:44:31.846 665.376 - 668.954: 0.0491% ( 2) 00:44:31.846 668.954 - 672.531: 0.0509% ( 1) 00:44:31.846 672.531 - 676.108: 0.0545% ( 2) 00:44:31.846 679.686 - 683.263: 0.0564% ( 1) 00:44:31.846 686.840 - 690.417: 0.0636% ( 4) 00:44:31.846 690.417 - 693.995: 0.0709% ( 4) 00:44:31.846 693.995 - 697.572: 0.0782% ( 4) 00:44:31.846 697.572 - 701.149: 0.0854% ( 4) 00:44:31.846 701.149 - 704.727: 0.0891% ( 2) 00:44:31.846 704.727 - 708.304: 0.0982% ( 5) 00:44:31.846 708.304 - 711.881: 0.1091% ( 6) 00:44:31.846 711.881 - 715.459: 0.1145% ( 3) 00:44:31.846 715.459 - 719.036: 0.1182% ( 2) 00:44:31.846 719.036 - 722.613: 0.1236% ( 3) 00:44:31.846 722.613 - 726.190: 0.1327% ( 5) 00:44:31.846 726.190 - 729.768: 0.1382% ( 3) 00:44:31.846 729.768 - 733.345: 0.1418% ( 2) 00:44:31.846 733.345 - 736.922: 0.1454% ( 2) 00:44:31.846 736.922 - 740.500: 0.1563% ( 6) 00:44:31.846 740.500 - 744.077: 0.1673% ( 6) 00:44:31.846 744.077 - 747.654: 0.1745% ( 4) 00:44:31.846 747.654 - 751.231: 0.1891% ( 8) 00:44:31.846 751.231 - 754.809: 0.1945% ( 3) 00:44:31.846 754.809 - 758.386: 0.2018% ( 4) 00:44:31.846 758.386 - 761.963: 0.2127% ( 6) 00:44:31.846 761.963 - 765.541: 0.2254% ( 7) 00:44:31.846 765.541 - 769.118: 0.2418% ( 9) 00:44:31.846 769.118 - 772.695: 0.2545% ( 7) 00:44:31.846 772.695 - 776.272: 0.2654% ( 6) 00:44:31.846 776.272 - 779.850: 0.2782% ( 7) 00:44:31.846 779.850 - 783.427: 0.2927% ( 8) 00:44:31.846 783.427 - 787.004: 0.3109% ( 10) 00:44:31.846 
787.004 - 790.582: 0.3181% ( 4) 00:44:31.846 790.582 - 794.159: 0.3345% ( 9) 00:44:31.846 794.159 - 797.736: 0.3436% ( 5) 00:44:31.846 797.736 - 801.314: 0.3672% ( 13) 00:44:31.846 801.314 - 804.891: 0.3818% ( 8) 00:44:31.846 804.891 - 808.468: 0.4018% ( 11) 00:44:31.846 808.468 - 812.045: 0.4127% ( 6) 00:44:31.846 812.045 - 815.623: 0.4254% ( 7) 00:44:31.846 815.623 - 819.200: 0.4363% ( 6) 00:44:31.846 819.200 - 822.777: 0.4454% ( 5) 00:44:31.846 822.777 - 826.355: 0.4654% ( 11) 00:44:31.846 826.355 - 829.932: 0.4818% ( 9) 00:44:31.846 829.932 - 833.509: 0.5036% ( 12) 00:44:31.846 833.509 - 837.086: 0.5236% ( 11) 00:44:31.846 837.086 - 840.664: 0.5472% ( 13) 00:44:31.846 840.664 - 844.241: 0.5654% ( 10) 00:44:31.846 844.241 - 847.818: 0.5836% ( 10) 00:44:31.846 847.818 - 851.396: 0.5908% ( 4) 00:44:31.846 851.396 - 854.973: 0.6072% ( 9) 00:44:31.846 854.973 - 858.550: 0.6236% ( 9) 00:44:31.846 858.550 - 862.128: 0.6563% ( 18) 00:44:31.846 862.128 - 865.705: 0.6727% ( 9) 00:44:31.846 865.705 - 869.282: 0.6908% ( 10) 00:44:31.846 869.282 - 872.859: 0.6999% ( 5) 00:44:31.846 872.859 - 876.437: 0.7417% ( 23) 00:44:31.846 876.437 - 880.014: 0.7690% ( 15) 00:44:31.846 880.014 - 883.591: 0.8199% ( 28) 00:44:31.846 883.591 - 887.169: 0.8690% ( 27) 00:44:31.846 887.169 - 890.746: 0.8926% ( 13) 00:44:31.846 890.746 - 894.323: 0.9272% ( 19) 00:44:31.846 894.323 - 897.900: 0.9563% ( 16) 00:44:31.846 897.900 - 901.478: 0.9944% ( 21) 00:44:31.846 901.478 - 905.055: 1.0199% ( 14) 00:44:31.846 905.055 - 908.632: 1.0508% ( 17) 00:44:31.846 908.632 - 912.210: 1.0781% ( 15) 00:44:31.846 912.210 - 915.787: 1.1144% ( 20) 00:44:31.846 915.787 - 922.941: 1.1799% ( 36) 00:44:31.846 922.941 - 930.096: 1.2562% ( 42) 00:44:31.846 930.096 - 937.251: 1.3453% ( 49) 00:44:31.846 937.251 - 944.405: 1.4289% ( 46) 00:44:31.846 944.405 - 951.560: 1.5089% ( 44) 00:44:31.846 951.560 - 958.714: 1.5980% ( 49) 00:44:31.846 958.714 - 965.869: 1.7089% ( 61) 00:44:31.846 965.869 - 973.024: 1.8562% ( 81) 00:44:31.846 973.024 - 980.178: 1.9525% ( 53) 00:44:31.846 980.178 - 987.333: 2.0834% ( 72) 00:44:31.846 987.333 - 994.487: 2.2070% ( 68) 00:44:31.846 994.487 - 1001.642: 2.3434% ( 75) 00:44:31.846 1001.642 - 1008.797: 2.4834% ( 77) 00:44:31.846 1008.797 - 1015.951: 2.5979% ( 63) 00:44:31.846 1015.951 - 1023.106: 2.7579% ( 88) 00:44:31.846 1023.106 - 1030.260: 2.9433% ( 102) 00:44:31.846 1030.260 - 1037.415: 3.1015% ( 87) 00:44:31.846 1037.415 - 1044.569: 3.2615% ( 88) 00:44:31.846 1044.569 - 1051.724: 3.4614% ( 110) 00:44:31.846 1051.724 - 1058.879: 3.6832% ( 122) 00:44:31.846 1058.879 - 1066.033: 3.8087% ( 69) 00:44:31.846 1066.033 - 1073.188: 3.9777% ( 93) 00:44:31.846 1073.188 - 1080.342: 4.1523% ( 96) 00:44:31.846 1080.342 - 1087.497: 4.3195% ( 92) 00:44:31.846 1087.497 - 1094.652: 4.5850% ( 146) 00:44:31.847 1094.652 - 1101.806: 4.7231% ( 76) 00:44:31.847 1101.806 - 1108.961: 4.8704% ( 81) 00:44:31.847 1108.961 - 1116.115: 5.0104% ( 77) 00:44:31.847 1116.115 - 1123.270: 5.1794% ( 93) 00:44:31.847 1123.270 - 1130.424: 5.3212% ( 78) 00:44:31.847 1130.424 - 1137.579: 5.5030% ( 100) 00:44:31.847 1137.579 - 1144.734: 5.6521% ( 82) 00:44:31.847 1144.734 - 1151.888: 5.9175% ( 146) 00:44:31.847 1151.888 - 1159.043: 6.0830% ( 91) 00:44:31.847 1159.043 - 1166.197: 6.2902% ( 114) 00:44:31.847 1166.197 - 1173.352: 6.4847% ( 107) 00:44:31.847 1173.352 - 1180.507: 6.6684% ( 101) 00:44:31.847 1180.507 - 1187.661: 6.9083% ( 132) 00:44:31.847 1187.661 - 1194.816: 7.0592% ( 83) 00:44:31.847 1194.816 - 1201.970: 7.3174% ( 142) 00:44:31.847 
1201.970 - 1209.125: 7.5337% ( 119) 00:44:31.847 1209.125 - 1216.279: 7.7482% ( 118) 00:44:31.847 1216.279 - 1223.434: 7.9337% ( 102) 00:44:31.847 1223.434 - 1230.589: 8.1900% ( 141) 00:44:31.847 1230.589 - 1237.743: 8.3991% ( 115) 00:44:31.847 1237.743 - 1244.898: 8.6245% ( 124) 00:44:31.847 1244.898 - 1252.052: 8.8263% ( 111) 00:44:31.847 1252.052 - 1259.207: 9.0790% ( 139) 00:44:31.847 1259.207 - 1266.362: 9.2808% ( 111) 00:44:31.847 1266.362 - 1273.516: 9.5335% ( 139) 00:44:31.847 1273.516 - 1280.671: 9.7626% ( 126) 00:44:31.847 1280.671 - 1287.825: 9.9989% ( 130) 00:44:31.847 1287.825 - 1294.980: 10.2062% ( 114) 00:44:31.847 1294.980 - 1302.134: 10.4898% ( 156) 00:44:31.847 1302.134 - 1309.289: 10.7134% ( 123) 00:44:31.847 1309.289 - 1316.444: 10.9243% ( 116) 00:44:31.847 1316.444 - 1323.598: 11.1406% ( 119) 00:44:31.847 1323.598 - 1330.753: 11.3224% ( 100) 00:44:31.847 1330.753 - 1337.907: 11.5224% ( 110) 00:44:31.847 1337.907 - 1345.062: 11.7096% ( 103) 00:44:31.847 1345.062 - 1352.217: 11.8787% ( 93) 00:44:31.847 1352.217 - 1359.371: 12.0823% ( 112) 00:44:31.847 1359.371 - 1366.526: 12.3623% ( 154) 00:44:31.847 1366.526 - 1373.680: 12.6677% ( 168) 00:44:31.847 1373.680 - 1380.835: 12.9640% ( 163) 00:44:31.847 1380.835 - 1387.990: 13.3295% ( 201) 00:44:31.847 1387.990 - 1395.144: 13.6022% ( 150) 00:44:31.847 1395.144 - 1402.299: 13.8803% ( 153) 00:44:31.847 1402.299 - 1409.453: 14.1312% ( 138) 00:44:31.847 1409.453 - 1416.608: 14.4166% ( 157) 00:44:31.847 1416.608 - 1423.762: 14.6802% ( 145) 00:44:31.847 1423.762 - 1430.917: 14.9384% ( 142) 00:44:31.847 1430.917 - 1438.072: 15.2092% ( 149) 00:44:31.847 1438.072 - 1445.226: 15.4910% ( 155) 00:44:31.847 1445.226 - 1452.381: 15.8201% ( 181) 00:44:31.847 1452.381 - 1459.535: 16.0837% ( 145) 00:44:31.847 1459.535 - 1466.690: 16.4237% ( 187) 00:44:31.847 1466.690 - 1473.845: 16.7800% ( 196) 00:44:31.847 1473.845 - 1480.999: 17.0654% ( 157) 00:44:31.847 1480.999 - 1488.154: 17.3945% ( 181) 00:44:31.847 1488.154 - 1495.308: 17.6944% ( 165) 00:44:31.847 1495.308 - 1502.463: 18.0653% ( 204) 00:44:31.847 1502.463 - 1509.617: 18.3344% ( 148) 00:44:31.847 1509.617 - 1516.772: 18.7161% ( 210) 00:44:31.847 1516.772 - 1523.927: 19.1452% ( 236) 00:44:31.847 1523.927 - 1531.081: 19.4270% ( 155) 00:44:31.847 1531.081 - 1538.236: 19.6833% ( 141) 00:44:31.847 1538.236 - 1545.390: 19.9924% ( 170) 00:44:31.847 1545.390 - 1552.545: 20.3323% ( 187) 00:44:31.847 1552.545 - 1559.700: 20.6577% ( 179) 00:44:31.847 1559.700 - 1566.854: 21.0232% ( 201) 00:44:31.847 1566.854 - 1574.009: 21.3304% ( 169) 00:44:31.847 1574.009 - 1581.163: 21.7194% ( 214) 00:44:31.847 1581.163 - 1588.318: 22.0740% ( 195) 00:44:31.847 1588.318 - 1595.472: 22.4539% ( 209) 00:44:31.847 1595.472 - 1602.627: 22.7975% ( 189) 00:44:31.847 1602.627 - 1609.782: 23.1611% ( 200) 00:44:31.847 1609.782 - 1616.936: 23.4865% ( 179) 00:44:31.847 1616.936 - 1624.091: 23.8319% ( 190) 00:44:31.847 1624.091 - 1631.245: 24.2664% ( 239) 00:44:31.847 1631.245 - 1638.400: 24.6609% ( 217) 00:44:31.847 1638.400 - 1645.555: 25.0064% ( 190) 00:44:31.847 1645.555 - 1652.709: 25.3863% ( 209) 00:44:31.847 1652.709 - 1659.864: 25.7554% ( 203) 00:44:31.847 1659.864 - 1667.018: 26.1481% ( 216) 00:44:31.847 1667.018 - 1674.173: 26.5389% ( 215) 00:44:31.847 1674.173 - 1681.328: 26.8316% ( 161) 00:44:31.847 1681.328 - 1688.482: 27.2152% ( 211) 00:44:31.847 1688.482 - 1695.637: 27.5461% ( 182) 00:44:31.847 1695.637 - 1702.791: 27.8388% ( 161) 00:44:31.847 1702.791 - 1709.946: 28.1915% ( 194) 00:44:31.847 1709.946 - 
[… remaining cumulative latency buckets from the spdk_nvme_perf run omitted here for readability: the distribution climbs from roughly 28.5% cumulative at ~1717 us through progressively sparser buckets until it reaches 100.0000% in the final bucket kept below …]
17857.845 - 17972.318: 100.0000% ( 1) 00:44:31.848 00:44:31.848 ************************************ 00:44:31.848 END TEST nvme_perf 00:44:31.848 ************************************ 00:44:31.848 21:58:05 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:44:31.848 00:44:31.848 real 0m2.644s 00:44:31.848 user
0m2.206s 00:44:31.848 sys 0m0.282s 00:44:31.848 21:58:05 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:31.848 21:58:05 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:44:31.849 21:58:05 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:31.849 21:58:05 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:44:31.849 21:58:05 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:44:31.849 21:58:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:31.849 21:58:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:31.849 ************************************ 00:44:31.849 START TEST nvme_hello_world 00:44:31.849 ************************************ 00:44:31.849 21:58:05 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:44:32.108 Initializing NVMe Controllers 00:44:32.108 Attached to 0000:00:10.0 00:44:32.108 Namespace ID: 1 size: 5GB 00:44:32.108 Initialization complete. 00:44:32.108 INFO: using host memory buffer for IO 00:44:32.108 Hello world! 00:44:32.368 00:44:32.368 real 0m0.351s 00:44:32.368 user 0m0.111s 00:44:32.368 sys 0m0.143s 00:44:32.368 21:58:05 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:32.368 ************************************ 00:44:32.368 END TEST nvme_hello_world 00:44:32.368 ************************************ 00:44:32.368 21:58:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:44:32.368 21:58:05 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:32.368 21:58:05 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:44:32.368 21:58:05 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:32.368 21:58:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:32.368 21:58:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:32.368 ************************************ 00:44:32.368 START TEST nvme_sgl 00:44:32.368 ************************************ 00:44:32.368 21:58:05 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:44:32.627 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:44:32.627 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:44:32.627 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:44:32.627 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:44:32.627 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:44:32.627 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:44:32.627 NVMe Readv/Writev Request test 00:44:32.627 Attached to 0000:00:10.0 00:44:32.627 0000:00:10.0: build_io_request_2 test passed 00:44:32.627 0000:00:10.0: build_io_request_4 test passed 00:44:32.627 0000:00:10.0: build_io_request_5 test passed 00:44:32.627 0000:00:10.0: build_io_request_6 test passed 00:44:32.627 0000:00:10.0: build_io_request_7 test passed 00:44:32.627 0000:00:10.0: build_io_request_10 test passed 00:44:32.627 Cleaning up... 
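Each of the tests in this log is launched through the run_test helper from autotest_common.sh, which is what produces the START TEST / END TEST banners and the real/user/sys timings interleaved above and below. A minimal sketch of that wrapper pattern follows; it is an illustrative reconstruction under assumptions (banner width, use of the bash time keyword), not the verbatim SPDK helper:

  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"            # run the test binary or shell function with its arguments
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }

  # Hypothetical manual invocation mirroring the trace above:
  # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl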
00:44:32.627 ************************************ 00:44:32.627 END TEST nvme_sgl 00:44:32.627 ************************************ 00:44:32.627 00:44:32.627 real 0m0.410s 00:44:32.627 user 0m0.183s 00:44:32.627 sys 0m0.142s 00:44:32.627 21:58:05 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:32.627 21:58:05 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:44:32.886 21:58:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:32.886 21:58:06 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:44:32.886 21:58:06 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:32.886 21:58:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:32.886 21:58:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:32.886 ************************************ 00:44:32.886 START TEST nvme_e2edp 00:44:32.886 ************************************ 00:44:32.886 21:58:06 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:44:33.145 NVMe Write/Read with End-to-End data protection test 00:44:33.145 Attached to 0000:00:10.0 00:44:33.145 Cleaning up... 00:44:33.145 00:44:33.145 real 0m0.306s 00:44:33.145 user 0m0.081s 00:44:33.145 sys 0m0.148s 00:44:33.145 21:58:06 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:33.145 21:58:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:44:33.145 ************************************ 00:44:33.145 END TEST nvme_e2edp 00:44:33.145 ************************************ 00:44:33.145 21:58:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:33.145 21:58:06 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:44:33.145 21:58:06 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:33.145 21:58:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:33.145 21:58:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:33.145 ************************************ 00:44:33.145 START TEST nvme_reserve 00:44:33.145 ************************************ 00:44:33.145 21:58:06 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:44:33.404 ===================================================== 00:44:33.404 NVMe Controller at PCI bus 0, device 16, function 0 00:44:33.404 ===================================================== 00:44:33.404 Reservations: Not Supported 00:44:33.404 Reservation test passed 00:44:33.404 00:44:33.404 real 0m0.305s 00:44:33.404 user 0m0.105s 00:44:33.404 sys 0m0.124s 00:44:33.404 21:58:06 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:33.404 21:58:06 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:44:33.404 ************************************ 00:44:33.404 END TEST nvme_reserve 00:44:33.404 ************************************ 00:44:33.404 21:58:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:33.404 21:58:06 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:44:33.404 21:58:06 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:33.404 21:58:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:33.404 21:58:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:33.404 ************************************ 00:44:33.404 START TEST 
nvme_err_injection 00:44:33.404 ************************************ 00:44:33.404 21:58:06 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:44:33.971 NVMe Error Injection test 00:44:33.971 Attached to 0000:00:10.0 00:44:33.971 0000:00:10.0: get features failed as expected 00:44:33.971 0000:00:10.0: get features successfully as expected 00:44:33.971 0000:00:10.0: read failed as expected 00:44:33.971 0000:00:10.0: read successfully as expected 00:44:33.971 Cleaning up... 00:44:33.971 00:44:33.971 real 0m0.331s 00:44:33.971 user 0m0.103s 00:44:33.971 sys 0m0.141s 00:44:33.971 21:58:07 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:33.971 ************************************ 00:44:33.971 END TEST nvme_err_injection 00:44:33.971 ************************************ 00:44:33.972 21:58:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:44:33.972 21:58:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:33.972 21:58:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:44:33.972 21:58:07 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:44:33.972 21:58:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:33.972 21:58:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:33.972 ************************************ 00:44:33.972 START TEST nvme_overhead 00:44:33.972 ************************************ 00:44:33.972 21:58:07 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:44:35.356 Initializing NVMe Controllers 00:44:35.356 Attached to 0000:00:10.0 00:44:35.356 Initialization complete. Launching workers. 
00:44:35.356 submit (in ns) avg, min, max = 11898.4, 9036.7, 608767.7 00:44:35.356 complete (in ns) avg, min, max = 7259.1, 5361.6, 120937.1 00:44:35.356 00:44:35.356 Submit histogram 00:44:35.356 ================ 00:44:35.356 Range in us Cumulative Count
[per-bucket cumulative percentages and counts omitted for readability: the submit histogram runs from 8.999 - 9.055 us (0.0085%) up to the 100.0000% bucket kept below]
608.140 - 611.717: 100.0000% ( 1) 00:44:35.358 00:44:35.358 Complete histogram 00:44:35.358 ================== 00:44:35.358 Range in us Cumulative Count
[per-bucket cumulative percentages and counts omitted for readability: the complete histogram runs from 5.338 - 5.366 us (0.0171%) up to the 100.0000% bucket kept below]
120.734 - 121.628: 100.0000% ( 1) 00:44:35.359 00:44:35.359 00:44:35.359 real 0m1.284s 00:44:35.359 user 0m1.099s 00:44:35.359 sys 0m0.125s 00:44:35.359 21:58:08 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:35.359 ************************************ 00:44:35.359 END TEST nvme_overhead 00:44:35.359 ************************************ 00:44:35.359 21:58:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:44:35.359 21:58:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:35.359 21:58:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:44:35.359 21:58:08 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:44:35.359 21:58:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:35.359 21:58:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:35.359 ************************************ 00:44:35.359 START TEST nvme_arbitration 00:44:35.359 ************************************ 00:44:35.359 21:58:08 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:44:38.644 Initializing NVMe Controllers 00:44:38.644 Attached to 0000:00:10.0 00:44:38.644 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:44:38.644 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:44:38.644 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:44:38.644 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:44:38.644 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:44:38.644 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:44:38.644 Starting thread on core 1 with urgent priority queue 00:44:38.644 Starting thread on core 2 with urgent priority queue 00:44:38.644 Starting thread on core 3 with urgent priority queue 00:44:38.644 Starting thread on core 0 with urgent priority queue 00:44:38.644 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:44:38.644 QEMU NVMe Ctrl (12340 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:44:38.644 QEMU NVMe Ctrl (12340 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:44:38.644 QEMU NVMe Ctrl (12340 ) core 3: 426.67 IO/s 234.38 secs/100000 ios 00:44:38.644 ======================================================== 00:44:38.644 00:44:38.644 ************************************ 00:44:38.644 END TEST nvme_arbitration 00:44:38.644 ************************************ 00:44:38.644 00:44:38.644 real 0m3.414s 00:44:38.644 user 0m9.310s 00:44:38.644 sys 0m0.148s 00:44:38.644 21:58:11 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:38.644 21:58:11 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:44:38.644 21:58:11 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:38.644 21:58:11 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:44:38.644 21:58:11 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:44:38.644 21:58:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:38.644 21:58:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:38.644 ************************************ 00:44:38.644 START TEST nvme_single_aen 00:44:38.644 ************************************ 00:44:38.644 21:58:11 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:44:38.903 Asynchronous Event Request test 00:44:38.903 Attached to 0000:00:10.0 00:44:38.903 Reset controller to setup AER completions for this process 00:44:38.903 Registering asynchronous event callbacks... 00:44:38.903 Getting orig temperature thresholds of all controllers 00:44:38.903 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:38.903 Setting all controllers temperature threshold low to trigger AER 00:44:38.903 Waiting for all controllers temperature threshold to be set lower 00:44:38.903 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:38.903 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:44:38.903 Waiting for all controllers to trigger AER and reset threshold 00:44:38.903 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:38.903 Cleaning up... 
00:44:38.903 ************************************ 00:44:38.903 END TEST nvme_single_aen 00:44:38.903 ************************************ 00:44:38.903 00:44:38.903 real 0m0.283s 00:44:38.903 user 0m0.077s 00:44:38.903 sys 0m0.151s 00:44:38.903 21:58:12 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:38.903 21:58:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:44:39.163 21:58:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:39.163 21:58:12 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:44:39.163 21:58:12 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:39.163 21:58:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:39.163 21:58:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:39.163 ************************************ 00:44:39.163 START TEST nvme_doorbell_aers 00:44:39.163 ************************************ 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:44:39.163 21:58:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:44:39.422 [2024-07-15 21:58:12.678514] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173916) is not found. Dropping the request. 00:44:49.457 Executing: test_write_invalid_db 00:44:49.457 Waiting for AER completion... 00:44:49.457 Failure: test_write_invalid_db 00:44:49.457 00:44:49.457 Executing: test_invalid_db_write_overflow_sq 00:44:49.457 Waiting for AER completion... 00:44:49.457 Failure: test_invalid_db_write_overflow_sq 00:44:49.457 00:44:49.457 Executing: test_invalid_db_write_overflow_cq 00:44:49.457 Waiting for AER completion... 
00:44:49.457 Failure: test_invalid_db_write_overflow_cq 00:44:49.457 00:44:49.457 ************************************ 00:44:49.457 END TEST nvme_doorbell_aers 00:44:49.457 00:44:49.457 real 0m10.138s 00:44:49.457 user 0m8.962s 00:44:49.457 sys 0m1.142s 00:44:49.457 21:58:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:49.457 21:58:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:44:49.457 ************************************ 00:44:49.457 21:58:22 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:49.457 21:58:22 nvme -- nvme/nvme.sh@97 -- # uname 00:44:49.457 21:58:22 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:44:49.457 21:58:22 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:44:49.457 21:58:22 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:44:49.457 21:58:22 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:49.457 21:58:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:49.457 ************************************ 00:44:49.457 START TEST nvme_multi_aen 00:44:49.457 ************************************ 00:44:49.457 21:58:22 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:44:49.457 [2024-07-15 21:58:22.776382] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173916) is not found. Dropping the request. 00:44:49.457 [2024-07-15 21:58:22.776598] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173916) is not found. Dropping the request. 00:44:49.457 [2024-07-15 21:58:22.776684] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173916) is not found. Dropping the request. 00:44:49.457 Child process pid: 174122 00:44:50.023 [Child] Asynchronous Event Request test 00:44:50.023 [Child] Attached to 0000:00:10.0 00:44:50.023 [Child] Registering asynchronous event callbacks... 00:44:50.023 [Child] Getting orig temperature thresholds of all controllers 00:44:50.023 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:50.023 [Child] Waiting for all controllers to trigger AER and reset threshold 00:44:50.023 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:50.023 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:50.023 [Child] Cleaning up... 00:44:50.023 Asynchronous Event Request test 00:44:50.023 Attached to 0000:00:10.0 00:44:50.023 Reset controller to setup AER completions for this process 00:44:50.023 Registering asynchronous event callbacks... 00:44:50.023 Getting orig temperature thresholds of all controllers 00:44:50.023 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:44:50.023 Setting all controllers temperature threshold low to trigger AER 00:44:50.023 Waiting for all controllers temperature threshold to be set lower 00:44:50.023 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:44:50.023 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:44:50.023 Waiting for all controllers to trigger AER and reset threshold 00:44:50.023 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:44:50.023 Cleaning up... 
00:44:50.023 ************************************ 00:44:50.023 END TEST nvme_multi_aen 00:44:50.023 ************************************ 00:44:50.023 00:44:50.023 real 0m0.645s 00:44:50.023 user 0m0.165s 00:44:50.023 sys 0m0.302s 00:44:50.023 21:58:23 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:50.023 21:58:23 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:44:50.023 21:58:23 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:50.023 21:58:23 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:44:50.023 21:58:23 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:44:50.023 21:58:23 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:50.023 21:58:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:50.023 ************************************ 00:44:50.023 START TEST nvme_startup 00:44:50.023 ************************************ 00:44:50.023 21:58:23 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:44:50.280 Initializing NVMe Controllers 00:44:50.280 Attached to 0000:00:10.0 00:44:50.280 Initialization complete. 00:44:50.280 Time used:206188.203 (us). 00:44:50.280 00:44:50.280 real 0m0.313s 00:44:50.280 user 0m0.080s 00:44:50.280 sys 0m0.155s 00:44:50.280 21:58:23 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:50.280 21:58:23 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:44:50.280 ************************************ 00:44:50.280 END TEST nvme_startup 00:44:50.280 ************************************ 00:44:50.280 21:58:23 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:50.280 21:58:23 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:44:50.280 21:58:23 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:50.280 21:58:23 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:50.280 21:58:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:50.280 ************************************ 00:44:50.280 START TEST nvme_multi_secondary 00:44:50.280 ************************************ 00:44:50.280 21:58:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:44:50.280 21:58:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=174184 00:44:50.280 21:58:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:44:50.280 21:58:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=174185 00:44:50.280 21:58:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:44:50.280 21:58:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:44:53.562 Initializing NVMe Controllers 00:44:53.562 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:53.562 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:44:53.562 Initialization complete. Launching workers. 
00:44:53.562 ======================================================== 00:44:53.562 Latency(us) 00:44:53.562 Device Information : IOPS MiB/s Average min max 00:44:53.562 PCIE (0000:00:10.0) NSID 1 from core 2: 16132.98 63.02 990.90 126.91 21703.01 00:44:53.562 ======================================================== 00:44:53.562 Total : 16132.98 63.02 990.90 126.91 21703.01 00:44:53.562 00:44:53.820 21:58:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 174184 00:44:53.820 Initializing NVMe Controllers 00:44:53.820 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:53.820 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:44:53.820 Initialization complete. Launching workers. 00:44:53.820 ======================================================== 00:44:53.820 Latency(us) 00:44:53.820 Device Information : IOPS MiB/s Average min max 00:44:53.820 PCIE (0000:00:10.0) NSID 1 from core 1: 36991.17 144.50 432.16 126.67 6102.10 00:44:53.820 ======================================================== 00:44:53.820 Total : 36991.17 144.50 432.16 126.67 6102.10 00:44:53.820 00:44:56.385 Initializing NVMe Controllers 00:44:56.385 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:56.385 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:56.385 Initialization complete. Launching workers. 00:44:56.385 ======================================================== 00:44:56.385 Latency(us) 00:44:56.385 Device Information : IOPS MiB/s Average min max 00:44:56.385 PCIE (0000:00:10.0) NSID 1 from core 0: 43861.99 171.34 364.43 132.85 8409.59 00:44:56.385 ======================================================== 00:44:56.385 Total : 43861.99 171.34 364.43 132.85 8409.59 00:44:56.385 00:44:56.385 21:58:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 174185 00:44:56.385 21:58:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=174270 00:44:56.385 21:58:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:44:56.385 21:58:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=174271 00:44:56.385 21:58:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:44:56.385 21:58:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:44:59.687 Initializing NVMe Controllers 00:44:59.687 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:59.688 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:44:59.688 Initialization complete. Launching workers. 00:44:59.688 ======================================================== 00:44:59.688 Latency(us) 00:44:59.688 Device Information : IOPS MiB/s Average min max 00:44:59.688 PCIE (0000:00:10.0) NSID 1 from core 1: 40399.76 157.81 395.71 129.91 2948.38 00:44:59.688 ======================================================== 00:44:59.688 Total : 40399.76 157.81 395.71 129.91 2948.38 00:44:59.688 00:44:59.688 Initializing NVMe Controllers 00:44:59.688 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:59.688 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:59.688 Initialization complete. Launching workers. 
00:44:59.688 ======================================================== 00:44:59.688 Latency(us) 00:44:59.688 Device Information : IOPS MiB/s Average min max 00:44:59.688 PCIE (0000:00:10.0) NSID 1 from core 0: 40010.67 156.29 399.55 127.24 2792.59 00:44:59.688 ======================================================== 00:44:59.688 Total : 40010.67 156.29 399.55 127.24 2792.59 00:44:59.688 00:45:01.585 Initializing NVMe Controllers 00:45:01.585 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:45:01.585 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:45:01.585 Initialization complete. Launching workers. 00:45:01.585 ======================================================== 00:45:01.585 Latency(us) 00:45:01.585 Device Information : IOPS MiB/s Average min max 00:45:01.585 PCIE (0000:00:10.0) NSID 1 from core 2: 18748.18 73.24 853.00 138.67 24750.29 00:45:01.585 ======================================================== 00:45:01.585 Total : 18748.18 73.24 853.00 138.67 24750.29 00:45:01.585 00:45:01.585 21:58:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 174270 00:45:01.585 21:58:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 174271 00:45:01.585 00:45:01.585 real 0m11.139s 00:45:01.585 user 0m18.546s 00:45:01.585 sys 0m1.007s 00:45:01.585 21:58:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:01.585 21:58:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:45:01.585 ************************************ 00:45:01.585 END TEST nvme_multi_secondary 00:45:01.585 ************************************ 00:45:01.585 21:58:34 nvme -- common/autotest_common.sh@1142 -- # return 0 00:45:01.585 21:58:34 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:45:01.585 21:58:34 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:45:01.585 21:58:34 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/173470 ]] 00:45:01.585 21:58:34 nvme -- common/autotest_common.sh@1088 -- # kill 173470 00:45:01.585 21:58:34 nvme -- common/autotest_common.sh@1089 -- # wait 173470 00:45:01.585 [2024-07-15 21:58:34.804911] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 174121) is not found. Dropping the request. 00:45:01.586 [2024-07-15 21:58:34.805088] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 174121) is not found. Dropping the request. 00:45:01.586 [2024-07-15 21:58:34.805127] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 174121) is not found. Dropping the request. 00:45:01.586 [2024-07-15 21:58:34.805193] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 174121) is not found. Dropping the request. 
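The nvme_multi_secondary runs traced above start several spdk_nvme_perf instances against the same controller: the -i 0 flag puts them in one shared-memory group so the secondary processes can attach to the primary's controller, while the core masks (-c 0x1 / 0x2 / 0x4) keep each instance polling on its own core. A rough stand-alone sketch of that pattern, reusing the flags from the trace; the backgrounding and ordering here are a simplification of what nvme.sh actually does:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &    # longer-running instance pinned to core 0
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &    # second instance pinned to core 1
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4      # third instance pinned to core 2, in the foreground
  wait                                                # reap the backgrounded instances and their result tables

Keeping the core masks disjoint is the point of the exercise: each process polls from its own core, so the instances share the device without contending for the same reactor cores.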
00:45:01.844 21:58:35 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:45:01.844 21:58:35 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:45:01.844 21:58:35 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:45:01.844 21:58:35 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:01.844 21:58:35 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:01.844 21:58:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:01.844 ************************************ 00:45:01.844 START TEST bdev_nvme_reset_stuck_adm_cmd 00:45:01.844 ************************************ 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:45:01.844 * Looking for test storage... 00:45:01.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:01.844 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:45:02.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
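The get_first_nvme_bdf expansion traced just above reduces to asking gen_nvme.sh for the bdev_nvme configuration it would generate and pulling the PCI addresses out with jq. A condensed sketch of that helper pair, written against the same paths as the trace (a paraphrase, not the verbatim autotest_common.sh code; the error message is an assumption):

  get_nvme_bdfs() {
      local bdfs
      bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} == 0 )) && { echo "No NVMe drives found" >&2; return 1; }
      printf '%s\n' "${bdfs[@]}"
  }

  get_first_nvme_bdf() {
      local bdfs=($(get_nvme_bdfs))
      echo "${bdfs[0]}"      # resolves to 0000:00:10.0 in this run
  }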
00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=174419 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 174419 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 174419 ']' 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:02.104 21:58:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:02.104 [2024-07-15 21:58:35.368463] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:45:02.104 [2024-07-15 21:58:35.368789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174419 ] 00:45:02.368 [2024-07-15 21:58:35.558126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:02.634 [2024-07-15 21:58:35.790955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:02.634 [2024-07-15 21:58:35.791145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:45:02.634 [2024-07-15 21:58:35.791337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:02.634 [2024-07-15 21:58:35.791350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:45:03.573 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:03.573 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:45:03.573 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:45:03.573 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:03.573 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:03.573 nvme0n1 00:45:03.573 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:03.573 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:45:03.573 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_qotyw.txt 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:03.574 true 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721080716 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=174469 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:45:03.574 21:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:06.101 [2024-07-15 21:58:38.923823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:45:06.101 [2024-07-15 21:58:38.926955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:45:06.101 [2024-07-15 21:58:38.927060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:45:06.101 [2024-07-15 21:58:38.927118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:06.101 [2024-07-15 21:58:38.929640] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:45:06.101 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 174469 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 174469 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 174469 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.101 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:06.102 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.102 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:45:06.102 21:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_qotyw.txt 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_qotyw.txt 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 174419 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 174419 ']' 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 174419 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174419 00:45:06.102 killing process with pid 174419 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174419' 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 174419 00:45:06.102 21:58:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 174419 00:45:09.381 21:58:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:45:09.381 21:58:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:45:09.382 ************************************ 00:45:09.382 END TEST bdev_nvme_reset_stuck_adm_cmd 00:45:09.382 ************************************ 00:45:09.382 00:45:09.382 real 0m7.025s 00:45:09.382 user 0m24.536s 00:45:09.382 sys 0m0.671s 00:45:09.382 21:58:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:09.382 21:58:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:45:09.382 21:58:42 nvme -- common/autotest_common.sh@1142 -- # return 0 00:45:09.382 
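To recap the tail of this test as it appears in the trace above: the injected admin-command error (--sct 0 --sc 1 on opcode 10) is verified by reading the completion the test saved to /tmp/err_inj_qotyw.txt with jq -r .cpl, base64-decoding it, extracting the status-code and status-code-type fields (0x1 and 0x0), and comparing them with the injected values; the measured reset time (diff_time=2) is also checked against test_timeout=5. Below is a minimal stand-alone sketch of that decode, using the same base64/hexdump pipeline visible in the trace; it is not the test's own base64_decode_bits helper, and the variable names are only illustrative:

  cpl_b64='AAAAAAAAAAAAAAAAAAACAA=='                  # the 16-byte CQE captured above
  bytes=($(base64 -d <(printf '%s' "$cpl_b64") | hexdump -ve '/1 "0x%02x\n"'))
  status=$(( (${bytes[15]} << 8) | ${bytes[14]} ))    # CQE dword 3, bits 16-31 (phase + status field)
  sc=$(( (status >> 1) & 0xff ))                      # Status Code
  sct=$(( (status >> 9) & 0x7 ))                      # Status Code Type
  printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"            # prints sc=0x1 sct=0x0, matching the injection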
21:58:42 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:45:09.382 21:58:42 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:45:09.382 21:58:42 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:09.382 21:58:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:09.382 21:58:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:09.382 ************************************ 00:45:09.382 START TEST nvme_fio 00:45:09.382 ************************************ 00:45:09.382 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:45:09.382 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:45:09.382 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:45:09.382 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:09.382 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:09.382 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:45:09.382 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:45:09.382 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:45:09.382 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:45:09.641 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:45:09.641 21:58:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in 
"${sanitizers[@]}" 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:45:09.641 21:58:42 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:45:09.641 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:45:09.641 fio-3.35 00:45:09.641 Starting 1 thread 00:45:16.212 00:45:16.212 test: (groupid=0, jobs=1): err= 0: pid=174623: Mon Jul 15 21:58:48 2024 00:45:16.212 read: IOPS=23.4k, BW=91.4MiB/s (95.9MB/s)(183MiB/2001msec) 00:45:16.212 slat (nsec): min=3885, max=73590, avg=4855.57, stdev=884.58 00:45:16.212 clat (usec): min=427, max=8194, avg=2724.96, stdev=343.09 00:45:16.212 lat (usec): min=436, max=8268, avg=2729.81, stdev=343.47 00:45:16.212 clat percentiles (usec): 00:45:16.212 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:45:16.212 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:45:16.212 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2999], 00:45:16.212 | 99.00th=[ 3490], 99.50th=[ 4883], 99.90th=[ 7635], 99.95th=[ 7767], 00:45:16.212 | 99.99th=[ 8094] 00:45:16.212 bw ( KiB/s): min=93112, max=95584, per=100.00%, avg=94309.33, stdev=1237.81, samples=3 00:45:16.212 iops : min=23278, max=23896, avg=23577.33, stdev=309.45, samples=3 00:45:16.212 write: IOPS=23.3k, BW=90.8MiB/s (95.3MB/s)(182MiB/2001msec); 0 zone resets 00:45:16.212 slat (nsec): min=4030, max=59665, avg=5202.00, stdev=945.23 00:45:16.212 clat (usec): min=460, max=8146, avg=2732.77, stdev=354.95 00:45:16.212 lat (usec): min=470, max=8153, avg=2737.97, stdev=355.37 00:45:16.212 clat percentiles (usec): 00:45:16.212 | 1.00th=[ 2147], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:45:16.212 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:45:16.212 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2999], 00:45:16.212 | 99.00th=[ 3523], 99.50th=[ 5145], 99.90th=[ 7635], 99.95th=[ 7767], 00:45:16.212 | 99.99th=[ 8029] 00:45:16.212 bw ( KiB/s): min=94000, max=94832, per=100.00%, avg=94298.67, stdev=462.99, samples=3 00:45:16.212 iops : min=23500, max=23708, avg=23574.67, stdev=115.75, samples=3 00:45:16.212 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:45:16.212 lat (msec) : 2=0.78%, 4=98.51%, 10=0.68% 00:45:16.212 cpu : usr=99.95%, sys=0.00%, ctx=6, majf=0, minf=36 00:45:16.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:45:16.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:16.213 issued rwts: total=46837,46535,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:45:16.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:16.213 00:45:16.213 Run status group 0 (all jobs): 00:45:16.213 READ: bw=91.4MiB/s (95.9MB/s), 91.4MiB/s-91.4MiB/s (95.9MB/s-95.9MB/s), io=183MiB (192MB), run=2001-2001msec 00:45:16.213 WRITE: bw=90.8MiB/s (95.3MB/s), 90.8MiB/s-90.8MiB/s (95.3MB/s-95.3MB/s), io=182MiB (191MB), run=2001-2001msec 00:45:16.213 ----------------------------------------------------- 00:45:16.213 Suppressions used: 00:45:16.213 count bytes template 00:45:16.213 1 32 /usr/src/fio/parse.c 00:45:16.213 ----------------------------------------------------- 00:45:16.213 00:45:16.213 ************************************ 00:45:16.213 END TEST nvme_fio 00:45:16.213 ************************************ 00:45:16.213 21:58:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:45:16.213 21:58:49 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:45:16.213 00:45:16.213 real 0m6.856s 00:45:16.213 user 0m4.419s 00:45:16.213 sys 0m4.591s 00:45:16.213 21:58:49 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:16.213 21:58:49 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:45:16.213 21:58:49 nvme -- common/autotest_common.sh@1142 -- # return 0 00:45:16.213 ************************************ 00:45:16.213 END TEST nvme 00:45:16.213 ************************************ 00:45:16.213 00:45:16.213 real 0m51.314s 00:45:16.213 user 2m12.656s 00:45:16.213 sys 0m12.463s 00:45:16.213 21:58:49 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:16.213 21:58:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:45:16.213 21:58:49 -- common/autotest_common.sh@1142 -- # return 0 00:45:16.213 21:58:49 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:45:16.213 21:58:49 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:45:16.213 21:58:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:16.213 21:58:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:16.213 21:58:49 -- common/autotest_common.sh@10 -- # set +x 00:45:16.213 ************************************ 00:45:16.213 START TEST nvme_scc 00:45:16.213 ************************************ 00:45:16.213 21:58:49 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:45:16.213 * Looking for test storage... 
00:45:16.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:16.213 21:58:49 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:16.213 21:58:49 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:16.213 21:58:49 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:16.213 21:58:49 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:16.213 21:58:49 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:16.213 21:58:49 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:16.213 21:58:49 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:16.213 21:58:49 nvme_scc -- paths/export.sh@5 -- # export PATH 00:45:16.213 21:58:49 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:45:16.213 21:58:49 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:45:16.213 21:58:49 nvme_scc -- cuse/common.sh@11 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:16.213 21:58:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:45:16.213 21:58:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:45:16.213 21:58:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:45:16.213 21:58:49 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:45:16.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:16.472 Waiting for block devices as requested 00:45:16.472 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:16.472 21:58:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:45:16.472 21:58:49 nvme_scc -- scripts/common.sh@15 -- # local i 00:45:16.472 21:58:49 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:45:16.472 21:58:49 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:45:16.472 21:58:49 nvme_scc -- scripts/common.sh@24 -- # return 0 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.472 21:58:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:45:16.733 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:45:16.733 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.733 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.733 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:45:16.734 21:58:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:45:16.734 21:58:49 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[avscc]="0"' 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:45:16.734 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[hmmaxd]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:45:16.735 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[active_power_workload]="-"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.736 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:45:16.737 21:58:49 
nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0n1[npda]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:45:16.737 21:58:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.737 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.738 21:58:49 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:45:16.738 21:58:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:45:16.738 21:58:49 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:45:16.738 21:58:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:45:16.738 
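The feature probe traced above boils down to two steps: parse the human-readable `nvme id-ctrl` output into a bash associative array, then test bit 8 of ONCS (0x15d on this controller), which advertises Simple Copy support. A minimal stand-alone sketch of that pattern, assuming the nvme-cli path and device name used in this run and the usual "field : value" output layout (which can vary between nvme-cli versions):

declare -A ctrl
while IFS=: read -r reg val; do
    # skip headers and blank lines that have no "field : value" shape
    [[ -n $reg && -n $val ]] || continue
    ctrl[${reg//[[:space:]]/}]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
# ONCS bit 8 = Simple Copy Command supported (set in 0x15d)
if (( ctrl[oncs] & 1 << 8 )); then
    echo "nvme0 supports Simple Copy (SCC)"
fi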
21:58:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:45:16.738 21:58:49 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:17.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:17.308 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:45:18.244 21:58:51 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:45:18.244 21:58:51 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:45:18.244 21:58:51 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:18.244 21:58:51 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:45:18.244 ************************************ 00:45:18.244 START TEST nvme_simple_copy 00:45:18.244 ************************************ 00:45:18.244 21:58:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:45:18.503 Initializing NVMe Controllers 00:45:18.503 Attaching to 0000:00:10.0 00:45:18.503 Controller supports SCC. Attached to 0000:00:10.0 00:45:18.503 Namespace ID: 1 size: 5GB 00:45:18.503 Initialization complete. 00:45:18.503 00:45:18.503 Controller QEMU NVMe Ctrl (12340 ) 00:45:18.503 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:45:18.503 Namespace Block Size:4096 00:45:18.503 Writing LBAs 0 to 63 with Random Data 00:45:18.503 Copied LBAs from 0 - 63 to the Destination LBA 256 00:45:18.503 LBAs matching Written Data: 64 00:45:18.503 00:45:18.503 real 0m0.305s 00:45:18.503 user 0m0.106s 00:45:18.503 sys 0m0.102s 00:45:18.503 21:58:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:18.503 21:58:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:45:18.503 ************************************ 00:45:18.503 END TEST nvme_simple_copy 00:45:18.503 ************************************ 00:45:18.503 21:58:51 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:45:18.503 ************************************ 00:45:18.503 END TEST nvme_scc 00:45:18.503 ************************************ 00:45:18.503 00:45:18.503 real 0m2.700s 00:45:18.503 user 0m0.725s 00:45:18.503 sys 0m1.867s 00:45:18.503 21:58:51 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:18.503 21:58:51 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:45:18.762 21:58:51 -- common/autotest_common.sh@1142 -- # return 0 00:45:18.762 21:58:51 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:45:18.762 21:58:51 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:45:18.762 21:58:51 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:45:18.762 21:58:51 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:45:18.762 21:58:51 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:45:18.762 21:58:51 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:45:18.762 21:58:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:18.762 21:58:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:18.762 21:58:51 -- common/autotest_common.sh@10 -- # set +x 00:45:18.762 ************************************ 00:45:18.762 START TEST nvme_rpc 00:45:18.762 ************************************ 00:45:18.762 21:58:51 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:45:18.762 * Looking for 
test storage... 00:45:18.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:18.762 21:58:52 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:18.762 21:58:52 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:45:18.762 21:58:52 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:45:18.762 21:58:52 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:45:18.762 21:58:52 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:45:18.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:18.763 21:58:52 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:45:18.763 21:58:52 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=175165 00:45:18.763 21:58:52 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:45:18.763 21:58:52 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:45:18.763 21:58:52 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 175165 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 175165 ']' 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:18.763 21:58:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:45:19.021 [2024-07-15 21:58:52.183078] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
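The bdf discovery traced above (get_first_nvme_bdf) simply asks gen_nvme.sh for an NVMe bdev config and pulls the transport addresses out with jq. A condensed sketch of that chain, using the repo path as it appears in this log:

rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf 'first NVMe bdf: %s\n' "${bdfs[0]}"      # 0000:00:10.0 in this run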
00:45:19.021 [2024-07-15 21:58:52.183283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175165 ] 00:45:19.021 [2024-07-15 21:58:52.344967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:19.280 [2024-07-15 21:58:52.571012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:19.280 [2024-07-15 21:58:52.571020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:20.217 21:58:53 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:20.217 21:58:53 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:45:20.217 21:58:53 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:45:20.475 Nvme0n1 00:45:20.475 21:58:53 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:45:20.475 21:58:53 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:45:20.734 request: 00:45:20.734 { 00:45:20.734 "bdev_name": "Nvme0n1", 00:45:20.734 "filename": "non_existing_file", 00:45:20.734 "method": "bdev_nvme_apply_firmware", 00:45:20.734 "req_id": 1 00:45:20.734 } 00:45:20.734 Got JSON-RPC error response 00:45:20.734 response: 00:45:20.734 { 00:45:20.734 "code": -32603, 00:45:20.734 "message": "open file failed." 00:45:20.734 } 00:45:20.734 21:58:53 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:45:20.734 21:58:53 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:45:20.734 21:58:53 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:45:20.994 21:58:54 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:45:20.994 21:58:54 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 175165 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 175165 ']' 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 175165 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 175165 00:45:20.994 killing process with pid 175165 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 175165' 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@967 -- # kill 175165 00:45:20.994 21:58:54 nvme_rpc -- common/autotest_common.sh@972 -- # wait 175165 00:45:24.284 ************************************ 00:45:24.284 END TEST nvme_rpc 00:45:24.284 ************************************ 00:45:24.284 00:45:24.284 real 0m5.044s 00:45:24.284 user 0m9.256s 00:45:24.284 sys 0m0.626s 00:45:24.284 21:58:56 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:24.284 21:58:56 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:45:24.284 21:58:56 -- common/autotest_common.sh@1142 -- # return 0 00:45:24.284 21:58:56 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts 
/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:45:24.284 21:58:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:24.284 21:58:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:24.284 21:58:56 -- common/autotest_common.sh@10 -- # set +x 00:45:24.284 ************************************ 00:45:24.284 START TEST nvme_rpc_timeouts 00:45:24.284 ************************************ 00:45:24.284 21:58:56 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:45:24.284 * Looking for test storage... 00:45:24.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:24.284 21:58:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:24.284 21:58:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_175258 00:45:24.284 21:58:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_175258 00:45:24.284 21:58:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=175286 00:45:24.284 21:58:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:45:24.284 21:58:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:45:24.284 21:58:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 175286 00:45:24.284 21:58:57 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 175286 ']' 00:45:24.284 21:58:57 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:24.284 21:58:57 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:24.284 21:58:57 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:24.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:24.284 21:58:57 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:24.284 21:58:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:45:24.284 [2024-07-15 21:58:57.191139] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:45:24.284 [2024-07-15 21:58:57.191818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175286 ] 00:45:24.284 [2024-07-15 21:58:57.365681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:24.284 [2024-07-15 21:58:57.584706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:24.284 [2024-07-15 21:58:57.584713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:25.221 21:58:58 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:25.221 21:58:58 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:45:25.221 21:58:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:45:25.221 Checking default timeout settings: 00:45:25.221 21:58:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:45:25.480 Making settings changes with rpc: 00:45:25.480 21:58:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:45:25.480 21:58:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:45:25.739 Check default vs. modified settings: 00:45:25.739 21:58:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:45:25.739 21:58:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_175258 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_175258 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:25.998 Setting action_on_timeout is changed as expected. 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
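The sequence traced above produces the two snapshots that the rest of the test diffs: save the default config, change the NVMe timeout options over JSON-RPC, save again. The redirections to the tmpfiles are not visible in the xtrace (the later grep over them implies them), and the suffix below is a stand-in for the pid-based names in the log; with spdk_tgt already listening as started above, the essence is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
default=/tmp/settings_default_$$
modified=/tmp/settings_modified_$$
"$rpc" save_config > "$default"                      # "Checking default timeout settings:"
"$rpc" bdev_nvme_set_options --timeout-us=12000000 \
       --timeout-admin-us=24000000 --action-on-timeout=abort
"$rpc" save_config > "$modified"                     # "Check default vs. modified settings:"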
00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_175258 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_175258 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:45:25.998 Setting timeout_us is changed as expected. 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_175258 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_175258 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:45:25.998 Setting timeout_admin_us is changed as expected. 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
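Each setting is then compared with the same grep/awk/sed chain seen in the trace: pull the matching line from each snapshot, keep the value field, strip punctuation, and fail if the two values are still equal. Written as a loop (the variable names are mine; the pipeline is the one from the log, and $default/$modified are the snapshots saved in the previous step):

default=/tmp/settings_default_$$                     # snapshots written above
modified=/tmp/settings_modified_$$
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" "$default" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ $before == "$after" ]]; then
        echo "ERROR: $setting was not changed" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done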
00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_175258 /tmp/settings_modified_175258 00:45:25.998 21:58:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 175286 00:45:25.998 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 175286 ']' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 175286 00:45:25.998 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:45:25.998 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:25.998 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 175286 00:45:26.257 killing process with pid 175286 00:45:26.257 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:26.257 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:26.257 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 175286' 00:45:26.257 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 175286 00:45:26.257 21:58:59 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 175286 00:45:28.794 RPC TIMEOUT SETTING TEST PASSED. 00:45:28.794 21:59:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:45:28.794 00:45:28.794 real 0m5.153s 00:45:28.794 user 0m9.709s 00:45:28.794 sys 0m0.562s 00:45:28.794 21:59:02 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:28.794 21:59:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:45:28.794 ************************************ 00:45:28.794 END TEST nvme_rpc_timeouts 00:45:28.794 ************************************ 00:45:29.052 21:59:02 -- common/autotest_common.sh@1142 -- # return 0 00:45:29.052 21:59:02 -- spdk/autotest.sh@243 -- # uname -s 00:45:29.052 21:59:02 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:45:29.052 21:59:02 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:45:29.052 21:59:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:29.052 21:59:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:29.052 21:59:02 -- common/autotest_common.sh@10 -- # set +x 00:45:29.052 ************************************ 00:45:29.052 START TEST sw_hotplug 00:45:29.052 ************************************ 00:45:29.052 21:59:02 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:45:29.052 * Looking for test storage... 
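The teardown traced above is the stock killprocess helper: check that the pid is still alive, refuse to kill a sudo wrapper, log, kill, and wait for the reactor to exit. A condensed sketch of that visible behaviour only (the real helper in autotest_common.sh carries more retry and error handling than the trace shows):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == sudo ]] && return 1                   # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it if it is our child
}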
00:45:29.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:45:29.052 21:59:02 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:29.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:29.620 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:45:30.555 21:59:03 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:45:30.556 21:59:03 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:45:30.556 21:59:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:45:30.556 21:59:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@230 -- # local class 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@15 -- # local i 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:45:30.556 21:59:03 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:45:30.556 21:59:03 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:45:30.556 21:59:03 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:45:30.556 21:59:03 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:45:30.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:30.821 Waiting for block devices as requested 00:45:30.821 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:31.088 21:59:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:45:31.089 21:59:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:31.348 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:45:31.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:45:31.608 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:45:32.545 21:59:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=175877 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:45:32.545 21:59:05 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:45:32.545 21:59:05 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:45:32.545 21:59:05 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:45:32.545 21:59:05 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:45:32.545 21:59:05 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:45:32.545 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:45:32.546 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:45:32.546 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:45:32.546 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:45:32.546 21:59:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:45:32.805 Initializing NVMe Controllers 00:45:32.805 Attaching to 0000:00:10.0 00:45:32.805 Attached to 0000:00:10.0 00:45:32.805 Initialization complete. Starting I/O... 
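The nvme_in_userspace walk traced above finds NVMe controllers purely from PCI class codes: class 01 (mass storage), subclass 08 (NVM), programming interface 02 (NVM Express), which lspci -mm -n renders as class "0108" with a -p02 marker. The same pipeline on its own (the full helper additionally filters each bdf through pci_can_use/PCI_ALLOWED, skipped here):

# list candidate NVMe bdfs by PCI class, mirroring iter_all_pci_class_code 01 08 02
lspci -mm -n -D \
    | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'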
00:45:32.805 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:45:32.805 00:45:33.767 QEMU NVMe Ctrl (12340 ): 3080 I/Os completed (+3080) 00:45:33.767 00:45:35.146 QEMU NVMe Ctrl (12340 ): 7335 I/Os completed (+4255) 00:45:35.146 00:45:36.085 QEMU NVMe Ctrl (12340 ): 11693 I/Os completed (+4358) 00:45:36.085 00:45:37.023 QEMU NVMe Ctrl (12340 ): 15807 I/Os completed (+4114) 00:45:37.023 00:45:37.962 QEMU NVMe Ctrl (12340 ): 20053 I/Os completed (+4246) 00:45:37.962 00:45:38.531 21:59:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:38.531 21:59:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:38.531 21:59:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:38.789 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/device 00:45:38.789 EAL: Scan for (pci) bus failed. 00:45:38.789 [2024-07-15 21:59:11.918508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:45:38.789 Controller removed: QEMU NVMe Ctrl (12340 ) 00:45:38.789 [2024-07-15 21:59:11.919567] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:38.789 [2024-07-15 21:59:11.919674] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:38.789 [2024-07-15 21:59:11.919733] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:38.789 [2024-07-15 21:59:11.919781] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:38.789 21:59:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:45:38.789 21:59:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:38.789 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:45:38.789 [2024-07-15 21:59:11.926945] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:38.789 [2024-07-15 21:59:11.927035] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:38.789 [2024-07-15 21:59:11.927121] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:38.789 [2024-07-15 21:59:11.927207] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:38.789 21:59:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:38.789 21:59:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:38.789 21:59:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:38.789 21:59:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:38.789 21:59:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:38.789 21:59:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:38.789 Attaching to 0000:00:10.0 00:45:38.789 Attached to 0000:00:10.0 00:45:38.789 QEMU NVMe Ctrl (12340 ): 120 I/Os completed (+120) 00:45:38.789 00:45:39.724 QEMU NVMe Ctrl (12340 ): 4055 I/Os completed (+3935) 00:45:39.724 00:45:41.097 QEMU NVMe Ctrl (12340 ): 8224 I/Os completed (+4169) 00:45:41.097 00:45:42.031 QEMU NVMe Ctrl (12340 ): 12392 I/Os completed (+4168) 00:45:42.031 00:45:42.969 QEMU NVMe Ctrl (12340 ): 16485 I/Os completed (+4093) 00:45:42.969 00:45:43.905 QEMU NVMe Ctrl (12340 ): 20203 I/Os completed (+3718) 00:45:43.905 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:44.843 21:59:18 sw_hotplug -- 
nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:44.843 [2024-07-15 21:59:18.062404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:45:44.843 Controller removed: QEMU NVMe Ctrl (12340 ) 00:45:44.843 [2024-07-15 21:59:18.065744] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.843 [2024-07-15 21:59:18.065989] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.843 [2024-07-15 21:59:18.066142] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.843 [2024-07-15 21:59:18.066243] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.843 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:45:44.843 [2024-07-15 21:59:18.082292] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.843 [2024-07-15 21:59:18.082431] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.843 [2024-07-15 21:59:18.082484] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.843 [2024-07-15 21:59:18.082564] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:44.843 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:44.843 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:45.103 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:45.103 21:59:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:45.103 Attaching to 0000:00:10.0 00:45:45.103 Attached to 0000:00:10.0 00:45:46.064 QEMU NVMe Ctrl (12340 ): 3109 I/Os completed (+3109) 00:45:46.064 00:45:47.001 QEMU NVMe Ctrl (12340 ): 7167 I/Os completed (+4058) 00:45:47.001 00:45:47.937 QEMU NVMe Ctrl (12340 ): 11115 I/Os completed (+3948) 00:45:47.937 00:45:48.874 QEMU NVMe Ctrl (12340 ): 15108 I/Os completed (+3993) 00:45:48.874 00:45:49.812 QEMU NVMe Ctrl (12340 ): 19070 I/Os completed (+3962) 00:45:49.812 00:45:50.762 QEMU NVMe Ctrl (12340 ): 23146 I/Os completed (+4076) 00:45:50.762 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:51.021 [2024-07-15 21:59:24.243111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
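Each hotplug iteration above detaches the controller, waits, and brings it back. The xtrace shows the `echo 1` writes but not their redirection targets; /sys/bus/pci/rescan is confirmed by the EXIT trap later in this log, and the per-device remove attribute is the standard kernel interface, so treat the paths below as an informed reconstruction of the general remove/rescan cycle rather than a quote from the script:

bdf=0000:00:10.0                                  # the controller exercised in this run
echo 1 > "/sys/bus/pci/devices/${bdf}/remove"     # soft-remove (root required); the driver detaches and in-flight I/O is aborted
sleep 6                                           # comparable to the test's 6 second hotplug_wait
echo 1 > /sys/bus/pci/rescan                      # re-enumerate the bus; the controller re-appears and can be re-bound and re-attached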
00:45:51.021 Controller removed: QEMU NVMe Ctrl (12340 ) 00:45:51.021 [2024-07-15 21:59:24.243964] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.021 [2024-07-15 21:59:24.244025] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.021 [2024-07-15 21:59:24.244065] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.021 [2024-07-15 21:59:24.244104] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.021 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:45:51.021 [2024-07-15 21:59:24.250408] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.021 [2024-07-15 21:59:24.250471] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.021 [2024-07-15 21:59:24.250496] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.021 [2024-07-15 21:59:24.250525] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:51.021 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:51.279 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:51.279 21:59:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:51.279 Attaching to 0000:00:10.0 00:45:51.279 Attached to 0000:00:10.0 00:45:51.279 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:45:51.279 [2024-07-15 21:59:24.440399] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:45:57.836 21:59:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:45:57.836 21:59:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:57.836 21:59:30 sw_hotplug -- common/autotest_common.sh@715 -- # time=24.54 00:45:57.836 21:59:30 sw_hotplug -- common/autotest_common.sh@716 -- # echo 24.54 00:45:57.836 21:59:30 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:45:57.836 21:59:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.54 00:45:57.836 21:59:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.54 1 00:45:57.836 remove_attach_helper took 24.54s to complete (handling 1 nvme drive(s)) 21:59:30 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 175877 00:46:03.110 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (175877) - No such process 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 175877 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=176247 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:46:03.110 21:59:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 176247 00:46:03.110 21:59:36 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 176247 ']' 00:46:03.110 21:59:36 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:03.110 21:59:36 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:03.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:03.110 21:59:36 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:03.110 21:59:36 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:03.110 21:59:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:03.368 [2024-07-15 21:59:36.511445] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:46:03.368 [2024-07-15 21:59:36.511598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176247 ] 00:46:03.368 [2024-07-15 21:59:36.653425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:03.626 [2024-07-15 21:59:36.871030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:46:04.561 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:04.561 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:46:04.561 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:46:04.561 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:46:04.561 21:59:37 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:46:04.561 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:46:04.562 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:46:04.562 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:46:04.562 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:46:04.562 21:59:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:11.143 21:59:43 
sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:11.143 21:59:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:11.143 21:59:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:11.143 21:59:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:11.143 [2024-07-15 21:59:43.899403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:46:11.143 [2024-07-15 21:59:43.901415] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:11.143 [2024-07-15 21:59:43.901587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:11.143 [2024-07-15 21:59:43.901674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:11.143 [2024-07-15 21:59:43.901795] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:11.143 [2024-07-15 21:59:43.901894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:11.143 [2024-07-15 21:59:43.901980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:11.143 [2024-07-15 21:59:43.902081] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:11.143 [2024-07-15 21:59:43.902162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:11.143 [2024-07-15 21:59:43.902235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:11.143 [2024-07-15 21:59:43.902314] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:11.143 [2024-07-15 21:59:43.902384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:11.143 [2024-07-15 21:59:43.902465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:11.143 21:59:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:11.143 21:59:44 sw_hotplug -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:46:11.143 21:59:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:11.143 21:59:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:11.143 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:11.402 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:11.402 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:11.402 21:59:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:18.022 21:59:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:18.022 21:59:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:18.022 21:59:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:18.022 21:59:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:18.022 21:59:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:18.022 21:59:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:18.022 21:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:18.022 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:18.022 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:18.022 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:18.022 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:18.022 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:18.022 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:18.022 21:59:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:18.022 21:59:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 
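From this point the test runs in target mode: instead of watching the kernel, it polls the SPDK target over RPC until the removed controller's bdev disappears. Reconstructed from the traced commands at sw_hotplug.sh lines 12-13 and 50-51 (the function bodies in the repository may differ in minor details), the helper and the wait loop are roughly:

    bdev_bdfs() {
        # PCI addresses of every NVMe-backed bdev the target still reports;
        # the trace shows jq reading the RPC output via process substitution (/dev/fd/63)
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done

Once bdev_get_bdevs stops listing 0000:00:10.0, the check at line 50 evaluates (( 0 > 0 )) and the script falls through to the rebind echoes (1, uio_pci_generic, the BDF, '').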
00:46:18.022 21:59:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:18.022 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:18.022 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:18.590 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:18.590 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:18.590 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:18.590 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:18.590 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:18.590 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:18.590 21:59:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:18.590 21:59:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:18.590 21:59:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:18.590 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:18.590 21:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:19.156 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:19.156 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:19.156 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:19.156 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:19.156 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:19.156 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:19.156 21:59:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:19.156 21:59:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:19.156 21:59:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:19.156 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:19.156 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:19.724 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:19.724 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:19.724 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:19.724 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:19.724 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:19.724 21:59:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:19.724 21:59:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:19.724 21:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:19.724 21:59:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:19.724 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:19.724 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:20.292 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:20.292 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:20.292 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:20.292 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:20.292 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:20.292 21:59:53 sw_hotplug -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:46:20.292 21:59:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:20.292 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:20.292 21:59:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:20.292 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:20.292 21:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:20.860 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:20.860 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:20.860 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:20.860 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:20.860 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:20.860 21:59:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:20.860 21:59:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:20.860 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:20.860 21:59:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:20.860 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:20.860 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:21.428 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:21.428 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:21.428 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:21.428 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:21.428 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:21.428 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:21.428 21:59:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:21.428 21:59:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:21.428 21:59:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:21.428 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:21.428 21:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:21.994 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:21.995 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:21.995 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:21.995 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:21.995 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:21.995 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:21.995 21:59:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:21.995 21:59:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:21.995 21:59:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:21.995 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:21.995 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:22.562 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:22.563 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:22.563 21:59:55 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:22.563 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:22.563 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:22.563 21:59:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:22.563 21:59:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:22.563 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:22.563 21:59:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:22.563 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:22.563 21:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:23.131 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:23.131 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:23.131 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:23.131 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:23.131 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:23.131 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:23.131 21:59:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:23.131 21:59:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:23.131 21:59:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:23.131 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:23.131 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:23.700 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:23.701 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:23.701 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:23.701 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:23.701 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:23.701 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:23.701 21:59:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:23.701 21:59:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:23.701 21:59:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:23.701 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:23.701 21:59:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:24.269 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:24.269 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:24.269 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:24.269 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:24.269 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:24.269 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:24.269 21:59:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:24.269 21:59:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:24.269 21:59:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:24.269 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:24.269 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@50 
-- # sleep 0.5 00:46:24.837 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:24.837 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:24.837 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:24.837 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:24.837 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:24.837 21:59:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:24.837 21:59:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:24.837 21:59:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:24.837 21:59:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:24.837 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:24.838 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:25.418 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:25.418 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:25.418 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:25.418 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:25.418 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:25.418 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:25.418 21:59:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:25.418 21:59:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:25.418 21:59:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:25.418 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:25.418 21:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:25.699 [2024-07-15 21:59:59.070435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:46:25.699 [2024-07-15 21:59:59.071856] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:25.699 [2024-07-15 21:59:59.071913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:25.699 [2024-07-15 21:59:59.071939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:25.699 [2024-07-15 21:59:59.071981] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:25.699 [2024-07-15 21:59:59.072001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:25.699 [2024-07-15 21:59:59.072033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:25.699 [2024-07-15 21:59:59.072052] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:25.699 [2024-07-15 21:59:59.072070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:25.699 [2024-07-15 21:59:59.072085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:25.699 [2024-07-15 21:59:59.072121] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:25.699 [2024-07-15 21:59:59.072145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:25.699 [2024-07-15 21:59:59.072176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:25.958 21:59:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:25.958 21:59:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:25.958 21:59:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:25.958 21:59:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:32.531 22:00:05 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:32.531 22:00:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:32.531 22:00:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:32.531 22:00:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:32.531 22:00:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:32.531 22:00:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:32.531 22:00:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:32.531 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:32.791 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:32.791 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:32.791 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:32.791 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:32.791 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:32.791 22:00:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:32.791 22:00:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:32.791 22:00:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:32.791 22:00:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:32.791 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:32.791 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:33.360 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:33.360 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:33.360 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:33.360 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:33.360 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:33.360 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:33.360 22:00:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.360 22:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:33.360 22:00:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:46:33.360 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:33.360 22:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:33.928 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:33.928 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:33.928 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:33.928 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:33.928 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:33.928 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:33.928 22:00:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.928 22:00:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:33.928 22:00:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:33.928 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:33.928 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:34.496 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:34.496 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:34.496 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:34.496 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:34.496 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:34.496 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:34.496 22:00:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:34.496 22:00:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:34.496 22:00:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:34.496 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:34.496 22:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:35.062 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:35.062 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:35.062 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:35.062 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:35.062 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:35.062 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:35.062 22:00:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:35.062 22:00:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:35.062 22:00:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:35.062 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:35.062 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:35.632 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:35.632 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:35.632 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:35.632 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:35.632 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:35.632 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
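This target-mode phase relies on hotplug detection having been switched on over RPC at sw_hotplug.sh line 115 above (rpc_cmd bdev_nvme_set_hotplug -e): with the hotplug poller enabled, the nvme bdev module notices the surprise removal on its own and drops the bdev, which is exactly what the loop here waits for. Outside the harness the same RPC can be issued with SPDK's rpc.py; a minimal example, using the default RPC socket shown earlier in the waitforlisten message:

    # enable the nvme bdev hotplug poller (what the test's rpc_cmd wrapper does)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e
    # disable it again, as the test does further down (sw_hotplug.sh lines 119-120)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -d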
00:46:35.632 22:00:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:35.632 22:00:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:35.632 22:00:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:35.632 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:35.632 22:00:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:36.198 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:36.198 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:36.198 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:36.198 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:36.198 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:36.198 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:36.198 22:00:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:36.198 22:00:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:36.198 22:00:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:36.198 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:36.198 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:36.764 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:36.764 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:36.764 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:36.764 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:36.764 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:36.764 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:36.764 22:00:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:36.764 22:00:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:36.764 22:00:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:36.764 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:36.764 22:00:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:37.332 22:00:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:37.332 22:00:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:37.332 22:00:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:37.332 22:00:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:37.332 22:00:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:37.332 22:00:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:37.332 22:00:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.332 22:00:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:37.332 22:00:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.332 22:00:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:37.332 22:00:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:37.898 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:37.898 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:37.898 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:37.898 22:00:11 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:46:37.898 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:37.898 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:37.898 22:00:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.898 22:00:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:37.898 22:00:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.898 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:37.898 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:38.464 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:38.464 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:38.464 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:38.464 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:38.464 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:38.464 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:38.464 22:00:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:38.464 22:00:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:38.464 22:00:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:38.464 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:38.464 22:00:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:39.055 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:39.055 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:39.055 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:39.055 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:39.055 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:39.055 22:00:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.055 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:39.055 22:00:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:39.055 22:00:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.055 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:39.055 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:39.313 [2024-07-15 22:00:12.644535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:46:39.313 [2024-07-15 22:00:12.645887] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:39.313 [2024-07-15 22:00:12.645930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:39.313 [2024-07-15 22:00:12.645955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:39.313 [2024-07-15 22:00:12.645987] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:39.313 [2024-07-15 22:00:12.646025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:39.313 [2024-07-15 22:00:12.646048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:39.313 [2024-07-15 22:00:12.646063] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:39.313 [2024-07-15 22:00:12.646087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:39.313 [2024-07-15 22:00:12.646103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:39.313 [2024-07-15 22:00:12.646123] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:39.313 [2024-07-15 22:00:12.646154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:39.313 [2024-07-15 22:00:12.646174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:39.570 22:00:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.570 22:00:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:39.570 22:00:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:39.570 22:00:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:46.133 22:00:18 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@715 -- # time=41.12 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@716 -- # echo 41.12 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=41.12 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 41.12 1 00:46:46.133 remove_attach_helper took 41.12s to complete (handling 1 nvme drive(s)) 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:46:46.133 22:00:18 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:46:46.133 22:00:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:46:52.694 22:00:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:52.694 22:00:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:52.694 22:00:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:52.694 22:00:25 sw_hotplug 
-- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:52.694 22:00:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.694 22:00:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:52.694 22:00:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:52.694 22:00:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.694 22:00:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:52.694 22:00:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:52.694 22:00:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:52.952 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:52.952 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:52.952 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:52.952 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:52.952 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:52.952 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:52.952 22:00:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.952 22:00:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:52.952 22:00:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.952 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:52.952 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:53.515 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:53.515 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:53.515 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:53.515 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:53.515 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:53.515 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:53.515 22:00:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:53.515 22:00:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:53.516 22:00:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:53.516 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:53.516 22:00:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 
0.5 00:46:54.134 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:54.134 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:54.134 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:54.134 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:54.134 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:54.134 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:54.134 22:00:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:54.134 22:00:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:54.134 22:00:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:54.134 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:54.134 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:54.700 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:54.700 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:54.700 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:54.700 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:54.700 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:54.700 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:54.700 22:00:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:54.700 22:00:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:54.700 22:00:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:54.700 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:54.700 22:00:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:55.265 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:55.265 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:55.265 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:55.265 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:55.265 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:55.265 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:55.265 22:00:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:55.265 22:00:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:55.265 22:00:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:55.265 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:55.265 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:55.831 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:55.831 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:55.831 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:55.831 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:55.831 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:55.831 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:55.831 22:00:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:55.831 22:00:28 sw_hotplug -- common/autotest_common.sh@10 -- 
# set +x 00:46:55.831 22:00:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:55.831 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:55.831 22:00:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:56.400 22:00:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:56.400 22:00:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:56.400 22:00:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:56.400 22:00:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:56.400 22:00:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:56.400 22:00:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:56.400 22:00:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:56.400 22:00:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:56.400 22:00:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:56.400 22:00:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:56.400 22:00:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:56.970 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:56.970 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:56.970 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:56.970 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:56.970 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:56.970 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:56.970 22:00:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:56.970 22:00:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:56.970 22:00:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:56.970 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:56.970 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:57.229 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:57.229 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:57.229 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:57.229 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:57.229 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:57.229 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:57.229 22:00:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.229 22:00:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:57.495 22:00:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.495 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:57.495 22:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:58.069 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:58.069 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:58.069 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:58.069 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:58.069 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
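The xtrace repeating above and below is the removal-wait loop in test/nvme/sw_hotplug.sh: after the test surprise-removes the NVMe controller at 0000:00:10.0, it polls the SPDK target over RPC until no bdev reports that PCI address any more, printing "Still waiting for %s to be gone" every 0.5 s. Below is a minimal sketch of that loop, reconstructed from the traced commands (sw_hotplug.sh@12-@13 and @50-@51); the exact source layout may differ, and the <(...) process substitution is only inferred from the /dev/fd/63 argument seen in the jq trace:

    # bdev_bdfs: list the PCI addresses (BDFs) currently backing NVMe bdevs.
    # rpc_cmd is the autotest helper that forwards to scripts/rpc.py on the
    # running target; jq pulls .driver_specific.nvme[].pci_address out of the
    # bdev_get_bdevs JSON and sort -u de-duplicates (one bdev per namespace).
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # Poll until the removed controller no longer shows up; the condition is
    # logged as "(( 1 > 0 ))" while it is still present and "(( 0 > 0 ))" once
    # it is gone.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done

Each iteration costs one bdev_get_bdevs RPC plus a 0.5 s sleep, which is why the "Still waiting" lines in this section arrive roughly every half second.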
00:46:58.069 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:58.069 22:00:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:58.069 22:00:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:58.069 22:00:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:58.069 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:58.069 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:58.634 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:58.634 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:58.634 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:58.634 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:58.634 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:58.634 22:00:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:58.634 22:00:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:58.634 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:58.634 22:00:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:58.634 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:58.634 22:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:58.892 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:58.892 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:58.892 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:58.892 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:58.892 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:58.892 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:58.892 22:00:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:59.150 22:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:59.150 22:00:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:59.150 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:59.150 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:59.716 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:59.716 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:59.716 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:59.716 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:59.716 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:59.716 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:59.716 22:00:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:59.716 22:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:59.716 22:00:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:59.716 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:59.716 22:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:00.282 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:00.282 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:00.282 22:00:33 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:00.282 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:00.282 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:00.282 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:00.282 22:00:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:00.282 22:00:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:00.282 22:00:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:00.282 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:00.282 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:00.848 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:00.848 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:00.848 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:00.848 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:00.848 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:00.848 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:00.848 22:00:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:00.848 22:00:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:00.848 22:00:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:00.848 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:00.848 22:00:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:01.414 22:00:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:01.414 22:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:01.414 22:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:01.414 22:00:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:01.414 22:00:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:01.414 22:00:34 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:01.414 22:00:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:01.414 22:00:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:01.414 22:00:34 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:01.414 22:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:01.414 22:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:01.993 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:01.993 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:01.993 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:01.993 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:01.993 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:01.993 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:01.993 22:00:35 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:01.993 22:00:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:01.993 22:00:35 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:01.993 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:01.993 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@50 
-- # sleep 0.5 00:47:02.257 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:02.257 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:02.257 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:02.257 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:02.257 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:02.257 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:02.257 22:00:35 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:02.257 22:00:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:02.257 22:00:35 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:02.514 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:02.514 22:00:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:03.081 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:03.081 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:03.081 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:03.081 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:03.081 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:03.081 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:03.081 22:00:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:03.081 22:00:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:03.081 22:00:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:03.081 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:03.081 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:03.647 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:03.647 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:03.647 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:03.647 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:03.647 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:03.647 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:03.647 22:00:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:03.647 22:00:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:03.647 22:00:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:03.647 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:03.647 22:00:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:04.214 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:04.214 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:04.214 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:04.214 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:04.214 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:04.214 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:04.214 22:00:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:04.214 22:00:37 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:47:04.214 22:00:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:04.214 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:04.214 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:04.214 [2024-07-15 22:00:37.427694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:47:04.214 [2024-07-15 22:00:37.428852] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:04.214 [2024-07-15 22:00:37.428893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:04.214 [2024-07-15 22:00:37.428912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:04.214 [2024-07-15 22:00:37.428942] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:04.214 [2024-07-15 22:00:37.428960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:04.214 [2024-07-15 22:00:37.428996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:04.214 [2024-07-15 22:00:37.429021] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:04.214 [2024-07-15 22:00:37.429047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:04.214 [2024-07-15 22:00:37.429066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:04.214 [2024-07-15 22:00:37.429082] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:04.214 [2024-07-15 22:00:37.429100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:04.214 [2024-07-15 22:00:37.429116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:04.471 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:04.471 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:04.471 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:04.471 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:04.729 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:04.729 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:04.729 22:00:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:04.729 22:00:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:04.729 22:00:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:04.729 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:47:04.729 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:04.729 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:04.729 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:04.729 22:00:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:04.729 22:00:38 
sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:04.729 22:00:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:04.729 22:00:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:11.295 22:00:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.295 22:00:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:11.295 22:00:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:11.295 22:00:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.295 22:00:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:11.295 22:00:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:11.295 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:11.553 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:11.553 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:11.553 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:11.553 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:11.553 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:11.553 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:11.553 22:00:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.553 22:00:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:11.553 22:00:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.553 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:11.553 22:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:12.129 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:12.129 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:12.129 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:12.129 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:12.129 22:00:45 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:47:12.129 22:00:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.129 22:00:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:12.129 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:12.129 22:00:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.129 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:12.129 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:12.699 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:12.699 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:12.699 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:12.699 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:12.699 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:12.699 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:12.699 22:00:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.699 22:00:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:12.699 22:00:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.699 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:12.699 22:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:13.267 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:13.267 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:13.267 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:13.267 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:13.267 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:13.267 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:13.267 22:00:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.267 22:00:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:13.267 22:00:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.267 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:13.267 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:13.837 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:13.837 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:13.837 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:13.837 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:13.837 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:13.837 22:00:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:13.837 22:00:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.837 22:00:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:13.837 22:00:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.837 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:13.837 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:14.403 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:14.403 22:00:47 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:14.403 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:14.403 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:14.403 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:14.403 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:14.403 22:00:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.403 22:00:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:14.403 22:00:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.403 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:14.403 22:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:14.970 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:14.970 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:14.970 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:14.970 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:14.970 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:14.970 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:14.970 22:00:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.970 22:00:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:14.970 22:00:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.970 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:14.970 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:15.560 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:15.560 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:15.560 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:15.560 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:15.560 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:15.560 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:15.560 22:00:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.560 22:00:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:15.560 22:00:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:15.560 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:15.560 22:00:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:15.819 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:15.819 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:15.819 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:15.819 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:15.819 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:15.819 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:15.819 22:00:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.819 22:00:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:16.088 22:00:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.088 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 
1 > 0 )) 00:47:16.088 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:16.653 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:16.653 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:16.653 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:16.653 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:16.653 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:16.653 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:16.653 22:00:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.653 22:00:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:16.653 22:00:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.653 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:16.653 22:00:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:17.221 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:17.221 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:17.221 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:17.221 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:17.221 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:17.221 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:17.221 22:00:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:17.221 22:00:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:17.221 22:00:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:17.221 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:17.221 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:17.787 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:17.787 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:17.787 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:17.787 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:17.787 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:17.787 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:17.787 22:00:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:17.787 22:00:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:17.787 22:00:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:17.787 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:17.787 22:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:18.052 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:18.052 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:18.052 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:18.052 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:18.052 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:18.052 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:18.052 22:00:51 sw_hotplug -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:47:18.052 22:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:18.311 22:00:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:18.311 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:18.311 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:18.880 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:18.880 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:18.880 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:18.880 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:18.880 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:18.880 22:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:18.880 22:00:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:18.880 22:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:18.880 22:00:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:18.880 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:18.880 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:19.460 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:19.460 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:19.460 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:19.460 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:19.460 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:19.460 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:19.460 22:00:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:19.460 22:00:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:19.460 22:00:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:19.460 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:19.460 22:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:20.028 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:20.028 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:20.028 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:20.028 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:20.028 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:20.028 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:20.028 22:00:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:20.028 22:00:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:20.028 22:00:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:20.028 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:20.028 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:20.286 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:20.286 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:20.286 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:20.286 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:47:20.286 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:20.286 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:20.286 22:00:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:20.286 22:00:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:20.545 22:00:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:20.545 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:20.545 22:00:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:21.115 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:21.115 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:21.115 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:21.115 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:21.115 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:21.115 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:21.116 22:00:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:21.116 22:00:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:21.116 22:00:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:21.116 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:21.116 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:21.683 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:21.683 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:21.683 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:21.683 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:21.683 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:21.683 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:21.683 22:00:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:21.683 22:00:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:21.683 22:00:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:21.683 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:21.683 22:00:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:21.941 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:21.941 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:21.941 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:21.941 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:21.941 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:21.941 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:21.941 22:00:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:21.941 22:00:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:21.941 22:00:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:22.198 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:22.198 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:22.765 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:22.765 22:00:55 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:22.765 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:22.765 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:22.765 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:22.765 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:22.765 22:00:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:22.765 22:00:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:22.765 22:00:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:22.765 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:22.765 22:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:23.331 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:23.331 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:23.331 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:23.331 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:23.331 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:23.331 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:23.331 22:00:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:23.331 22:00:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:23.331 22:00:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:23.331 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:23.331 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:23.898 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:23.898 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:23.898 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:23.898 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:23.898 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:23.898 22:00:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:23.898 22:00:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:23.898 22:00:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:23.898 22:00:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:23.898 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:23.898 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:24.464 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:24.464 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:24.464 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:24.464 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:24.464 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:24.464 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:24.464 22:00:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:24.464 22:00:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:24.464 22:00:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:24.464 22:00:57 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:24.464 22:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:24.722 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:24.722 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:24.722 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:24.722 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:24.981 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:24.981 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:24.981 22:00:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:24.981 22:00:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:24.981 22:00:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:24.981 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:24.981 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:25.550 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:25.550 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:25.550 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:25.550 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:25.550 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:25.550 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:25.550 22:00:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:25.550 22:00:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:25.550 22:00:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:25.550 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:25.550 22:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:26.118 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:26.118 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:26.118 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:26.118 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:26.118 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:26.118 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:26.118 22:00:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:26.118 22:00:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:26.118 22:00:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:26.118 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:26.118 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:26.377 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:26.377 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:26.377 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:26.377 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:26.377 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:26.637 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:26.637 22:00:59 sw_hotplug -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:47:26.637 22:00:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:26.637 22:00:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:26.637 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:26.637 22:00:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:27.206 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:27.206 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:27.206 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:27.206 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:27.206 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:27.206 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:27.206 22:01:00 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:27.206 22:01:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:27.206 22:01:00 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:27.206 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:27.206 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:27.773 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:27.773 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:27.773 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:27.773 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:27.773 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:27.773 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:27.773 22:01:00 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:27.773 22:01:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:27.773 22:01:00 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:27.773 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:27.773 22:01:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:28.340 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:28.340 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:28.340 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:28.340 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:28.340 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:28.340 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:28.340 22:01:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:28.340 22:01:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:28.340 22:01:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:28.340 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:28.340 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:28.907 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:28.907 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:28.907 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:28.907 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 
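Once bdev_bdfs comes back empty (the "(( 0 > 0 ))" check right after the nvme_ctrlr_fail / aborted-command messages below), sw_hotplug.sh re-attaches the device and starts the next hotplug cycle, traced as sw_hotplug.sh@56-@71 and @38-@43. xtrace does not capture where the echoes are redirected, so every sysfs path in this sketch is an assumption about the usual Linux PCI hotplug interface rather than a value taken from the log; nvmes, hotplug_events and bdev_bdfs are the script's own variables and helpers:

    echo 1 > /sys/bus/pci/rescan                        # @56 (assumed target): re-enumerate the removed device
    for dev in "${nvmes[@]}"; do                        # @58: here just 0000:00:10.0
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: pin the userspace driver
        echo "$dev" > /sys/bus/pci/drivers_probe        # @60/@61 (assumed target): the trace echoes the BDF
                                                        #   twice, presumably to two different sysfs nodes
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override again
    done
    sleep 6                                             # @66: give hotplug and the app time to re-create the bdev

    bdfs=($(bdev_bdfs))                                 # @70: the controller must be visible again...
    [[ ${bdfs[*]} == "0000:00:10.0" ]]                  # @71: ...under the same BDF

    ((hotplug_events--))                                # @38: one surprise-removal cycle finished
    for dev in "${nvmes[@]}"; do                        # @39
        echo 1 > "/sys/bus/pci/devices/$dev/remove"     # @40 (assumed target): trigger the next removal,
    done                                                #   which the wait loop then polls on again

Each pass through this sequence is one hotplug_events iteration, which is why the whole pattern in this section (wait loop, controller-failure messages, re-attach, sleep 6, wait loop again) repeats essentially unchanged apart from timestamps.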
00:47:28.907 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:28.907 22:01:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:28.907 22:01:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:28.907 22:01:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:28.907 22:01:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:28.907 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:28.907 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:29.165 [2024-07-15 22:01:02.380076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:47:29.165 [2024-07-15 22:01:02.381078] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:29.165 [2024-07-15 22:01:02.381118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:29.165 [2024-07-15 22:01:02.381134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:29.165 [2024-07-15 22:01:02.381156] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:29.165 [2024-07-15 22:01:02.381168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:29.165 [2024-07-15 22:01:02.381179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:29.165 [2024-07-15 22:01:02.381190] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:29.165 [2024-07-15 22:01:02.381201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:29.165 [2024-07-15 22:01:02.381212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:29.165 [2024-07-15 22:01:02.381225] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:29.165 [2024-07-15 22:01:02.381236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:29.165 [2024-07-15 22:01:02.381246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:29.422 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:29.422 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:29.422 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:29.422 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:29.423 22:01:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:29.423 22:01:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:29.423 22:01:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:47:29.423 
22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:29.423 22:01:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:35.994 22:01:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.994 22:01:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:35.994 22:01:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:35.994 22:01:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.994 22:01:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:35.994 22:01:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:35.994 22:01:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:36.253 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:36.253 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:36.253 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:36.253 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:36.253 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:36.253 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:36.253 22:01:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.253 22:01:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:36.253 22:01:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.253 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:36.253 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:36.820 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 
'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:36.820 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:36.820 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:36.820 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:36.820 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:36.820 22:01:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:36.820 22:01:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.820 22:01:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:36.820 22:01:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.820 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:36.820 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:37.388 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:37.388 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:37.388 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:37.388 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:37.388 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:37.388 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:37.388 22:01:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.388 22:01:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:37.388 22:01:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.388 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:37.388 22:01:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:37.954 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:37.954 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:37.954 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:37.954 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:37.954 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:37.954 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:37.954 22:01:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.954 22:01:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:37.954 22:01:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.954 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:37.954 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:38.520 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:38.520 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:38.520 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:38.520 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:38.520 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:38.520 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:38.520 22:01:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:38.520 22:01:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:38.520 22:01:11 sw_hotplug -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:38.520 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:38.520 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:39.085 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:39.085 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:39.085 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:39.085 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:39.085 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:39.085 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:39.085 22:01:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.085 22:01:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:39.085 22:01:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.085 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:39.085 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:39.653 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:39.653 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:39.653 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:39.653 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:39.653 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:39.653 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:39.653 22:01:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.653 22:01:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:39.653 22:01:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.653 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:39.653 22:01:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:40.219 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:40.219 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:40.219 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:40.219 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:40.219 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:40.219 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:40.219 22:01:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.219 22:01:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:40.219 22:01:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.219 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:40.219 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:40.785 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:40.785 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:40.785 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:40.785 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:40.785 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:40.785 22:01:13 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:40.785 22:01:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.785 22:01:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:40.785 22:01:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.785 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:40.785 22:01:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:41.353 22:01:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:41.353 22:01:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:41.353 22:01:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:41.353 22:01:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:41.353 22:01:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:41.353 22:01:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:41.353 22:01:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.353 22:01:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:41.353 22:01:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.353 22:01:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:41.353 22:01:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:41.923 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:41.923 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:41.923 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:41.923 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:41.923 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:41.923 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:41.923 22:01:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.923 22:01:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:41.923 22:01:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.924 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:41.924 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:42.498 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:42.498 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:42.498 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:42.498 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:42.498 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:42.498 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:42.498 22:01:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.498 22:01:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:42.498 22:01:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.498 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:42.498 22:01:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:43.067 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:43.067 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:43.067 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
bdev_bdfs 00:47:43.067 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:43.067 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:43.067 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:43.067 22:01:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.067 22:01:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:43.067 22:01:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.067 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:43.067 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:43.633 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:43.633 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:43.633 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:43.633 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:43.633 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:43.633 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:43.633 22:01:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.633 22:01:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:43.633 22:01:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.633 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:43.633 22:01:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:44.201 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:44.201 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:44.201 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:44.201 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:44.201 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:44.201 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:44.201 22:01:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:44.201 22:01:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:44.201 22:01:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:44.201 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:44.201 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:44.769 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:44.769 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:44.769 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:44.769 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:44.769 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:44.769 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:44.769 22:01:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:44.769 22:01:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:44.769 22:01:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:44.769 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:44.769 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:45.028 22:01:18 
sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:45.028 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:45.028 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:45.028 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:45.028 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:45.028 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:45.028 22:01:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:45.028 22:01:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:45.286 22:01:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:45.286 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:45.286 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:45.851 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:45.851 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:45.851 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:45.851 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:45.851 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:45.851 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:45.851 22:01:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:45.851 22:01:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:45.851 22:01:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:45.851 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:45.851 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:46.419 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:46.419 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:46.419 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:46.419 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:46.419 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:46.419 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:46.419 22:01:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.419 22:01:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:46.419 22:01:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.419 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:46.419 22:01:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:46.994 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:46.995 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:46.995 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:46.995 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:46.995 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:46.995 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:46.995 22:01:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.995 22:01:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:46.995 
22:01:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.995 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:46.995 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:47.567 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:47.567 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:47.567 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:47.567 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:47.567 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:47.567 22:01:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:47.567 22:01:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:47.567 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:47.567 22:01:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:47.567 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:47.567 22:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:48.139 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:48.139 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:48.139 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:48.139 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:48.139 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:48.139 22:01:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:48.139 22:01:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:48.139 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:48.139 22:01:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:48.139 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:48.139 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:48.707 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:48.707 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:48.707 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:48.707 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:48.707 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:48.707 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:48.707 22:01:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:48.707 22:01:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:48.707 22:01:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:48.707 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:48.707 22:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:48.966 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:48.966 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:48.966 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:48.966 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:48.966 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:48.966 22:01:22 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:49.224 22:01:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:49.224 22:01:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:49.224 22:01:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:49.224 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:49.224 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:49.797 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:49.797 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:49.797 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:49.797 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:49.797 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:49.797 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:49.797 22:01:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:49.797 22:01:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:49.797 22:01:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:49.797 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:49.797 22:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:50.365 22:01:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:50.365 22:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:50.365 22:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:50.365 22:01:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:50.365 22:01:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:50.365 22:01:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:50.365 22:01:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:50.365 22:01:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:50.365 22:01:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:50.365 22:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:50.365 22:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:50.933 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:50.933 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:50.933 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:50.933 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:50.933 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:50.933 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:50.933 22:01:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:50.933 22:01:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:50.933 22:01:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:50.933 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:50.933 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:51.500 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:51.500 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:51.500 22:01:24 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:51.500 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:51.500 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:51.500 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:51.500 22:01:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:51.500 22:01:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:51.500 22:01:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:51.500 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:51.500 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:51.758 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:51.758 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:51.758 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:52.016 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:52.016 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:52.016 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:52.016 22:01:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:52.016 22:01:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:52.016 22:01:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:52.016 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:52.016 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:52.619 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:52.619 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:52.619 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:52.619 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:52.619 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:52.619 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:52.619 22:01:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:52.619 22:01:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:52.619 22:01:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:52.619 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:52.619 22:01:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:52.877 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:52.877 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:52.877 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:53.134 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:53.134 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:53.134 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:53.134 22:01:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:53.134 22:01:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:53.134 22:01:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:53.134 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:53.134 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 
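The records above and below this point are successive iterations of one polling helper in nvme/sw_hotplug.sh; xtrace only shows its pieces interleaved with timestamps. A minimal sketch of that loop, reconstructed from the traced commands (rpc_cmd is the autotest_common.sh wrapper around scripts/rpc.py; anything not visible in the trace, such as the exact loop condition, is an assumption):

    # sw_hotplug.sh@12-13: PCI addresses backing the NVMe bdevs the target still sees
    bdev_bdfs() {
        # the /dev/fd/63 in the trace is this process substitution
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # sw_hotplug.sh@50-51: after detaching 0000:00:10.0, poll every 0.5 s until its
    # BDF no longer shows up in bdev_get_bdevs
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done

The echo lines traced at sw_hotplug.sh@56-@62 (1, uio_pci_generic, the BDF twice, an empty string) later re-register the controller; their redirection targets are not captured by xtrace, so they are not reconstructed here.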
00:47:53.701 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:53.701 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:53.701 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:53.701 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:53.701 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:53.701 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:53.701 22:01:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:53.701 22:01:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:53.701 22:01:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:53.701 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:53.701 22:01:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:53.959 [2024-07-15 22:01:27.132937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:47:53.959 [2024-07-15 22:01:27.133946] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:53.959 [2024-07-15 22:01:27.133984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:53.959 [2024-07-15 22:01:27.134000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:53.959 [2024-07-15 22:01:27.134023] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:53.959 [2024-07-15 22:01:27.134041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:53.959 [2024-07-15 22:01:27.134052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:53.959 [2024-07-15 22:01:27.134067] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:53.959 [2024-07-15 22:01:27.134078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:53.959 [2024-07-15 22:01:27.134090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:53.959 [2024-07-15 22:01:27.134117] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:53.959 [2024-07-15 22:01:27.134138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:53.959 [2024-07-15 22:01:27.134157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:47:54.219 22:01:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.219 22:01:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:54.219 22:01:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:54.219 22:01:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:48:00.818 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:48:00.818 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@715 -- # time=74.66 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@716 -- # echo 74.66 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=74.66 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 74.66 1 00:48:00.819 remove_attach_helper took 74.66s to complete (handling 1 nvme drive(s)) 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:48:00.819 22:01:33 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 176247 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 176247 ']' 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 176247 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 176247 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 176247' 00:48:00.819 killing process with pid 176247 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@967 -- # kill 176247 00:48:00.819 22:01:33 sw_hotplug -- common/autotest_common.sh@972 -- # wait 176247 00:48:03.368 22:01:36 sw_hotplug -- 
nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:48:03.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:03.628 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:48:04.566 00:48:04.566 real 2m35.661s 00:48:04.566 user 2m16.862s 00:48:04.566 sys 0m15.994s 00:48:04.566 22:01:37 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:04.566 22:01:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:04.566 ************************************ 00:48:04.566 END TEST sw_hotplug 00:48:04.566 ************************************ 00:48:04.566 22:01:37 -- common/autotest_common.sh@1142 -- # return 0 00:48:04.566 22:01:37 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:48:04.566 22:01:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:48:04.566 22:01:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:48:04.566 22:01:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:04.566 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:48:04.825 22:01:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:48:04.825 22:01:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:48:04.825 22:01:37 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:48:04.825 22:01:37 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:48:04.825 22:01:37 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:48:04.825 22:01:37 -- spdk/autotest.sh@376 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:48:04.825 22:01:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:04.825 22:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:04.825 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:48:04.825 ************************************ 00:48:04.825 START TEST blockdev_raid5f 00:48:04.825 ************************************ 00:48:04.825 22:01:37 blockdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:48:04.825 * Looking for test storage... 
00:48:04.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@674 -- # uname -s 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@683 -- # crypto_device= 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@684 -- # dek= 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@685 -- # env_ctx= 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=178505 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 178505 00:48:04.825 22:01:38 blockdev_raid5f -- common/autotest_common.sh@829 -- # '[' -z 178505 ']' 00:48:04.825 22:01:38 blockdev_raid5f -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:04.825 22:01:38 blockdev_raid5f -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:04.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:04.825 22:01:38 blockdev_raid5f -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:04.825 22:01:38 blockdev_raid5f -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:04.825 22:01:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:04.825 22:01:38 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:48:04.825 [2024-07-15 22:01:38.162725] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
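Before any raid5f bdevs exist, blockdev.sh@693 launches a standalone target and blocks in waitforlisten until its RPC socket answers; that is the spdk_tgt startup being traced here. A rough sketch of the pattern (binary and socket paths as they appear in this log; the readiness probe is simplified, the real waitforlisten helper in autotest_common.sh retries with a timeout and also checks the pid):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # poll the default UNIX-domain RPC socket until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
    # from here on rpc_cmd calls (bdev_malloc_create, bdev_raid_create, ...) can be issued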
00:48:04.826 [2024-07-15 22:01:38.162979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178505 ] 00:48:05.084 [2024-07-15 22:01:38.312183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:05.343 [2024-07-15 22:01:38.530487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:06.293 22:01:39 blockdev_raid5f -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:06.293 22:01:39 blockdev_raid5f -- common/autotest_common.sh@862 -- # return 0 00:48:06.293 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:48:06.293 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:48:06.293 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@280 -- # rpc_cmd 00:48:06.293 22:01:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:06.293 22:01:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:06.293 Malloc0 00:48:06.293 Malloc1 00:48:06.293 Malloc2 00:48:06.293 22:01:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:06.293 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:48:06.293 22:01:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:06.293 22:01:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@740 -- # cat 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@749 -- # jq -r .name 00:48:06.552 22:01:39 blockdev_raid5f -- 
bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "c7bcbf9e-929b-4273-81e4-e41536af0c24"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c7bcbf9e-929b-4273-81e4-e41536af0c24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "c7bcbf9e-929b-4273-81e4-e41536af0c24",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9a3bb906-7667-479f-a6c4-5e57390b9f8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "cb6a4935-6d0f-44fc-9da3-51d2062e561b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "55b5cd30-9a54-4473-a377-e2004f80993b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:48:06.552 22:01:39 blockdev_raid5f -- bdev/blockdev.sh@754 -- # killprocess 178505 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@948 -- # '[' -z 178505 ']' 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@952 -- # kill -0 178505 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@953 -- # uname 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178505 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:06.552 killing process with pid 178505 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178505' 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@967 -- # kill 178505 00:48:06.552 22:01:39 blockdev_raid5f -- common/autotest_common.sh@972 -- # wait 178505 00:48:09.834 22:01:43 blockdev_raid5f -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:48:09.834 22:01:43 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:48:09.834 22:01:43 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:48:09.834 22:01:43 blockdev_raid5f -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:48:09.834 22:01:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:09.834 ************************************ 00:48:09.834 START TEST bdev_hello_world 00:48:09.834 ************************************ 00:48:09.834 22:01:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:48:09.834 [2024-07-15 22:01:43.084394] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:48:09.834 [2024-07-15 22:01:43.084622] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178597 ] 00:48:10.090 [2024-07-15 22:01:43.244149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:10.090 [2024-07-15 22:01:43.441776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:10.654 [2024-07-15 22:01:43.967988] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:48:10.655 [2024-07-15 22:01:43.968071] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:48:10.655 [2024-07-15 22:01:43.968104] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:48:10.655 [2024-07-15 22:01:43.968570] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:48:10.655 [2024-07-15 22:01:43.968710] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:48:10.655 [2024-07-15 22:01:43.968738] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:48:10.655 [2024-07-15 22:01:43.968821] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
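The raid5f volume that hello_bdev just wrote to and read back from is the one dumped by bdev_get_bdevs further up: three Malloc base bdevs of 65536 blocks x 512 bytes (32 MiB each) with strip_size_kb 2. setup_raid5f_conf assembles it over RPC; roughly equivalent calls would be (RPC names from the stock scripts/rpc.py, sizes taken from that JSON dump, exact flags in this SPDK revision may differ):

    # three 32 MiB malloc base bdevs with a 512-byte block size
    rpc_cmd bdev_malloc_create -b Malloc0 32 512
    rpc_cmd bdev_malloc_create -b Malloc1 32 512
    rpc_cmd bdev_malloc_create -b Malloc2 32 512
    # raid5f volume over the three base bdevs, 2 KiB strip size
    rpc_cmd bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"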
00:48:10.655 00:48:10.655 [2024-07-15 22:01:43.968867] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:48:12.550 00:48:12.550 real 0m2.433s 00:48:12.550 user 0m2.068s 00:48:12.550 sys 0m0.248s 00:48:12.550 22:01:45 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:12.550 ************************************ 00:48:12.550 END TEST bdev_hello_world 00:48:12.550 ************************************ 00:48:12.550 22:01:45 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:48:12.550 22:01:45 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:12.550 22:01:45 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:48:12.550 22:01:45 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:12.550 22:01:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:12.550 22:01:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:12.550 ************************************ 00:48:12.550 START TEST bdev_bounds 00:48:12.550 ************************************ 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=178647 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:48:12.550 Process bdevio pid: 178647 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 178647' 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 178647 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 178647 ']' 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:12.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:48:12.550 22:01:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:48:12.550 [2024-07-15 22:01:45.566860] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
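bdev_bounds exercises the same raid5f bdev through bdevio: the server is started in wait mode (-w) against the generated bdev.json, and once waitforlisten sees its socket the bundled tests.py triggers the boundary tests whose CUnit results follow below. Stripped of the harness, the traced pair of commands is essentially:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # once waitforlisten $bdevio_pid returns:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"   # blockdev.sh@295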
00:48:12.550 [2024-07-15 22:01:45.567157] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178647 ] 00:48:12.550 [2024-07-15 22:01:45.741121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:48:12.807 [2024-07-15 22:01:45.970049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:12.807 [2024-07-15 22:01:45.970383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:12.807 [2024-07-15 22:01:45.970388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:48:13.371 22:01:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:13.371 22:01:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:48:13.371 22:01:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:48:13.371 I/O targets: 00:48:13.371 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:48:13.371 00:48:13.371 00:48:13.371 CUnit - A unit testing framework for C - Version 2.1-3 00:48:13.371 http://cunit.sourceforge.net/ 00:48:13.371 00:48:13.371 00:48:13.371 Suite: bdevio tests on: raid5f 00:48:13.371 Test: blockdev write read block ...passed 00:48:13.371 Test: blockdev write zeroes read block ...passed 00:48:13.629 Test: blockdev write zeroes read no split ...passed 00:48:13.629 Test: blockdev write zeroes read split ...passed 00:48:13.629 Test: blockdev write zeroes read split partial ...passed 00:48:13.629 Test: blockdev reset ...passed 00:48:13.629 Test: blockdev write read 8 blocks ...passed 00:48:13.629 Test: blockdev write read size > 128k ...passed 00:48:13.629 Test: blockdev write read invalid size ...passed 00:48:13.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:48:13.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:48:13.629 Test: blockdev write read max offset ...passed 00:48:13.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:48:13.629 Test: blockdev writev readv 8 blocks ...passed 00:48:13.629 Test: blockdev writev readv 30 x 1block ...passed 00:48:13.629 Test: blockdev writev readv block ...passed 00:48:13.629 Test: blockdev writev readv size > 128k ...passed 00:48:13.629 Test: blockdev writev readv size > 128k in two iovs ...passed 00:48:13.887 Test: blockdev comparev and writev ...passed 00:48:13.887 Test: blockdev nvme passthru rw ...passed 00:48:13.887 Test: blockdev nvme passthru vendor specific ...passed 00:48:13.887 Test: blockdev nvme admin passthru ...passed 00:48:13.887 Test: blockdev copy ...passed 00:48:13.887 00:48:13.887 Run Summary: Type Total Ran Passed Failed Inactive 00:48:13.888 suites 1 1 n/a 0 0 00:48:13.888 tests 23 23 23 0 0 00:48:13.888 asserts 130 130 130 0 n/a 00:48:13.888 00:48:13.888 Elapsed time = 0.665 seconds 00:48:13.888 0 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 178647 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 178647 ']' 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 178647 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178647 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178647' 00:48:13.888 killing process with pid 178647 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@967 -- # kill 178647 00:48:13.888 22:01:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # wait 178647 00:48:15.790 22:01:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:48:15.790 00:48:15.790 real 0m3.349s 00:48:15.790 user 0m8.054s 00:48:15.790 sys 0m0.380s 00:48:15.790 22:01:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:15.790 ************************************ 00:48:15.790 END TEST bdev_bounds 00:48:15.790 ************************************ 00:48:15.790 22:01:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:48:15.790 22:01:48 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:15.790 22:01:48 blockdev_raid5f -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:48:15.790 22:01:48 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:48:15.790 22:01:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:15.790 22:01:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:15.790 ************************************ 00:48:15.790 START TEST bdev_nbd 00:48:15.790 ************************************ 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:48:15.790 22:01:48 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=178716 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 178716 /var/tmp/spdk-nbd.sock 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 178716 ']' 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:48:15.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:15.790 22:01:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:48:15.790 [2024-07-15 22:01:48.984534] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:48:15.790 [2024-07-15 22:01:48.984748] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:15.790 [2024-07-15 22:01:49.144900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:16.049 [2024-07-15 22:01:49.340128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:48:16.617 22:01:49 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:48:16.876 1+0 records in 00:48:16.876 1+0 records out 00:48:16.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225309 s, 18.2 MB/s 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:48:16.876 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:48:17.136 { 00:48:17.136 "nbd_device": "/dev/nbd0", 00:48:17.136 "bdev_name": "raid5f" 00:48:17.136 } 00:48:17.136 ]' 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:48:17.136 { 00:48:17.136 "nbd_device": "/dev/nbd0", 00:48:17.136 "bdev_name": "raid5f" 00:48:17.136 } 00:48:17.136 ]' 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:48:17.136 
22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:48:17.136 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:48:17.394 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:17.395 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:48:17.654 22:01:50 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:48:17.654 22:01:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:48:17.913 /dev/nbd0 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:48:17.913 1+0 records in 00:48:17.913 1+0 records out 00:48:17.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041251 s, 9.9 MB/s 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:17.913 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:48:18.172 { 00:48:18.172 "nbd_device": "/dev/nbd0", 00:48:18.172 "bdev_name": "raid5f" 00:48:18.172 } 00:48:18.172 ]' 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:48:18.172 { 00:48:18.172 "nbd_device": "/dev/nbd0", 00:48:18.172 "bdev_name": "raid5f" 
00:48:18.172 } 00:48:18.172 ]' 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:48:18.172 256+0 records in 00:48:18.172 256+0 records out 00:48:18.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131261 s, 79.9 MB/s 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:48:18.172 256+0 records in 00:48:18.172 256+0 records out 00:48:18.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304524 s, 34.4 MB/s 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:48:18.172 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:18.432 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:48:18.691 22:01:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:48:18.691 malloc_lvol_verify 00:48:18.954 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:48:18.954 b24b04ba-708f-4863-9dba-008a9cece850 00:48:18.954 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:48:19.217 19978327-42be-4f32-8ab2-336b70e0c298 00:48:19.217 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:48:19.476 /dev/nbd0 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:48:19.476 mke2fs 1.45.5 (07-Jan-2020) 00:48:19.476 00:48:19.476 Filesystem too small for a journal 00:48:19.476 Creating filesystem with 1024 4k blocks and 1024 inodes 00:48:19.476 00:48:19.476 Allocating group tables: 0/1 done 00:48:19.476 Writing inode tables: 0/1 done 00:48:19.476 Writing superblocks and filesystem accounting information: 0/1 done 00:48:19.476 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:48:19.476 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 178716 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 178716 ']' 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 178716 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178716 00:48:19.736 killing process with pid 178716 00:48:19.736 22:01:52 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178716' 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@967 -- # kill 178716 00:48:19.736 22:01:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # wait 178716 00:48:21.640 ************************************ 00:48:21.640 END TEST bdev_nbd 00:48:21.640 ************************************ 00:48:21.640 22:01:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:48:21.640 00:48:21.640 real 0m5.597s 00:48:21.640 user 0m7.711s 00:48:21.640 sys 0m1.013s 00:48:21.640 22:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:21.640 22:01:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:48:21.640 22:01:54 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:21.640 22:01:54 blockdev_raid5f -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:48:21.640 22:01:54 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:48:21.640 22:01:54 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:48:21.640 22:01:54 blockdev_raid5f -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:48:21.640 22:01:54 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:21.640 22:01:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:21.640 22:01:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:21.640 ************************************ 00:48:21.640 START TEST bdev_fio 00:48:21.640 ************************************ 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:48:21.640 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:48:21.640 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:48:21.641 ************************************ 00:48:21.641 START TEST bdev_fio_rw_verify 00:48:21.641 ************************************ 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # 
sanitizers=(libasan libclang_rt.asan) 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:21.641 22:01:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:48:21.641 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:48:21.641 fio-3.35 00:48:21.641 Starting 1 thread 00:48:33.864 00:48:33.864 job_raid5f: (groupid=0, jobs=1): err= 0: pid=178970: Mon Jul 15 22:02:05 2024 00:48:33.864 read: IOPS=11.2k, BW=43.6MiB/s (45.7MB/s)(436MiB/10001msec) 00:48:33.864 slat (usec): min=17, max=1138, avg=20.31, stdev= 4.24 00:48:33.864 clat (usec): min=9, max=1355, avg=135.48, stdev=49.85 00:48:33.864 lat (usec): min=28, max=1377, avg=155.79, stdev=50.34 00:48:33.864 clat percentiles (usec): 00:48:33.864 | 50.000th=[ 137], 99.000th=[ 237], 99.900th=[ 281], 99.990th=[ 359], 00:48:33.864 | 99.999th=[ 1336] 00:48:33.864 write: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(452MiB/9885msec); 0 zone resets 00:48:33.864 slat (usec): min=7, max=220, avg=19.06, stdev= 4.32 00:48:33.864 clat (usec): min=59, max=1368, avg=339.73, stdev=55.26 00:48:33.864 lat (usec): min=75, max=1588, avg=358.79, stdev=57.12 00:48:33.864 clat percentiles (usec): 00:48:33.864 | 50.000th=[ 334], 99.000th=[ 469], 99.900th=[ 619], 99.990th=[ 1106], 00:48:33.864 | 99.999th=[ 1303] 00:48:33.864 bw ( KiB/s): min=42264, max=50640, per=98.82%, avg=46241.26, stdev=2309.02, samples=19 00:48:33.864 iops : min=10566, max=12660, avg=11560.32, stdev=577.25, samples=19 00:48:33.864 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=14.10%, 250=36.44% 00:48:33.864 lat (usec) : 500=49.25%, 750=0.17%, 
1000=0.02% 00:48:33.864 lat (msec) : 2=0.01% 00:48:33.864 cpu : usr=99.59%, sys=0.36%, ctx=81, majf=0, minf=7941 00:48:33.864 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:33.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.864 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.864 issued rwts: total=111628,115639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.864 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:33.864 00:48:33.864 Run status group 0 (all jobs): 00:48:33.864 READ: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=436MiB (457MB), run=10001-10001msec 00:48:33.864 WRITE: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=452MiB (474MB), run=9885-9885msec 00:48:34.123 ----------------------------------------------------- 00:48:34.123 Suppressions used: 00:48:34.123 count bytes template 00:48:34.123 1 7 /usr/src/fio/parse.c 00:48:34.123 537 51552 /usr/src/fio/iolog.c 00:48:34.123 2 596 libcrypto.so 00:48:34.123 ----------------------------------------------------- 00:48:34.123 00:48:34.123 ************************************ 00:48:34.123 END TEST bdev_fio_rw_verify 00:48:34.123 ************************************ 00:48:34.123 00:48:34.123 real 0m12.724s 00:48:34.123 user 0m13.343s 00:48:34.123 sys 0m0.514s 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 
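
Note: the fio passes in this TEST bdev_fio block reach the raid5f bdev through the SPDK fio plugin (ioengine=spdk_bdev) rather than through a kernel block device, and the bdev.fio job file is generated on the fly by fio_config_gen. A minimal hand-written equivalent is sketched below; the ./bdev.json path, the relative plugin path, and the verify/runtime options are assumptions, not a copy of the generated file.

# Sketch: minimal fio job file for the spdk_bdev ioengine.
cat > bdev.fio <<'EOF'
[global]
ioengine=spdk_bdev
; JSON config that creates the raid5f bdev (path is an assumption)
spdk_json_conf=./bdev.json
thread=1
direct=1
rw=randwrite
; write-and-verify pass; the verify method here is assumed, not taken from the generated file
verify=crc32c
bs=4k
iodepth=8
runtime=10

[job_raid5f]
; filename is the bdev name, not a device node
filename=raid5f
EOF

# Run the job through the plugin. Preloading the ASAN runtime in front of the plugin
# mirrors what this ASAN-enabled build does; a non-ASAN build only needs the
# spdk_bdev plugin itself in LD_PRELOAD.
LD_PRELOAD="/lib/x86_64-linux-gnu/libasan.so.5 ./build/fio/spdk_bdev" fio bdev.fio
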
00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "c7bcbf9e-929b-4273-81e4-e41536af0c24"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c7bcbf9e-929b-4273-81e4-e41536af0c24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "c7bcbf9e-929b-4273-81e4-e41536af0c24",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9a3bb906-7667-479f-a6c4-5e57390b9f8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "cb6a4935-6d0f-44fc-9da3-51d2062e561b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "55b5cd30-9a54-4473-a377-e2004f80993b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:48:34.123 /home/vagrant/spdk_repo/spdk 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:48:34.123 00:48:34.123 real 0m12.924s 00:48:34.123 user 0m13.451s 00:48:34.123 sys 0m0.609s 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:34.123 22:02:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:48:34.123 ************************************ 00:48:34.123 END TEST bdev_fio 00:48:34.123 ************************************ 00:48:34.380 22:02:07 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:34.380 22:02:07 blockdev_raid5f -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:48:34.380 22:02:07 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:48:34.380 22:02:07 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:48:34.380 22:02:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:34.380 22:02:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:34.380 
************************************ 00:48:34.380 START TEST bdev_verify 00:48:34.380 ************************************ 00:48:34.380 22:02:07 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:48:34.380 [2024-07-15 22:02:07.627544] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:48:34.380 [2024-07-15 22:02:07.627782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179149 ] 00:48:34.639 [2024-07-15 22:02:07.792819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:34.639 [2024-07-15 22:02:08.003007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:34.639 [2024-07-15 22:02:08.003013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:35.207 Running I/O for 5 seconds... 00:48:40.504 00:48:40.504 Latency(us) 00:48:40.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:40.504 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:48:40.504 Verification LBA range: start 0x0 length 0x2000 00:48:40.504 raid5f : 5.01 6538.91 25.54 0.00 0.00 29470.02 195.86 31594.65 00:48:40.504 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:48:40.504 Verification LBA range: start 0x2000 length 0x2000 00:48:40.504 raid5f : 5.01 6674.53 26.07 0.00 0.00 28557.03 243.26 23352.57 00:48:40.504 =================================================================================================================== 00:48:40.504 Total : 13213.44 51.62 0.00 0.00 29008.85 195.86 31594.65 00:48:41.884 ************************************ 00:48:41.884 END TEST bdev_verify 00:48:41.884 ************************************ 00:48:41.884 00:48:41.884 real 0m7.526s 00:48:41.884 user 0m13.813s 00:48:41.884 sys 0m0.260s 00:48:41.884 22:02:15 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:41.884 22:02:15 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:48:41.884 22:02:15 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:41.884 22:02:15 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:48:41.884 22:02:15 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:48:41.884 22:02:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:41.884 22:02:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:41.884 ************************************ 00:48:41.884 START TEST bdev_verify_big_io 00:48:41.884 ************************************ 00:48:41.884 22:02:15 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:48:41.884 [2024-07-15 22:02:15.217758] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
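
Note: the verify pass above (and the big-I/O pass starting here) are plain bdevperf runs against the same bdev JSON config, so they can be reproduced standalone. A sketch follows, run from the SPDK build tree; the ./bdev.json path is an assumption.

# 4 KiB verify pass on two cores, mirroring the bdev_verify run logged above:
#   -q 128     I/O queue depth
#   -o 4096    I/O size in bytes (the bdev_verify_big_io pass uses -o 65536)
#   -w verify  write the bdev, read it back, and compare
#   -t 5       run for 5 seconds
#   -m 0x3     core mask, reactors on cores 0 and 1
#   -C         additional flag passed by the test script (see bdevperf usage output)
./build/examples/bdevperf --json ./bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
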
00:48:41.884 [2024-07-15 22:02:15.217975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179264 ] 00:48:42.143 [2024-07-15 22:02:15.383612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:42.401 [2024-07-15 22:02:15.596387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:42.401 [2024-07-15 22:02:15.596394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:42.991 Running I/O for 5 seconds... 00:48:48.301 00:48:48.301 Latency(us) 00:48:48.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:48.301 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:48:48.301 Verification LBA range: start 0x0 length 0x200 00:48:48.301 raid5f : 5.20 439.98 27.50 0.00 0.00 7221495.70 143.99 318693.84 00:48:48.301 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:48:48.301 Verification LBA range: start 0x200 length 0x200 00:48:48.301 raid5f : 5.19 440.52 27.53 0.00 0.00 7184302.78 248.62 316862.27 00:48:48.301 =================================================================================================================== 00:48:48.301 Total : 880.50 55.03 0.00 0.00 7202899.24 143.99 318693.84 00:48:49.675 ************************************ 00:48:49.675 END TEST bdev_verify_big_io 00:48:49.675 ************************************ 00:48:49.675 00:48:49.675 real 0m7.719s 00:48:49.675 user 0m14.224s 00:48:49.675 sys 0m0.253s 00:48:49.675 22:02:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:49.675 22:02:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:48:49.675 22:02:22 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:49.675 22:02:22 blockdev_raid5f -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:49.675 22:02:22 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:49.675 22:02:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:49.675 22:02:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:49.675 ************************************ 00:48:49.675 START TEST bdev_write_zeroes 00:48:49.675 ************************************ 00:48:49.675 22:02:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:49.675 [2024-07-15 22:02:22.989632] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:48:49.675 [2024-07-15 22:02:22.989824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179392 ] 00:48:49.933 [2024-07-15 22:02:23.147360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:50.189 [2024-07-15 22:02:23.338883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:50.752 Running I/O for 1 seconds... 00:48:51.687 00:48:51.687 Latency(us) 00:48:51.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:51.687 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:48:51.687 raid5f : 1.01 22893.07 89.43 0.00 0.00 5571.65 1545.39 7269.06 00:48:51.687 =================================================================================================================== 00:48:51.687 Total : 22893.07 89.43 0.00 0.00 5571.65 1545.39 7269.06 00:48:53.063 ************************************ 00:48:53.063 END TEST bdev_write_zeroes 00:48:53.063 ************************************ 00:48:53.063 00:48:53.063 real 0m3.493s 00:48:53.063 user 0m3.127s 00:48:53.063 sys 0m0.252s 00:48:53.063 22:02:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:53.063 22:02:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:48:53.322 22:02:26 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:53.322 22:02:26 blockdev_raid5f -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:53.322 22:02:26 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:53.322 22:02:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:53.322 22:02:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:53.322 ************************************ 00:48:53.322 START TEST bdev_json_nonenclosed 00:48:53.322 ************************************ 00:48:53.322 22:02:26 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:53.322 [2024-07-15 22:02:26.548686] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:48:53.322 [2024-07-15 22:02:26.548907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179448 ] 00:48:53.580 [2024-07-15 22:02:26.711980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:53.580 [2024-07-15 22:02:26.909751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:53.580 [2024-07-15 22:02:26.909941] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:48:53.580 [2024-07-15 22:02:26.910024] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:53.580 [2024-07-15 22:02:26.910090] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:54.146 ************************************ 00:48:54.146 END TEST bdev_json_nonenclosed 00:48:54.146 ************************************ 00:48:54.146 00:48:54.146 real 0m0.832s 00:48:54.146 user 0m0.611s 00:48:54.146 sys 0m0.121s 00:48:54.146 22:02:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:48:54.146 22:02:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:54.146 22:02:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:48:54.146 22:02:27 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:48:54.146 22:02:27 blockdev_raid5f -- bdev/blockdev.sh@782 -- # true 00:48:54.146 22:02:27 blockdev_raid5f -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:54.146 22:02:27 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:54.146 22:02:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:54.146 22:02:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:54.146 ************************************ 00:48:54.146 START TEST bdev_json_nonarray 00:48:54.146 ************************************ 00:48:54.146 22:02:27 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:54.146 [2024-07-15 22:02:27.445955] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:48:54.146 [2024-07-15 22:02:27.446181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179486 ] 00:48:54.420 [2024-07-15 22:02:27.606204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:54.681 [2024-07-15 22:02:27.822968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:54.681 [2024-07-15 22:02:27.823148] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:48:54.681 [2024-07-15 22:02:27.823234] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:54.681 [2024-07-15 22:02:27.823293] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:54.940 ************************************ 00:48:54.940 END TEST bdev_json_nonarray 00:48:54.940 ************************************ 00:48:54.940 00:48:54.940 real 0m0.854s 00:48:54.940 user 0m0.624s 00:48:54.940 sys 0m0.129s 00:48:54.941 22:02:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:48:54.941 22:02:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:54.941 22:02:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:48:54.941 22:02:28 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # true 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@811 -- # cleanup 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:48:54.941 22:02:28 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:48:54.941 ************************************ 00:48:54.941 END TEST blockdev_raid5f 00:48:54.941 ************************************ 00:48:54.941 00:48:54.941 real 0m50.310s 00:48:54.941 user 1m8.919s 00:48:54.941 sys 0m4.109s 00:48:54.941 22:02:28 blockdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:54.941 22:02:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:55.199 22:02:28 -- common/autotest_common.sh@1142 -- # return 0 00:48:55.199 22:02:28 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:48:55.199 22:02:28 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:48:55.199 22:02:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:48:55.199 22:02:28 -- common/autotest_common.sh@10 -- # set +x 00:48:55.199 22:02:28 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:48:55.199 22:02:28 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:48:55.199 22:02:28 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:48:55.199 22:02:28 -- common/autotest_common.sh@10 -- # set +x 00:48:57.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:57.109 Waiting for block devices as requested 00:48:57.109 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:57.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:57.476 Cleaning 00:48:57.476 Removing: /var/run/dpdk/spdk0/config 00:48:57.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:48:57.476 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:48:57.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:48:57.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:48:57.476 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:48:57.476 Removing: /var/run/dpdk/spdk0/hugepage_info 00:48:57.476 Removing: /dev/shm/spdk_tgt_trace.pid111232 00:48:57.476 Removing: /var/run/dpdk/spdk0 00:48:57.476 Removing: /var/run/dpdk/spdk_pid110960 00:48:57.476 Removing: /var/run/dpdk/spdk_pid111232 00:48:57.476 Removing: /var/run/dpdk/spdk_pid111515 00:48:57.476 Removing: /var/run/dpdk/spdk_pid111655 00:48:57.476 Removing: /var/run/dpdk/spdk_pid111719 00:48:57.476 Removing: /var/run/dpdk/spdk_pid111891 00:48:57.476 Removing: /var/run/dpdk/spdk_pid111921 00:48:57.476 Removing: /var/run/dpdk/spdk_pid112091 00:48:57.476 Removing: /var/run/dpdk/spdk_pid112362 00:48:57.476 Removing: /var/run/dpdk/spdk_pid112563 00:48:57.476 Removing: /var/run/dpdk/spdk_pid112699 00:48:57.476 Removing: /var/run/dpdk/spdk_pid112846 00:48:57.476 Removing: /var/run/dpdk/spdk_pid112983 00:48:57.476 Removing: /var/run/dpdk/spdk_pid113119 00:48:57.476 Removing: /var/run/dpdk/spdk_pid113172 00:48:57.476 Removing: /var/run/dpdk/spdk_pid113214 00:48:57.476 Removing: /var/run/dpdk/spdk_pid113299 00:48:57.476 Removing: /var/run/dpdk/spdk_pid113433 00:48:57.476 Removing: /var/run/dpdk/spdk_pid113982 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114079 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114194 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114222 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114420 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114469 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114667 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114700 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114776 00:48:57.476 Removing: /var/run/dpdk/spdk_pid114815 00:48:57.736 Removing: /var/run/dpdk/spdk_pid114905 00:48:57.736 Removing: /var/run/dpdk/spdk_pid114934 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115183 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115233 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115299 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115397 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115496 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115547 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115657 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115711 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115774 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115831 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115913 00:48:57.736 Removing: /var/run/dpdk/spdk_pid115971 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116034 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116112 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116169 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116232 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116294 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116372 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116435 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116493 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116574 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116632 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116688 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116749 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116829 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116887 00:48:57.736 Removing: /var/run/dpdk/spdk_pid116950 00:48:57.736 Removing: /var/run/dpdk/spdk_pid117053 00:48:57.736 Removing: /var/run/dpdk/spdk_pid117213 00:48:57.736 Removing: /var/run/dpdk/spdk_pid117434 00:48:57.736 Removing: 
/var/run/dpdk/spdk_pid117549 00:48:57.736 Removing: /var/run/dpdk/spdk_pid117640 00:48:57.736 Removing: /var/run/dpdk/spdk_pid118923 00:48:57.736 Removing: /var/run/dpdk/spdk_pid119164 00:48:57.736 Removing: /var/run/dpdk/spdk_pid119411 00:48:57.736 Removing: /var/run/dpdk/spdk_pid119570 00:48:57.736 Removing: /var/run/dpdk/spdk_pid119747 00:48:57.736 Removing: /var/run/dpdk/spdk_pid119833 00:48:57.736 Removing: /var/run/dpdk/spdk_pid119871 00:48:57.736 Removing: /var/run/dpdk/spdk_pid119929 00:48:57.736 Removing: /var/run/dpdk/spdk_pid120439 00:48:57.736 Removing: /var/run/dpdk/spdk_pid120533 00:48:57.736 Removing: /var/run/dpdk/spdk_pid120665 00:48:57.736 Removing: /var/run/dpdk/spdk_pid120739 00:48:57.736 Removing: /var/run/dpdk/spdk_pid122141 00:48:57.736 Removing: /var/run/dpdk/spdk_pid122540 00:48:57.736 Removing: /var/run/dpdk/spdk_pid122736 00:48:57.736 Removing: /var/run/dpdk/spdk_pid123736 00:48:57.736 Removing: /var/run/dpdk/spdk_pid124144 00:48:57.736 Removing: /var/run/dpdk/spdk_pid124334 00:48:57.736 Removing: /var/run/dpdk/spdk_pid125301 00:48:57.736 Removing: /var/run/dpdk/spdk_pid125836 00:48:57.736 Removing: /var/run/dpdk/spdk_pid126037 00:48:57.736 Removing: /var/run/dpdk/spdk_pid128199 00:48:57.736 Removing: /var/run/dpdk/spdk_pid128686 00:48:57.736 Removing: /var/run/dpdk/spdk_pid128904 00:48:57.736 Removing: /var/run/dpdk/spdk_pid131092 00:48:57.736 Removing: /var/run/dpdk/spdk_pid131578 00:48:57.736 Removing: /var/run/dpdk/spdk_pid131789 00:48:57.736 Removing: /var/run/dpdk/spdk_pid133990 00:48:57.736 Removing: /var/run/dpdk/spdk_pid134735 00:48:57.736 Removing: /var/run/dpdk/spdk_pid134945 00:48:57.736 Removing: /var/run/dpdk/spdk_pid137378 00:48:57.736 Removing: /var/run/dpdk/spdk_pid137958 00:48:57.736 Removing: /var/run/dpdk/spdk_pid138192 00:48:57.736 Removing: /var/run/dpdk/spdk_pid140754 00:48:57.736 Removing: /var/run/dpdk/spdk_pid141314 00:48:57.736 Removing: /var/run/dpdk/spdk_pid141543 00:48:57.736 Removing: /var/run/dpdk/spdk_pid144023 00:48:57.736 Removing: /var/run/dpdk/spdk_pid144912 00:48:57.736 Removing: /var/run/dpdk/spdk_pid145132 00:48:57.995 Removing: /var/run/dpdk/spdk_pid145339 00:48:57.995 Removing: /var/run/dpdk/spdk_pid145895 00:48:57.995 Removing: /var/run/dpdk/spdk_pid146877 00:48:57.995 Removing: /var/run/dpdk/spdk_pid147380 00:48:57.995 Removing: /var/run/dpdk/spdk_pid148278 00:48:57.995 Removing: /var/run/dpdk/spdk_pid148904 00:48:57.995 Removing: /var/run/dpdk/spdk_pid149881 00:48:57.995 Removing: /var/run/dpdk/spdk_pid150437 00:48:57.995 Removing: /var/run/dpdk/spdk_pid153447 00:48:57.995 Removing: /var/run/dpdk/spdk_pid154238 00:48:57.995 Removing: /var/run/dpdk/spdk_pid154820 00:48:57.995 Removing: /var/run/dpdk/spdk_pid158086 00:48:57.995 Removing: /var/run/dpdk/spdk_pid158949 00:48:57.995 Removing: /var/run/dpdk/spdk_pid159636 00:48:57.995 Removing: /var/run/dpdk/spdk_pid161079 00:48:57.995 Removing: /var/run/dpdk/spdk_pid161619 00:48:57.995 Removing: /var/run/dpdk/spdk_pid162920 00:48:57.995 Removing: /var/run/dpdk/spdk_pid163454 00:48:57.995 Removing: /var/run/dpdk/spdk_pid164785 00:48:57.996 Removing: /var/run/dpdk/spdk_pid165320 00:48:57.996 Removing: /var/run/dpdk/spdk_pid166213 00:48:57.996 Removing: /var/run/dpdk/spdk_pid166277 00:48:57.996 Removing: /var/run/dpdk/spdk_pid166343 00:48:57.996 Removing: /var/run/dpdk/spdk_pid166412 00:48:57.996 Removing: /var/run/dpdk/spdk_pid166567 00:48:57.996 Removing: /var/run/dpdk/spdk_pid166739 00:48:57.996 Removing: /var/run/dpdk/spdk_pid166962 00:48:57.996 Removing: 
/var/run/dpdk/spdk_pid167274 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167289 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167340 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167375 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167404 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167451 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167481 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167513 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167541 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167572 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167601 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167650 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167678 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167710 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167745 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167769 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167817 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167848 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167876 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167904 00:48:57.996 Removing: /var/run/dpdk/spdk_pid167956 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168002 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168048 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168127 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168181 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168208 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168278 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168310 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168333 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168390 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168420 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168484 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168512 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168540 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168565 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168589 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168634 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168651 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168676 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168725 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168773 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168826 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168873 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168900 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168920 00:48:57.996 Removing: /var/run/dpdk/spdk_pid168984 00:48:57.996 Removing: /var/run/dpdk/spdk_pid169028 00:48:57.996 Removing: /var/run/dpdk/spdk_pid169076 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169104 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169128 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169156 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169189 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169218 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169240 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169265 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169364 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169472 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169648 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169686 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169738 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169818 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169850 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169883 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169920 00:48:58.255 Removing: /var/run/dpdk/spdk_pid169985 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170015 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170103 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170176 00:48:58.255 Removing: 
/var/run/dpdk/spdk_pid170227 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170509 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170639 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170692 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170789 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170896 00:48:58.255 Removing: /var/run/dpdk/spdk_pid170948 00:48:58.255 Removing: /var/run/dpdk/spdk_pid171223 00:48:58.255 Removing: /var/run/dpdk/spdk_pid171357 00:48:58.255 Removing: /var/run/dpdk/spdk_pid171464 00:48:58.255 Removing: /var/run/dpdk/spdk_pid171535 00:48:58.255 Removing: /var/run/dpdk/spdk_pid171573 00:48:58.255 Removing: /var/run/dpdk/spdk_pid171659 00:48:58.255 Removing: /var/run/dpdk/spdk_pid172230 00:48:58.255 Removing: /var/run/dpdk/spdk_pid172281 00:48:58.255 Removing: /var/run/dpdk/spdk_pid172616 00:48:58.255 Removing: /var/run/dpdk/spdk_pid172746 00:48:58.255 Removing: /var/run/dpdk/spdk_pid172865 00:48:58.255 Removing: /var/run/dpdk/spdk_pid172922 00:48:58.255 Removing: /var/run/dpdk/spdk_pid172960 00:48:58.255 Removing: /var/run/dpdk/spdk_pid172999 00:48:58.255 Removing: /var/run/dpdk/spdk_pid174419 00:48:58.255 Removing: /var/run/dpdk/spdk_pid174582 00:48:58.255 Removing: /var/run/dpdk/spdk_pid174587 00:48:58.255 Removing: /var/run/dpdk/spdk_pid174618 00:48:58.255 Removing: /var/run/dpdk/spdk_pid175165 00:48:58.255 Removing: /var/run/dpdk/spdk_pid175286 00:48:58.255 Removing: /var/run/dpdk/spdk_pid176247 00:48:58.255 Removing: /var/run/dpdk/spdk_pid178505 00:48:58.255 Removing: /var/run/dpdk/spdk_pid178597 00:48:58.255 Removing: /var/run/dpdk/spdk_pid178647 00:48:58.255 Removing: /var/run/dpdk/spdk_pid178951 00:48:58.255 Removing: /var/run/dpdk/spdk_pid179149 00:48:58.255 Removing: /var/run/dpdk/spdk_pid179264 00:48:58.255 Removing: /var/run/dpdk/spdk_pid179392 00:48:58.255 Removing: /var/run/dpdk/spdk_pid179448 00:48:58.255 Removing: /var/run/dpdk/spdk_pid179486 00:48:58.255 Clean 00:48:58.514 22:02:31 -- common/autotest_common.sh@1451 -- # return 0 00:48:58.514 22:02:31 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:48:58.514 22:02:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:58.514 22:02:31 -- common/autotest_common.sh@10 -- # set +x 00:48:58.514 22:02:31 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:48:58.514 22:02:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:58.514 22:02:31 -- common/autotest_common.sh@10 -- # set +x 00:48:58.514 22:02:31 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:58.514 22:02:31 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:48:58.514 22:02:31 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:48:58.514 22:02:31 -- spdk/autotest.sh@391 -- # hash lcov 00:48:58.514 22:02:31 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:48:58.514 22:02:31 -- spdk/autotest.sh@393 -- # hostname 00:48:58.514 22:02:31 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:48:58.773 geninfo: WARNING: invalid characters removed from testname! 
00:49:45.461 22:03:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:48.749 22:03:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:51.346 22:03:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:54.647 22:03:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:57.187 22:03:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:50:00.476 22:03:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:50:03.007 22:03:36 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:50:03.007 22:03:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:03.007 22:03:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:50:03.007 22:03:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:03.007 22:03:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:03.007 22:03:36 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:50:03.007 22:03:36 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:50:03.007 22:03:36 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:50:03.007 22:03:36 -- paths/export.sh@5 -- $ export PATH 00:50:03.008 22:03:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:50:03.008 22:03:36 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:50:03.008 22:03:36 -- common/autobuild_common.sh@444 -- $ date +%s 00:50:03.008 22:03:36 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721081016.XXXXXX 00:50:03.008 22:03:36 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721081016.xnQkgD 00:50:03.008 22:03:36 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:50:03.008 22:03:36 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:50:03.008 22:03:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:50:03.008 22:03:36 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:50:03.008 22:03:36 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:50:03.008 22:03:36 -- common/autobuild_common.sh@460 -- $ get_config_params 00:50:03.008 22:03:36 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:50:03.008 22:03:36 -- common/autotest_common.sh@10 -- $ set +x 00:50:03.008 22:03:36 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:50:03.008 22:03:36 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:50:03.008 22:03:36 -- pm/common@17 -- $ local monitor 00:50:03.008 22:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:03.008 22:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:03.008 22:03:36 -- pm/common@25 -- $ sleep 1 00:50:03.008 22:03:36 -- pm/common@21 -- $ date +%s 00:50:03.008 22:03:36 -- pm/common@21 -- $ date +%s 00:50:03.008 22:03:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721081016 00:50:03.008 22:03:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721081016 00:50:03.267 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721081016_collect-vmstat.pm.log 00:50:03.267 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721081016_collect-cpu-load.pm.log 00:50:04.203 22:03:37 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:50:04.203 22:03:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 
00:50:04.203 22:03:37 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:50:04.203 22:03:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:50:04.203 22:03:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:50:04.203 22:03:37 -- spdk/autopackage.sh@19 -- $ timing_finish 00:50:04.203 22:03:37 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:50:04.203 22:03:37 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:50:04.203 22:03:37 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:50:04.203 22:03:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:50:04.203 22:03:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:50:04.203 22:03:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:50:04.203 22:03:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:50:04.203 22:03:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:04.203 22:03:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:50:04.203 22:03:37 -- pm/common@44 -- $ pid=181112 00:50:04.203 22:03:37 -- pm/common@50 -- $ kill -TERM 181112 00:50:04.203 22:03:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:04.203 22:03:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:50:04.203 22:03:37 -- pm/common@44 -- $ pid=181114 00:50:04.203 22:03:37 -- pm/common@50 -- $ kill -TERM 181114 00:50:04.203 + [[ -n 2443 ]] 00:50:04.203 + sudo kill 2443 00:50:04.203 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:50:04.214 [Pipeline] } 00:50:04.234 [Pipeline] // timeout 00:50:04.240 [Pipeline] } 00:50:04.264 [Pipeline] // stage 00:50:04.270 [Pipeline] } 00:50:04.285 [Pipeline] // catchError 00:50:04.296 [Pipeline] stage 00:50:04.299 [Pipeline] { (Stop VM) 00:50:04.314 [Pipeline] sh 00:50:04.593 + vagrant halt 00:50:07.874 ==> default: Halting domain... 00:50:17.853 [Pipeline] sh 00:50:18.137 + vagrant destroy -f 00:50:21.431 ==> default: Removing domain... 00:50:22.009 [Pipeline] sh 00:50:22.285 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output 00:50:22.292 [Pipeline] } 00:50:22.303 [Pipeline] // stage 00:50:22.307 [Pipeline] } 00:50:22.320 [Pipeline] // dir 00:50:22.324 [Pipeline] } 00:50:22.336 [Pipeline] // wrap 00:50:22.342 [Pipeline] } 00:50:22.352 [Pipeline] // catchError 00:50:22.360 [Pipeline] stage 00:50:22.361 [Pipeline] { (Epilogue) 00:50:22.372 [Pipeline] sh 00:50:22.649 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:50:49.191 [Pipeline] catchError 00:50:49.193 [Pipeline] { 00:50:49.209 [Pipeline] sh 00:50:49.490 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:50:49.490 Artifacts sizes are good 00:50:49.499 [Pipeline] } 00:50:49.517 [Pipeline] // catchError 00:50:49.530 [Pipeline] archiveArtifacts 00:50:49.536 Archiving artifacts 00:50:50.007 [Pipeline] cleanWs 00:50:50.016 [WS-CLEANUP] Deleting project workspace... 00:50:50.016 [WS-CLEANUP] Deferred wipeout is used... 00:50:50.021 [WS-CLEANUP] done 00:50:50.023 [Pipeline] } 00:50:50.049 [Pipeline] // stage 00:50:50.068 [Pipeline] } 00:50:50.095 [Pipeline] // node 00:50:50.099 [Pipeline] End of Pipeline 00:50:50.124 Finished: SUCCESS